diff --git a/spaces/1phancelerku/anime-remove-background/Catch and Evolve Monsters in Monster Squad Rush - Download APK Now.md b/spaces/1phancelerku/anime-remove-background/Catch and Evolve Monsters in Monster Squad Rush - Download APK Now.md deleted file mode 100644 index f30cb9ada3314d478d5e3da4875316104bc248cf..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Catch and Evolve Monsters in Monster Squad Rush - Download APK Now.md +++ /dev/null @@ -1,125 +0,0 @@ - -

Monster Squad Rush: A Guide to Download and Play the Game

-

Do you love catching, training, and battling with monsters? If so, you might want to check out Monster Squad Rush, a new game that combines elements of Pokemon, auto-runners, and monster fight games. In this game, you will run through various tracks filled with obstacles, gems, and balls. You will use the balls to catch different monsters and build your own team. You will also train and evolve your monsters to make them stronger and more powerful. Then, you will face other monster trainers in epic battles at the end of each level. Along the way, you will complete challenges and unlock rewards that will help you progress in the game.

-

-

Monster Squad Rush is a fun, addictive, and colorful game that will appeal to fans of monster-catching games. It has simple controls, a variety of monsters, and exciting gameplay. It is also free to play, although it does have some ads and in-app purchases. If you are interested in playing this game, you might be wondering how to download it on your device. In this article, we will show you how to download Monster Squad Rush APK for Android, how to play it on iOS, and how to play it on PC. We will also give you some game features, tips and tricks, and a game review of Monster Squad Rush. Let's get started!

-

How to Download Monster Squad Rush APK for Android

-

If you want to play Monster Squad Rush on your Android device, you will need to download its APK file from a third-party source. APK stands for Android Package Kit, which is a file format that contains all the necessary components for installing an app on Android devices. However, not all APK files are safe or compatible with your device, so you need to be careful when downloading them. Here are the steps to download Monster Squad Rush APK for Android:

-
    -
  1. Go to APKCombo.com, which is a reliable website that offers free APK downloads for various apps and games.
  2. -
  3. Search for Monster Squad Rush in the search bar or browse through the categories until you find it.
  4. -
  5. Choose the latest version of the game (1.3.2 as of June 2023) and click on Download APK
  6. -
  7. Allow unknown sources on your device by going to Settings > Security > Unknown Sources and toggling it on. This will enable you to install apps from sources other than the Google Play Store.
  8. -
  9. Install the APK file by tapping on it and following the instructions on the screen. You might need to grant some permissions to the app before installing it.
  10. -
-

Congratulations, you have successfully downloaded and installed Monster Squad Rush APK for Android. You can now launch the game and enjoy the monster-catching action.
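If you would rather push the file from a computer, the same APK can also be sideloaded over USB with adb. This is only an optional alternative sketch: it assumes USB debugging is enabled on the phone, that adb (Android platform-tools) is installed on the computer, and it uses a placeholder file name — substitute whatever name the downloaded APK actually has.

```
# Optional sideload route (assumes USB debugging is on and adb is installed).
# The file name below is a placeholder for the APK downloaded from APKCombo.
adb devices                             # confirm the phone shows up as "device"
adb install -r monster-squad-rush.apk   # -r reinstalls/updates if already present
```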

-

How to Play Monster Squad Rush on iOS

-

If you have an iOS device, such as an iPhone or an iPad, you can play Monster Squad Rush without downloading any APK files. The game is available on the App Store, which is the official source of apps and games for iOS devices. Here are the steps to play Monster Squad Rush on iOS:

-
    -
  1. Go to the App Store and search for Monster Squad Rush in the search bar or browse through the categories until you find it.
  2. -
  3. Tap on Get and install the game on your device. You might need to enter your Apple ID and password or use Touch ID or Face ID to confirm the installation.
  4. -
  5. Launch the game and enjoy the monster-catching action.
  6. -
-

That's it, you have successfully installed and played Monster Squad Rush on iOS. You can now run, catch, train, and battle with your monsters.

-


-

How to Play Monster Squad Rush on PC

-

If you prefer playing games on a bigger screen, you can also play Monster Squad Rush on your PC. However, since the game is designed for mobile devices, you will need to use an Android emulator to run it on your PC. An Android emulator is software that mimics the Android operating system on your PC, allowing you to run Android apps and games on your computer. There are many Android emulators available online, but we recommend using BlueStacks or NoxPlayer, which are two of the most popular and reliable ones. Here are the steps to play Monster Squad Rush on PC using an Android emulator:

-
    -
  1. Download and install an Android emulator such as BlueStacks or NoxPlayer on your PC. Follow the instructions on their websites to complete the installation process.
  2. -
  3. Launch the emulator and sign in with your Google account. This will give you access to the Google Play Store and other Google services on the emulator.
  4. -
  5. Go to the Google Play Store and search for Monster Squad Rush in the search bar or browse through the categories until you find it.
  6. -
  7. Install the game and start playing on your PC.
  8. -
-

Voila, you have successfully played Monster Squad Rush on PC using an Android emulator. You can now enjoy the game on a larger screen and with better controls.

-

Game Features of Monster Squad Rush

-

Now that you know how to download and play Monster Squad Rush on different devices, let's take a look at some of the game features that make it fun and exciting. Here are some of the game features of Monster Squad Rush:

-

Collect Monster Balls and Catch Monsters

-

The main goal of Monster Squad Rush is to collect as many monster balls as possible while running through various tracks. Monster balls are spherical items that contain different types of monsters inside them. You can use these balls to catch monsters and add them to your team. There are over 100 different monsters in the game, each with their own unique appearance, abilities, and attributes. You can catch common, rare, epic, or legendary monsters depending on the color of the ball. The rarer the ball, the more powerful the monster inside it.

-

Train and Evolve Monsters to Make Them Stronger

-

Once you catch a monster, you can train it and evolve it to make it stronger and more powerful. You can train your monsters by feeding them with food items that you collect during your runs. Feeding your monsters will increase their level and stats, such as HP, attack, defense, speed, and skill. You can also evolve your monsters by using evolution stones that you obtain from completing challenges or buying them with gems. Evolving your monsters will change their appearance, increase their stats, and unlock new skills for them.

-

Build a Team of Monsters and Compete in Battles

-

You can build a team of up to four monsters and compete in battles against other monster trainers at the end of each level. You can choose which monsters to include in your team based on their type, attribute, skill, and compatibility. Each monster has a type (fire, water, grass, electric, or dark) that determines its strength and weakness against other types. Each monster also has an attribute (red, blue, green, yellow, or purple) that affects its compatibility with other monsters in your team. You can see the compatibility level by looking at the hearts above your monsters' heads. The more hearts, the better the compatibility. Having a high compatibility will boost your monsters' stats and skills during battles. Each monster also has a skill that can be activated by tapping on it during battles. Skills can have various effects, such as dealing damage, healing, buffing, debuffing, or stunning the enemy.

-

Complete Challenges and Unlock Rewards

-

As you play Monster Squad Rush, you will encounter various challenges that will test your skills and abilities. Challenges are tasks that you need to complete within a certain time limit or number of runs. For example, you might need to catch a specific monster, collect a certain amount of gems, or defeat a certain boss. Completing challenges will reward you with various items, such as food, evolution stones, gems, or coins. You can use these items to train, evolve, or buy more monsters for your team. You can also unlock new tracks, modes, and features by completing challenges.

-

Game Tips and Tricks for Monster Squad Rush

-

Now that you know some of the game features of Monster Squad Rush, let's move on to some game tips and tricks that will help you play better and have more fun. Here are some game tips and tricks for Monster Squad Rush:

-

Focus on Collecting Balls Rather Than Gems

-

While running through the tracks, you will see two kinds of items: balls and gems. Balls are used to catch monsters, while gems are used to buy items or upgrade your power-ups. Although both are important, you should prioritize collecting balls over gems. This is because balls are rarer and more valuable than gems, and they will help you catch more monsters for your team. Gems are more abundant and easier to obtain, and you can always watch ads or complete challenges to get more of them.

-

Always Upgrade Your Monsters and Power Up at the Start of a Run

-

Before you start a run, you should always upgrade your monsters and power up your team. Upgrading your monsters will increase their level and stats, making them stronger and more durable during battles. Powering up your team will give them a temporary boost in speed, attack, defense, or skill at the start of a run. You can upgrade your monsters by feeding them with food items that you collect or buy with gems. You can power up your team by spending coins that you earn from completing runs or challenges.

-

Use Your Monsters to Pick Up More Items on the Track

-

While running through the tracks, you can use your monsters to pick up more items on the track. You can do this by tapping on your monsters to make them jump or fly over obstacles and reach higher places. You can also swipe left or right on the screen to make your monsters move sideways and collect items on the sides of the track. Using your monsters to pick up more items will help you get more balls, gems, food, evolution stones, and power-ups.

-

Choose the Best Option Between Two Gates

-

At the end of each track, you will encounter two gates that lead to different paths. One gate will have a monster icon on it, while the other gate will have a gem icon on it. The monster gate will lead you to a battle against another monster trainer, while the gem gate will lead you to a bonus stage where you can collect more gems. You should choose the best option depending on your situation and preference. If you want to catch more monsters and test your skills in battles, you should choose the monster gate. If you want to get more gems and avoid battles, you should choose the gem gate.

-

Tap Hard During Boss Fights and Filling Up Bars

-

During boss fights and bar-filling mini-games, you will need to tap hard on the screen to deal damage or fill up the bar faster. Boss fights are special battles that occur at the end of each world or mode. You will face a powerful boss monster that has a lot of HP and skills. To defeat the boss monster, you will need to tap hard on the screen to attack it with your monsters' skills. Bar filling is a mini-game that occurs randomly during runs or battles. You will see a bar on the screen with a marker on it. To fill up the bar, you need to tap hard on the screen when the marker is in the green zone. Filling up the bar will give you a bonus effect, such as healing, buffing, or stunning the enemy. Tapping hard during boss fights and bar-filling mini-games will help you win more easily and earn more rewards.

-

Game Review of Monster Squad Rush

-

Monster Squad Rush is a game that has a lot of potential and appeal, but also some flaws and drawbacks. Here are some of the pros and cons of Monster Squad Rush:

-

Pros: Fun, Addictive, Colorful, Variety of Monsters, Easy Controls, Free to Play

-

Monster Squad Rush is a game that is fun, addictive, colorful, and has a variety of monsters to catch and train. The game is easy to play, with simple controls that only require tapping or swiping on the screen. The game is also free to play, which means you can enjoy it without spending any money.

-

Cons: Repetitive, Ads, Bugs, Limited Content, Pay to Win Elements

-

Monster Squad Rush is a game that is repetitive, has ads, bugs, limited content, and pay to win elements. The game can get boring after a while, as you run through the same tracks and face the same enemies over and over again. The game also has ads that pop up frequently and interrupt your gameplay. The game has some bugs and glitches that can affect your performance or progress in the game. The game also has limited content, as there are only a few worlds and modes to play. The game also has pay to win elements, as some of the best monsters and items can only be obtained by spending real money.

-

Conclusion

-

Monster Squad Rush is a game that combines elements of Pokemon, auto-runners, and monster fight games. It is a fun, addictive, and colorful game that will appeal to fans of monster-catching games. It has simple controls, a variety of monsters, and exciting gameplay. It is also free to play, although it does have some ads and in-app purchases. You can download and play Monster Squad Rush on Android, iOS, or PC using the methods we showed you in this article. You can also use the game features, tips and tricks we gave you to enhance your gaming experience. If you are looking for a new game to try out, you might want to give Monster Squad Rush a shot.

-

FAQs

-

Here are some of the frequently asked questions about Monster Squad Rush:

-

Q: How many monsters are there in Monster Squad Rush?

-

A: There are over 100 different monsters in Monster Squad Rush, each with their own unique appearance, abilities, and attributes. You can catch common, rare, epic, or legendary monsters depending on the color of the ball.

-

Q: How do I evolve my monsters in Monster Squad Rush?

-

A: You can evolve your monsters by using evolution stones that you obtain from completing challenges or buying them with gems. Evolving your monsters will change their appearance, increase their stats, and unlock new skills for them.

-

Q: How do I get more gems in Monster Squad Rush?

-

A: You can get more gems in Monster Squad Rush by collecting them during your runs or bonus stages, completing challenges or achievements, watching ads or videos, or buying them with real money.

-

Q: How do I get more balls in Monster Squad Rush?

-

A: You can get more balls in Monster Squad Rush by collecting them during your runs, completing challenges or achievements, or buying them with gems. You can also get more balls by using the ball magnet power-up, which will attract more balls to you during your runs.

-

Q: How do I get more coins in Monster Squad Rush?

-

A: You can get more coins in Monster Squad Rush by completing runs or battles, completing challenges or achievements, or watching ads or videos. You can also get more coins by using the coin magnet power-up, which will attract more coins to you during your runs.

-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download Drift Wars MOD APK and Join the Online Drifting Community.md b/spaces/1phancelerku/anime-remove-background/Download Drift Wars MOD APK and Join the Online Drifting Community.md deleted file mode 100644 index 991aca2291b80a0274c9187c619be12a5a560f15..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Drift Wars MOD APK and Join the Online Drifting Community.md +++ /dev/null @@ -1,142 +0,0 @@ -
-

Drift Wars Mod APK AN1: A Guide for Drifting Enthusiasts

-

If you are a fan of racing games, especially those that involve drifting, you might have heard of Drift Wars, a popular online multiplayer drifting game that lets you compete with other players around the world. But did you know that there is a way to enjoy this game even more? In this article, we will tell you everything you need to know about Drift Wars Mod APK AN1, a modified version of the original game that gives you unlimited money and unlocked features. We will also show you how to download and install it on your device, how to play it, and what are the best tips and tricks to master drifting and win races. So buckle up and get ready for some adrenaline-filled drifting action!

-

What is Drift Wars?

-

Drift Wars is a free-to-play 3D drifting game developed by Zero Four LLC and released in December 2015. It is one of the most realistic and immersive drifting games available on mobile devices, as it features:

-

-

A multiplayer drifting game with realistic physics and graphics

-

In Drift Wars, you can play online against millions of players from all over the world, or join a drifting club and practice with your teammates. You can also challenge your friends in private lobbies or join tournaments of up to 32 players. The game uses realistic physics and graphics to simulate the feeling of drifting on various tracks and arenas. You can see the smoke, sparks, and flames from your tires as you drift around corners and obstacles. You can also customize your car's appearance and performance with different parts, colors, decals, masks, and effects.

-

A variety of cars, tracks, and modes to choose from

-

Drift Wars offers a wide range of cars to suit your preferences and style. You can choose from over 20 licensed cars from brands like Toyota, Mazda, Nissan, Subaru, BMW, Ford, and more. Each car has its own attributes and stats that affect its handling, speed, acceleration, braking, and driftability. You can also unlock more cars by completing challenges or buying them with in-game currency. The game also features over 15 exciting tracks from exotic locations like Dubai, Japan, Poland, and more. Each track has its own layout, obstacles, weather conditions, and time of day. You can also explore different modes like Career Mode, Quick Race Mode, Sandbox Mode, Free Ride Mode, Solo Run Mode, Time Attack Mode, Gymkhana Mode, and more.

-

A cross-platform game that supports Android, iOS, and PC

-

One of the best things about Drift Wars is that it supports cross-platform play between Android, iOS, and PC devices. This means that you can play with anyone regardless of their device or platform. You can also sync your progress across different devices using your Facebook account. The game also supports MOGA controllers for a more console-like experience on your mobile device.

-

What is Drift Wars Mod APK AN1?

-

Drift Wars Mod APK AN1 is a modified version of the original game that offers unlimited money and unlocked features. This means that you can buy any car, part, or track you want without worrying about the cost. You can also access all the modes and features that are normally locked or restricted in the original game. For example, you can play Sandbox Mode without having to complete Career Mode first, or you can use any car in any track without having to unlock them first.

-


-

How to download and install Drift Wars Mod APK AN1 on your device

-

If you want to try Drift Wars Mod APK AN1, you will need to download and install it on your device manually. Here are the steps you need to follow:

-
    -
  1. Go to a trusted website that offers the Drift Wars Mod APK AN1 download link. Make sure you download the latest version of the mod, which is v1.1.6 as of June 2023.
  2. -
  3. Before you install the mod, you need to uninstall the original Drift Wars game from your device if you have it. This is to avoid any conflicts or errors between the two versions.
  4. -
  5. After you uninstall the original game, go to your device settings and enable the option to install apps from unknown sources. This will allow you to install the mod apk file that you downloaded.
  6. -
  7. Locate the mod apk file in your device storage and tap on it to start the installation process. Follow the instructions on the screen and wait for the installation to finish.
  8. -
  9. Once the installation is done, you can launch Drift Wars Mod APK AN1 from your app drawer and enjoy the game with unlimited money and unlocked features.
  10. -
-

The benefits and risks of using Drift Wars Mod APK AN1

-

Using Drift Wars Mod APK AN1 can be very fun and satisfying, as you can experience the game without any limitations or restrictions. You can buy any car, part, or track you want, customize your car as much as you like, and play any mode or feature you want. You can also compete with other players online with your modded cars and show off your drifting skills.

-

However, using Drift Wars Mod APK AN1 also comes with some risks and drawbacks that you should be aware of. For one thing, using a modded version of the game may violate the terms and conditions of the game developer and publisher, Zero Four LLC. This means that they may ban your account or take legal action against you if they find out that you are using a modded version of their game. For another thing, using a modded version of the game may affect your device's performance and security, as it may contain viruses, malware, or spyware that can harm your device or steal your personal information. Therefore, you should always download and install Drift Wars Mod APK AN1 from a trusted and reputable source, and scan it with an antivirus program before installing it.

-

How to play Drift Wars Mod APK AN1?

-

If you have played the original Drift Wars game before, then playing Drift Wars Mod APK AN1 should be easy for you, as it has the same gameplay and mechanics. However, if you are new to the game or want to improve your drifting skills, here are some tips and tricks that can help you:

-

The basic controls and mechanics of drifting

-

The basic controls of Drift Wars Mod APK AN1 are simple and intuitive. You can use the on-screen buttons or tilt your device to steer your car left or right. You can also use the accelerator pedal to speed up or the brake pedal to slow down. To drift, you need to press and hold the handbrake button while turning your car at high speed. This will make your car slide sideways and create smoke from your tires. The longer and smoother you drift, the more points you will earn.

-

The mechanics of drifting in Drift Wars Mod APK AN1 are realistic and challenging. You need to consider factors like speed, angle, timing, traction, and balance when drifting. You also need to adjust your car's settings like suspension, tire pressure, camber, toe, differential, and more to suit your preferences and style. You can also use different driving techniques like clutch kicking, feinting, braking drifts, power oversteers, e-brake drifts, and more to perform different types of drifts.

-

The tips and tricks to master drifting and win races

-

To master drifting and win races in Drift Wars Mod APK AN1, you need to practice a lot and learn from your mistakes. You also need to follow some tips and tricks that can give you an edge over your opponents. Here are some of them:

- -

The best cars, parts, and tracks to use in Drift Wars Mod APK AN1

-

There is no single answer to which cars, parts, and tracks are best in Drift Wars Mod APK AN1, as it depends on your personal preference and style, but here are some suggestions that might help you:

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Car | Part | Track |
| --- | --- | --- |
| Toyota AE86 | Turbocharger | Tokyo Drift |
| Mazda RX-7 | Nitrous Oxide | Dubai Desert |
| Nissan Skyline GT-R | Spoiler | New York City |
| Subaru Impreza WRX STI | All-Wheel Drive | Poland Winter |
| BMW M3 E46 | Drift Tires | London Bridge |
-

These are just some examples of cars, parts, and tracks that might suit your drifting style and the track you are racing on. You can try them out or find your own favorites.

-

Conclusion

-

In conclusion, Drift Wars Mod APK AN1 is a modified version of the original Drift Wars game that offers unlimited money and unlocked features. It is a realistic and immersive drifting game that lets you compete with other players online or join a drifting club. It also features a variety of cars, parts, tracks, modes, and features that you can choose from depending on your preference and style. To play Drift Wars Mod APK AN1, you need to download and install it on your device manually from a trusted source. You also need to follow some tips and tricks to master drifting and win races.

-

If you are a drifting enthusiast who wants to enjoy Drift Wars without any limitations or restrictions, then Drift Wars Mod APK AN1 is the game for you. Download it now and experience the thrill of drifting!

-

Five unique FAQs about Drift Wars Mod APK AN1

-
    -
  1. Q: Is Drift Wars Mod APK AN1 safe to use? -A: Drift Wars Mod APK AN1 is generally safe to use, as long as you download and install it from a trusted and reputable source. However, you should always scan it with an antivirus program before installing it, and be aware of the risks and drawbacks of using a modded version of the game, such as violating the terms and conditions of the game developer and publisher, or affecting your device's performance and security.
  2. -
  3. Q: How can I update Drift Wars Mod APK AN1? -A: Drift Wars Mod APK AN1 is not available on the official app stores, so you cannot update it automatically. You will need to check the website where you downloaded it from for any updates, and download and install them manually. You may also need to uninstall the previous version of the mod before installing the new one.
  4. -
  5. Q: Can I play Drift Wars Mod APK AN1 offline? -A: Drift Wars Mod APK AN1 requires an internet connection to play, as it is a multiplayer game that connects you with other players online. However, you can play some modes offline, such as Sandbox Mode, Free Ride Mode, Solo Run Mode, or Time Attack Mode.
  6. -
  7. Q: Can I sync my progress in Drift Wars Mod APK AN1 with the original Drift Wars game? -A: No, you cannot sync your progress in Drift Wars Mod APK AN1 with the original Drift Wars game, as they are different versions of the game. You will need to start from scratch if you switch between the two versions.
  8. -
  9. Q: Can I use Drift Wars Mod APK AN1 on my PC? -A: Yes, you can use Drift Wars Mod APK AN1 on your PC, as it supports cross-platform play between Android, iOS, and PC devices. You will need to use an Android emulator program on your PC, such as BlueStacks or NoxPlayer, to run the mod apk file. You can also use a MOGA controller for a more console-like experience on your PC.
  10. -

-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download Kora Live APK and Stream Your Favorite Sports Channels Anytime Anywhere.md b/spaces/1phancelerku/anime-remove-background/Download Kora Live APK and Stream Your Favorite Sports Channels Anytime Anywhere.md deleted file mode 100644 index 42764c0488d18f34037cae60c057159a7b60012f..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Kora Live APK and Stream Your Favorite Sports Channels Anytime Anywhere.md +++ /dev/null @@ -1,149 +0,0 @@ -
-

Download Kora Live Apk: The Best App for Streaming Live Football Matches

-

If you are a football fan, you probably want to watch your favorite teams and leagues live on your smartphone. However, most of the official streaming services are expensive, require a subscription, or are not available in your region. That's why you need Kora Live Apk, a free app that lets you stream live football matches from various channels, including Bein Sports Arab, without any hassle. In this article, we will tell you everything you need to know about Kora Live Apk, how to download and install it, why you should use it, how to use it, and some alternatives to it.

-

What is Kora Live Apk?

-

Kora Live Apk is one of the best apps for streaming live football matches on your smartphone. It offers a wide range of channels that cover various leagues and competitions, such as the UEFA Champions League, La Liga, Bundesliga, and Arab League. You can watch the matches in high quality and with Arabic commentary. You can also choose the language and quality of the stream according to your preference. Kora Live Apk is easy to use, fast, and reliable. You don't need to sign up or pay anything to use it. All you need is a stable internet connection and some storage space on your device.

-

-

Features of Kora Live Apk

-

Some of the features that make Kora Live Apk stand out from other streaming apps are its wide lineup of channels covering major leagues and competitions, high-quality streams with Arabic commentary, the option to change the language and quality of each stream, built-in schedule and search features for finding matches, and the fact that it is completely free and requires no sign-up.

- -

How to Download and Install Kora Live Apk

-

To download and install Kora Live Apk on your smartphone, follow these steps:

-
    -
  1. Go to [this link] and click on the download button to get the latest version of Kora Live Apk.
  2. -
  3. Once the download is complete, go to your device settings and enable the installation of apps from unknown sources.
  4. -
  5. Locate the downloaded file in your file manager and tap on it to start the installation process.
  6. -
  7. Follow the instructions on the screen and wait for the installation to finish.
  8. -
  9. Launch the app from your app drawer and enjoy streaming live football matches on your smartphone.
  10. -
-

Why You Should Use Kora Live Apk

Advantages of Kora Live Apk

-

There are many reasons why you should use Kora Live Apk to stream live football matches on your smartphone. Some of the advantages are:

- -

Disadvantages of Kora Live Apk

-

However, Kora Live Apk is not perfect and has some drawbacks that you should be aware of. Some of the disadvantages are:

- -

How to Use Kora Live Apk

-

Using Kora Live Apk is very easy and straightforward. Here are some tips on how to use it:

-

How to Watch Live Football Matches on Kora Live Apk

-

To watch live football matches on Kora Live Apk, follow these steps:

-
    -
  1. Launch the app from your app drawer and grant it the necessary permissions.
  2. -
  3. Select the channel that broadcasts the match you want to watch from the list of available channels.
  4. -
  5. If the channel is not available, use the search feature to find it by typing its name or keyword.
  6. -
  7. If the match has not started yet, wait for it to begin or check the schedule feature to see when it will start.
  8. -
  9. If the match has started, tap on the play button to start streaming it on your smartphone.
  10. -
  11. If you want to pause, resume, or stop the stream, use the controls on the screen.
  12. -
-

How to Change the Language and Quality of the Stream on Kora Live Apk

-

To change the language and quality of the stream on Kora Live Apk, follow these steps:

-
    -
  1. While streaming a match, tap on the settings icon on the top right corner of the screen.
  2. -
  3. Select the language option and choose from Arabic, English, French, Spanish, or German.
  4. -
  5. Select the quality option and choose from HD, SD, or Low.
  6. -
  7. Tap on OK to save your changes and enjoy watching the match in your preferred language and quality.
  8. -
-

Alternatives to Kora Live Apk

-

If you are looking for some alternatives to Kora Live Apk, here are some apps that you can try:

-


-

Mobdro

-

Mobdro is a popular app that allows you to stream live TV channels from various categories, including sports, news, movies, music, and more. You can watch live football matches from different leagues and countries on Mobdro. You can also download your favorite streams for offline viewing. Mobdro is free but has a premium version that offers more features and removes ads. You can download Mobdro from [this link].

-

Live NetTV

-

Live NetTV is another app that lets you stream live TV channels from various genres, such as sports, entertainment, news, documentaries, and more. You can watch live football matches from different channels and regions on Live NetTV. You can also request for new channels or report broken links. Live NetTV is free and does not require any registration. You can download Live NetTV from [this link].

-

RedBox TV

-

Red Box TV is a third app that enables you to stream live TV channels from various categories, such as sports, movies, news, kids, and more. You can watch live football matches from different sources and languages on RedBox TV. You can also choose the video player of your choice and adjust the volume and brightness of the stream. RedBox TV is free and does not require any sign up. You can download RedBox TV from [this link].

-

Conclusion

-

Kora Live Apk is a great app for streaming live football matches on your smartphone. It offers a wide range of channels that cover various leagues and competitions, such as the UEFA Champions League, La Liga, Bundesliga, and Arab League. You can watch the matches in high quality and with Arabic commentary. You can also change the language and quality of the stream according to your preference. Kora Live Apk is easy to use, fast, and reliable. You don't need to sign up or pay anything to use it. All you need is a stable internet connection and some storage space on your device.

-

However, Kora Live Apk is not perfect and has some drawbacks that you should be aware of. It is not available on the Google Play Store and has to be downloaded from a third-party source, which may pose some security risks. It may contain some ads that may interrupt your viewing experience or redirect you to unwanted sites. It may not work properly on some devices or regions due to technical issues or geo-restrictions. It may not have all the channels or matches that you want to watch, especially if they are exclusive to certain platforms or providers. It may have some bugs or errors that may affect its performance or functionality.

-

If you are looking for some alternatives to Kora Live Apk, you can try Mobdro, Live NetTV, or RedBox TV. These apps also allow you to stream live TV channels from various categories, including sports, news, movies, music, and more. You can watch live football matches from different leagues and countries on these apps. You can also download your favorite streams for offline viewing, request for new channels or report broken links, choose the video player of your choice, and adjust the volume and brightness of the stream. These apps are free and do not require any registration.

-

We hope this article has helped you learn more about Kora Live Apk, how to download and install it, why you should use it, how to use it, and some alternatives to it. If you have any questions or feedback, please feel free to leave a comment below.

-

FAQs

-

Here are some frequently asked questions about Kora Live Apk:

-
    -
  1. Is Kora Live Apk safe to use?
  2. -

    Kora Live Apk is generally safe to use as long as you download it from a trusted source and scan it with an antivirus before installing it. However, since it is not an official app and has to be installed from a third-party source, there may be some security risks involved. Therefore, we recommend you to use it at your own discretion and responsibility.

    -
  3. Is Kora Live Apk legal to use?
  4. -

    Kora Live Apk is not legal to use in some countries or regions where streaming live TV channels without permission or license is prohibited by law. Therefore, we advise you to check the laws and regulations of your country or region before using Kora Live Apk. We also suggest you to use a VPN service to protect your privacy and security while using Kora Live Apk.

    -
  5. Does Kora Live Apk work on iOS devices?
  6. -

    No, Kora Live Apk does not work on iOS devices as it is only compatible with Android devices. However, you can use other apps that are similar to Kora Live Apk on iOS devices, such as [this app].

    -
  7. Does Kora Live Apk require root access?
  8. -

    No, Kora Live Apk does not require root access to work on your device. You can install and use it without rooting your device.

    -
  9. How can I update Kora Live Apk?
  10. -

    To update Kora Live Apk, you can either check for updates within the app or visit [this link] to download the latest version of Kora Live Apk.

    -

-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Epic Conquest 2 Mod Apk Free Purchase Enjoy the RPG Adventure with No Limits.md b/spaces/1phancelerku/anime-remove-background/Epic Conquest 2 Mod Apk Free Purchase Enjoy the RPG Adventure with No Limits.md deleted file mode 100644 index 3b57dacee42d8f1d034546cab1a577268fd6d280..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Epic Conquest 2 Mod Apk Free Purchase Enjoy the RPG Adventure with No Limits.md +++ /dev/null @@ -1,98 +0,0 @@ - -

Epic Conquest 2 Mod APK Free Purchase: How to Enjoy the Game Without Spending a Dime

-

If you are a fan of action RPG games and anime, you may have heard of Epic Conquest 2, a game created by a small indie team of four with burning passion and love for the genre. Epic Conquest 2 is an exciting and challenging game that offers a rich story, engaging gameplay, and stunning graphics. However, like many other games, it also has some premium features that require real money to unlock. If you are looking for a way to enjoy the game without spending a dime, you may be tempted to try Epic Conquest 2 Mod APK Free Purchase, a modified version of the game that allows you to buy anything in the game for free. But is it worth it? And what are the risks involved? In this article, we will tell you everything you need to know about Epic Conquest 2 Mod APK Free Purchase, how to install it, how to use it, and what are the pros and cons of using it.

-

What is Epic Conquest 2?

-

A classic action RPG with anime-style story telling

-

A sequel to the popular mobile game Epic Conquest

-

Epic Conquest 2 is a sequel to the popular mobile game Epic Conquest, which was released in 2017 and has over 5 million downloads on Google Play. The sequel continues the story of the first game, but with new characters, new settings, and new challenges. You can play Epic Conquest 2 without playing the first game, but you will appreciate the story more if you have played the first game. You can also import your save data from the first game to the second game, and get some rewards and bonuses for doing so.

-

-

An early access game on Steam and Google Play

-

Epic Conquest 2 is currently in early access, which means that the game is still in development and may have bugs, errors, or incomplete features. The developers are constantly working on improving the game and adding new content, and they welcome feedback and suggestions from the players. You can download and play Epic Conquest 2 for free on Steam or Google Play, but you can also support the developers by purchasing the premium currency or the supporter pack. The premium currency can be used to buy some items in the game, such as costumes, materials, or gems. The supporter pack can give you some exclusive items and benefits, such as a supporter badge, a unique costume, a special weapon, and more.

-

What is Epic Conquest 2 Mod APK Free Purchase?

-

A modified version of the game that allows free purchases of in-game items

-

Epic Conquest 2 Mod APK Free Purchase is a modified version of the game that allows you to buy any item in the game for free, without using any real money or premium currency. This means that you can get unlimited gold, gems, materials, skills, masteries, costumes, and anything else that you want in the game. You can also unlock all features and content of the game without waiting for updates or completing quests.

-

A way to bypass the premium currency and unlock all features

-

Some players may find the premium currency and the locked features of Epic Conquest 2 annoying or unfair, especially if they want to enjoy the game without spending any money. They may think that Epic Conquest 2 Mod APK Free Purchase is a way to bypass these limitations and unlock all features of the game. They may also think that Epic Conquest 2 Mod APK Free Purchase is a way to support the developers by playing their game and giving them feedback.

-

A risky download that may contain malware or viruses

-

How to Install Epic Conquest 2 Mod APK Free Purchase?

-

Download the mod apk file from a trusted source

-

If you still want to try Epic Conquest 2 Mod APK Free Purchase, you will need to download the mod apk file from a trusted source. You can search online for websites that offer mod apk files for various games, but be careful and do some research before downloading anything. Some websites may be fake or malicious, and some mod apk files may be outdated or incompatible with your device. You should also check the reviews and ratings of the mod apk file, and scan it with an antivirus software before installing it.

-


-

Enable unknown sources on your device settings

-

After downloading the mod apk file, you will need to enable unknown sources on your device settings. This will allow you to install apps that are not from the official app store, such as the mod apk file. To do this, go to your device settings, then security, then unknown sources, and toggle it on. You may see a warning message that installing apps from unknown sources may harm your device or data, but you can ignore it if you trust the source of the mod apk file.

-

Install the mod apk file and launch the game

-

Once you have enabled unknown sources, you can install the mod apk file by tapping on it and following the instructions. You may need to uninstall the original version of Epic Conquest 2 if you have it on your device, or use a different device or account to avoid conflicts. After installing the mod apk file, you can launch the game and enjoy the free purchases.

-

How to Use Epic Conquest 2 Mod APK Free Purchase?

-

Access the in-game shop and select any item you want

-

To use Epic Conquest 2 Mod APK Free Purchase, you just need to access the in-game shop and select any item you want. You can buy gold, gems, materials, skills, masteries, costumes, and anything else that is available in the shop. You can also buy items that are normally locked or require premium currency.

-

Tap on the purchase button and confirm the transaction

-

After selecting an item, you just need to tap on the purchase button and confirm the transaction. You will not be charged any real money or premium currency for the purchase. Instead, you will see a message that says "Free Purchase" or something similar. You will then receive your item instantly in your inventory or character screen.

-

Enjoy your free item without spending any real money

-

What are the Benefits of Epic Conquest 2 Mod APK Free Purchase?

-

You can get unlimited gold, gems, and materials to upgrade your character and equipment

-

One of the benefits of Epic Conquest 2 Mod APK Free Purchase is that you can get unlimited gold, gems, and materials to upgrade your character and equipment. Gold is the main currency in the game, and you can use it to buy items, skills, masteries, and costumes. Gems are the premium currency in the game, and you can use them to buy special items, costumes, or materials. Materials are used to craft or enhance your equipment, such as weapons, armor, accessories, or potions. With Epic Conquest 2 Mod APK Free Purchase, you can get as much gold, gems, and materials as you want, and upgrade your character and equipment to the max level.

-

You can unlock all skills, masteries, and costumes for your character

-

Another benefit of Epic Conquest 2 Mod APK Free Purchase is that you can unlock all skills, masteries, and costumes for your character. Skills are the abilities that you can use in combat, such as attacks, spells, or buffs. Masteries are the passive bonuses that you can activate to enhance your skills or stats. Costumes are the outfits that you can wear to change your appearance or get some extra effects. With Epic Conquest 2 Mod APK Free Purchase, you can unlock all skills, masteries, and costumes for your character, and customize them according to your preference. You can also mix and match different skills, masteries, and costumes to create your own unique build.

-

You can experience the full story and content of the game without waiting for updates

-

A final benefit of Epic Conquest 2 Mod APK Free Purchase is that you can experience the full story and content of the game without waiting for updates. Epic Conquest 2 has a captivating story that is done right, with cutscenes and character expressions that enrich the story telling. You will encounter childhood friends, careless adventurers, mysterious mages, powerful enemies, ancient secrets, and epic battles. You will also explore various locations, such as cities, forests, deserts, mountains, dungeons, and more. With Epic Conquest 2 Mod APK Free Purchase, you can access all chapters and quests of the story without waiting for the developers to release new updates. You can also enjoy all features and content of the game without any restrictions.

-

What are the Drawbacks of Epic Conquest 2 Mod APK Free Purchase?

-

You may lose your progress and data if the game updates or detects the mod

-

One of the drawbacks of Epic Conquest 2 Mod APK Free Purchase is that you may lose your progress and data if the game updates or detects the mod. Since Epic Conquest 2 is still in early access, the developers are constantly working on improving the game and adding new content. This means that they may release new updates that may change or fix some aspects of the game. If you update your game with Epic Conquest 2 Mod APK Free Purchase installed, you may encounter errors or crashes that may prevent you from playing the game. You may also lose your save data or progress if the update overwrites or deletes them. Moreover, if the developers detect that you are using a modded version of the game, they may ban your account or device from playing the game.

-

You may face legal issues or bans from the developers or publishers

-

Another drawback of Epic Conquest 2 Mod APK Free Purchase is that you may face legal issues or bans from the developers or publishers. Epic Conquest 2 is an intellectual property of Gaco Games and Persephone Media LLC. They have the exclusive rights to distribute and monetize their game. By using Epic Conquest 2 Mod APK Free Purchase, you are violating their terms of service and infringing on their intellectual property rights. You are also depriving them of their rightful revenue and support from their loyal fans. If they find out that you are using a modded version of their game, they may take legal action against you or ban your account or device from playing their game.

-

You may harm your device or compromise your security if the mod contains malware or viruses

-

A final risk is that the mod may contain malware, viruses, or spyware that can harm your device or steal your personal information, and it may expose you to other threats or dangers online, such as phishing, scams, or hackers. You should be careful and cautious when downloading or installing any mod apk file, and always protect your device and security with antivirus software and a VPN service.

-

Conclusion

-

Epic Conquest 2 Mod APK Free Purchase is a tempting option for fans of the game who want to enjoy it without spending any money. It allows you to buy any item in the game for free, and unlock all features and content of the game. However, it is also a risky and unethical choice that may ruin your gaming experience and expose you to dangers. It may cause errors or crashes in the game, or lose your progress and data. It may also face legal issues or bans from the developers or publishers. It may also harm your device or compromise your security if the mod contains malware or viruses. It is better to support the developers by playing the official version of the game and purchasing items legitimately. Epic Conquest 2 is a great game that deserves your respect and appreciation.

-

FAQs

-

Q: Is Epic Conquest 2 Mod APK Free Purchase safe to use?

-

A: No, Epic Conquest 2 Mod APK Free Purchase is not safe to use. It may contain malware or viruses that can harm your device or compromise your security. It may also cause errors or crashes in the game, or lose your progress and data. It may also face legal issues or bans from the developers or publishers.

-

Q: How can I support the developers of Epic Conquest 2?

-

A: You can support the developers of Epic Conquest 2 by playing the official version of the game and purchasing items legitimately. You can also leave a positive review and rating on Steam or Google Play, and share the game with your friends and family. You can also follow their social media accounts and join their community forums.

-

Q: What are some alternatives to Epic Conquest 2 Mod APK Free Purchase?

-

A: Some alternatives to Epic Conquest 2 Mod APK Free Purchase are playing the game normally and earning items through gameplay, using cheats or hacks that do not require downloading any mod apk file, or playing other similar games that are free or cheap.

-

Q: How can I download Epic Conquest 2 Mod APK Free Purchase?

-

A: You can download Epic Conquest 2 Mod APK Free Purchase by searching online for websites that offer mod apk files for various games. However, you should be careful and do some research before downloading anything, as some websites may be fake or malicious, and some mod apk files may be outdated or incompatible with your device. You should also check the reviews and ratings of the mod apk file, and scan it with an antivirus software before installing it.

-

Q: How can I uninstall Epic Conquest 2 Mod APK Free Purchase?

-

    A: You can uninstall Epic Conquest 2 Mod APK Free Purchase by going to your device settings, then Apps, then Epic Conquest 2, and tapping Uninstall. You may also need to delete any residual files or folders related to the mod apk from your device storage, and reinstall the original version of Epic Conquest 2 from Steam or Google Play if you want to play it again. A command-line alternative is sketched below.
    
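    If the app does not uninstall cleanly from the settings menu, a rough alternative is to remove the package over ADB from a computer with USB debugging enabled. The sketch below is only an illustration: `com.example.epicconquest2` is a hypothetical placeholder, not the game's real package ID, and it assumes the `adb` tool is installed and the device is connected.

    ```python
    # Minimal sketch: remove an Android package via adb, driven from Python.
    # Assumptions: adb is on PATH, USB debugging is enabled, device is connected.
    # NOTE: the package name below is a placeholder, not the game's real package ID.
    import subprocess

    PACKAGE = "com.example.epicconquest2"  # hypothetical; look up the actual package ID first

    def uninstall(package: str) -> None:
        # Check whether a package matching the name is installed on the device.
        listed = subprocess.run(
            ["adb", "shell", "pm", "list", "packages", package],
            capture_output=True, text=True, check=True,
        )
        if package not in listed.stdout:
            print(f"{package} is not installed")
            return
        # Remove the package; adb prints "Success" when the uninstall completes.
        result = subprocess.run(
            ["adb", "uninstall", package],
            capture_output=True, text=True,
        )
        print(result.stdout.strip() or result.stderr.strip())

    if __name__ == "__main__":
        uninstall(PACKAGE)
    ```

    After removing the modded package, reinstalling the official version from Google Play or Steam restores a clean copy of the game.
    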

    
-
-
\ No newline at end of file diff --git a/spaces/4Taps/SadTalker/src/face3d/models/arcface_torch/backbones/mobilefacenet.py b/spaces/4Taps/SadTalker/src/face3d/models/arcface_torch/backbones/mobilefacenet.py deleted file mode 100644 index 87731491d76f9ff61cc70e57bb3f18c54fae308c..0000000000000000000000000000000000000000 --- a/spaces/4Taps/SadTalker/src/face3d/models/arcface_torch/backbones/mobilefacenet.py +++ /dev/null @@ -1,130 +0,0 @@ -''' -Adapted from https://github.com/cavalleria/cavaface.pytorch/blob/master/backbone/mobilefacenet.py -Original author cavalleria -''' - -import torch.nn as nn -from torch.nn import Linear, Conv2d, BatchNorm1d, BatchNorm2d, PReLU, Sequential, Module -import torch - - -class Flatten(Module): - def forward(self, x): - return x.view(x.size(0), -1) - - -class ConvBlock(Module): - def __init__(self, in_c, out_c, kernel=(1, 1), stride=(1, 1), padding=(0, 0), groups=1): - super(ConvBlock, self).__init__() - self.layers = nn.Sequential( - Conv2d(in_c, out_c, kernel, groups=groups, stride=stride, padding=padding, bias=False), - BatchNorm2d(num_features=out_c), - PReLU(num_parameters=out_c) - ) - - def forward(self, x): - return self.layers(x) - - -class LinearBlock(Module): - def __init__(self, in_c, out_c, kernel=(1, 1), stride=(1, 1), padding=(0, 0), groups=1): - super(LinearBlock, self).__init__() - self.layers = nn.Sequential( - Conv2d(in_c, out_c, kernel, stride, padding, groups=groups, bias=False), - BatchNorm2d(num_features=out_c) - ) - - def forward(self, x): - return self.layers(x) - - -class DepthWise(Module): - def __init__(self, in_c, out_c, residual=False, kernel=(3, 3), stride=(2, 2), padding=(1, 1), groups=1): - super(DepthWise, self).__init__() - self.residual = residual - self.layers = nn.Sequential( - ConvBlock(in_c, out_c=groups, kernel=(1, 1), padding=(0, 0), stride=(1, 1)), - ConvBlock(groups, groups, groups=groups, kernel=kernel, padding=padding, stride=stride), - LinearBlock(groups, out_c, kernel=(1, 1), padding=(0, 0), stride=(1, 1)) - ) - - def forward(self, x): - short_cut = None - if self.residual: - short_cut = x - x = self.layers(x) - if self.residual: - output = short_cut + x - else: - output = x - return output - - -class Residual(Module): - def __init__(self, c, num_block, groups, kernel=(3, 3), stride=(1, 1), padding=(1, 1)): - super(Residual, self).__init__() - modules = [] - for _ in range(num_block): - modules.append(DepthWise(c, c, True, kernel, stride, padding, groups)) - self.layers = Sequential(*modules) - - def forward(self, x): - return self.layers(x) - - -class GDC(Module): - def __init__(self, embedding_size): - super(GDC, self).__init__() - self.layers = nn.Sequential( - LinearBlock(512, 512, groups=512, kernel=(7, 7), stride=(1, 1), padding=(0, 0)), - Flatten(), - Linear(512, embedding_size, bias=False), - BatchNorm1d(embedding_size)) - - def forward(self, x): - return self.layers(x) - - -class MobileFaceNet(Module): - def __init__(self, fp16=False, num_features=512): - super(MobileFaceNet, self).__init__() - scale = 2 - self.fp16 = fp16 - self.layers = nn.Sequential( - ConvBlock(3, 64 * scale, kernel=(3, 3), stride=(2, 2), padding=(1, 1)), - ConvBlock(64 * scale, 64 * scale, kernel=(3, 3), stride=(1, 1), padding=(1, 1), groups=64), - DepthWise(64 * scale, 64 * scale, kernel=(3, 3), stride=(2, 2), padding=(1, 1), groups=128), - Residual(64 * scale, num_block=4, groups=128, kernel=(3, 3), stride=(1, 1), padding=(1, 1)), - DepthWise(64 * scale, 128 * scale, kernel=(3, 3), stride=(2, 2), padding=(1, 1), groups=256), - 
Residual(128 * scale, num_block=6, groups=256, kernel=(3, 3), stride=(1, 1), padding=(1, 1)), - DepthWise(128 * scale, 128 * scale, kernel=(3, 3), stride=(2, 2), padding=(1, 1), groups=512), - Residual(128 * scale, num_block=2, groups=256, kernel=(3, 3), stride=(1, 1), padding=(1, 1)), - ) - self.conv_sep = ConvBlock(128 * scale, 512, kernel=(1, 1), stride=(1, 1), padding=(0, 0)) - self.features = GDC(num_features) - self._initialize_weights() - - def _initialize_weights(self): - for m in self.modules(): - if isinstance(m, nn.Conv2d): - nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - if m.bias is not None: - m.bias.data.zero_() - elif isinstance(m, nn.BatchNorm2d): - m.weight.data.fill_(1) - m.bias.data.zero_() - elif isinstance(m, nn.Linear): - nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - if m.bias is not None: - m.bias.data.zero_() - - def forward(self, x): - with torch.cuda.amp.autocast(self.fp16): - x = self.layers(x) - x = self.conv_sep(x.float() if self.fp16 else x) - x = self.features(x) - return x - - -def get_mbf(fp16, num_features): - return MobileFaceNet(fp16, num_features) \ No newline at end of file diff --git a/spaces/801artistry/RVC801/lib/infer_pack/models.py b/spaces/801artistry/RVC801/lib/infer_pack/models.py deleted file mode 100644 index ec107476df968e51aafc6c3d102a9ed8c53f141a..0000000000000000000000000000000000000000 --- a/spaces/801artistry/RVC801/lib/infer_pack/models.py +++ /dev/null @@ -1,1144 +0,0 @@ -import math, pdb, os -from time import time as ttime -import torch -from torch import nn -from torch.nn import functional as F -from lib.infer_pack import modules -from lib.infer_pack import attentions -from lib.infer_pack import commons -from lib.infer_pack.commons import init_weights, get_padding -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from lib.infer_pack.commons import init_weights -import numpy as np -from lib.infer_pack import commons - - -class TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class TextEncoder768(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - 
super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(768, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - 
super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class SineGen(torch.nn.Module): - """Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__( - self, - samp_rate, - harmonic_num=0, - sine_amp=0.1, - noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False, - ): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - if uv.device.type == "privateuseone": # for DirectML - uv = uv.float() - return uv - - def forward(self, f0, upp): - """sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0 = f0[:, None].transpose(1, 2) - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * ( - idx + 2 - ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - rad_values = (f0_buf / self.sampling_rate) % 1 ###%1意味着n_har的乘积无法后处理优化 - rand_ini = 
torch.rand( - f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device - ) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - tmp_over_one = torch.cumsum(rad_values, 1) # % 1 #####%1意味着后面的cumsum无法再优化 - tmp_over_one *= upp - tmp_over_one = F.interpolate( - tmp_over_one.transpose(2, 1), - scale_factor=upp, - mode="linear", - align_corners=True, - ).transpose(2, 1) - rad_values = F.interpolate( - rad_values.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose( - 2, 1 - ) ####### - tmp_over_one %= 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - sine_waves = torch.sin( - torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi - ) - sine_waves = sine_waves * self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate( - uv.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__( - self, - sampling_rate, - harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - # to produce sine waveforms - self.l_sin_gen = SineGen( - sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod - ) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp=None): - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - if self.is_half: - sine_wavs = sine_wavs.half() - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None # noise, uv - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = 
upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMs256NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = 
ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds - ): # 这里ds是id,[bs,1] - # print(1,pitch.shape)#[bs,t] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length) - pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size) - # print(-2,pitchf.shape,z_slice.shape) - o = self.dec(z_slice, pitchf, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, pitch, nsff0, sid, rate=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - if rate: - head = int(z_p.shape[2] * rate) - z_p = z_p[:, :, -head:] - x_mask = x_mask[:, :, -head:] - nsff0 = nsff0[:, -head:] - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec(z * x_mask, nsff0, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs768NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", 
gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds - ): # 这里ds是id,[bs,1] - # print(1,pitch.shape)#[bs,t] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length) - pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size) - # print(-2,pitchf.shape,z_slice.shape) - o = self.dec(z_slice, pitchf, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, pitch, nsff0, sid, rate=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - if rate: - head = int(z_p.shape[2] * rate) - z_p = z_p[:, :, -head:] - x_mask = x_mask[:, :, -head:] - nsff0 = nsff0[:, -head:] - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec(z * x_mask, nsff0, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs256NSFsid_nono(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr=None, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=False, - ) - self.dec = Generator( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, y, y_lengths, ds): # 
这里ds是id,[bs,1] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - o = self.dec(z_slice, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, sid, rate=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - if rate: - head = int(z_p.shape[2] * rate) - z_p = z_p[:, :, -head:] - x_mask = x_mask[:, :, -head:] - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec(z * x_mask, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs768NSFsid_nono(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr=None, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=False, - ) - self.dec = Generator( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, y, y_lengths, ds): # 这里ds是id,[bs,1] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - o = self.dec(z_slice, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, sid, rate=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z_p = 
(m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - if rate: - head = int(z_p.shape[2] * rate) - z_p = z_p[:, :, -head:] - x_mask = x_mask[:, :, -head:] - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec(z * x_mask, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class MultiPeriodDiscriminatorV2(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminatorV2, self).__init__() - # periods = [2, 3, 5, 7, 11, 17] - periods = [2, 3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - 
padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap \ No newline at end of file diff --git a/spaces/AB-TW/team-ai/embedding.py b/spaces/AB-TW/team-ai/embedding.py deleted file mode 100644 index e77a01af88315a3397205b379128099522d6c51f..0000000000000000000000000000000000000000 --- a/spaces/AB-TW/team-ai/embedding.py +++ /dev/null @@ -1,97 +0,0 @@ -from langchain import LLMChain, PromptTemplate -from langchain.document_loaders import NotionDirectoryLoader -from langchain.text_splitter import MarkdownTextSplitter, SpacyTextSplitter -from langchain.embeddings import HuggingFaceEmbeddings -from langchain.vectorstores import FAISS -from langchain.chains import RetrievalQA -from langchain.chains.question_answering import load_qa_chain - -from langchain.document_loaders import NotionDirectoryLoader -from langchain.memory import ConversationBufferMemory -from langchain.chains import ConversationalRetrievalChain -from langchain.agents import initialize_agent, AgentType, Tool, ZeroShotAgent, AgentExecutor - -from models import llm - - -class CustomEmbedding: - notionDirectoryLoader = NotionDirectoryLoader( - "/Users/peichao.dong/Documents/projects/dpc/ABstract/docs/pages") - embeddings = HuggingFaceEmbeddings() - - def calculateEmbedding(self): - documents = self.notionDirectoryLoader.load() - # text_splitter = SpacyTextSplitter( - # chunk_size=2048, pipeline="zh_core_web_sm", chunk_overlap=0) - - text_splitter = MarkdownTextSplitter( - chunk_size=2048, chunk_overlap=0) - texts = text_splitter.split_documents(documents) - - docsearch = FAISS.from_documents(texts, self.embeddings) - docsearch.save_local( - folder_path="./documents/abstract.faiss") - - - - def getFAQChain(self, llm=llm(temperature=0.7)): - memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True) - docsearch = FAISS.load_local( - "./documents/abstract.faiss", self.embeddings) - # retriever = VectorStoreRetriever(vectorstore=docsearch) - _template = """Given the following conversation and a follow up question, rephrase the follow up question to be a chinese standalone question. 
- - Chat History: - {chat_history} - Follow Up Input: {question} - Standalone question:""" - CONDENSE_QUESTION_PROMPT = PromptTemplate.from_template(_template) - question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT) - - doc_chain = load_qa_chain(llm, chain_type="stuff") - qa = ConversationalRetrievalChain( retriever= docsearch.as_retriever(search_kwargs={"k": 1}), - question_generator=question_generator, - combine_docs_chain=doc_chain, - memory=memory) - return qa - - def faq(self, input): - qa = self.getFAQChain() - response = qa({"question": f"{input}"}) - return response["answer"] - - def getFAQAgent(self): - tools = [Tool(name="ABstract system FAQ", func= self.faq, description="Useful for anwer questions about ABstract system")] - memory = ConversationBufferMemory(memory_key="chat_history") - - prefix = """Have a conversation with a human, answering the following questions as best you can. You have access to the following tools:""" - suffix = """The final Answer should be in Chines! Begin!" - - {chat_history} - Question: {input} - {agent_scratchpad}""" - - prompt = ZeroShotAgent.create_prompt( - tools, - prefix=prefix, - suffix=suffix, - input_variables=["input", "chat_history", "agent_scratchpad"] - ) - - llm_chain = LLMChain(llm=llm(), prompt=prompt) - agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools, verbose=True) - faq_agent = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True, memory=memory) - return faq_agent - # faq_agent = initialize_agent(tools= tools, llm=llm(), agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION, verbose=True) - - -if __name__ == "__main__": - customerEmbedding = CustomEmbedding() - customerEmbedding.calculateEmbedding() -# # customerEmbedding.calculateNotionEmbedding() - -# faq_chain = customerEmbedding.getFAQChain() -# result = faq_chain.run( -# "Smart Domain 分层架构") - -# print(result) diff --git a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/latent_diffusion/attention.py b/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/latent_diffusion/attention.py deleted file mode 100644 index 583dd169e7ec9502ee29faeb12689a46494838c0..0000000000000000000000000000000000000000 --- a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/latent_diffusion/attention.py +++ /dev/null @@ -1,468 +0,0 @@ -from inspect import isfunction -import math -import torch -import torch.nn.functional as F -from torch import nn -from einops import rearrange - -from audioldm.latent_diffusion.util import checkpoint - - -def exists(val): - return val is not None - - -def uniq(arr): - return {el: True for el in arr}.keys() - - -def default(val, d): - if exists(val): - return val - return d() if isfunction(d) else d - - -def max_neg_value(t): - return -torch.finfo(t.dtype).max - - -def init_(tensor): - dim = tensor.shape[-1] - std = 1 / math.sqrt(dim) - tensor.uniform_(-std, std) - return tensor - - -# feedforward -class GEGLU(nn.Module): - def __init__(self, dim_in, dim_out): - super().__init__() - self.proj = nn.Linear(dim_in, dim_out * 2) - - def forward(self, x): - x, gate = self.proj(x).chunk(2, dim=-1) - return x * F.gelu(gate) - - -class FeedForward(nn.Module): - def __init__(self, dim, dim_out=None, mult=4, glu=False, dropout=0.0): - super().__init__() - inner_dim = int(dim * mult) - dim_out = default(dim_out, dim) - project_in = ( - nn.Sequential(nn.Linear(dim, inner_dim), nn.GELU()) - if not glu - else GEGLU(dim, inner_dim) - ) - - self.net = nn.Sequential( - project_in, nn.Dropout(dropout), nn.Linear(inner_dim, 
dim_out) - ) - - def forward(self, x): - return self.net(x) - - -def zero_module(module): - """ - Zero out the parameters of a module and return it. - """ - for p in module.parameters(): - p.detach().zero_() - return module - - -def Normalize(in_channels): - return torch.nn.GroupNorm( - num_groups=32, num_channels=in_channels, eps=1e-6, affine=True - ) - - -class LinearAttention(nn.Module): - def __init__(self, dim, heads=4, dim_head=32): - super().__init__() - self.heads = heads - hidden_dim = dim_head * heads - self.to_qkv = nn.Conv2d(dim, hidden_dim * 3, 1, bias=False) - self.to_out = nn.Conv2d(hidden_dim, dim, 1) - - def forward(self, x): - b, c, h, w = x.shape - qkv = self.to_qkv(x) - q, k, v = rearrange( - qkv, "b (qkv heads c) h w -> qkv b heads c (h w)", heads=self.heads, qkv=3 - ) - k = k.softmax(dim=-1) - context = torch.einsum("bhdn,bhen->bhde", k, v) - out = torch.einsum("bhde,bhdn->bhen", context, q) - out = rearrange( - out, "b heads c (h w) -> b (heads c) h w", heads=self.heads, h=h, w=w - ) - return self.to_out(out) - - -class SpatialSelfAttention(nn.Module): - def __init__(self, in_channels): - super().__init__() - self.in_channels = in_channels - - self.norm = Normalize(in_channels) - self.q = torch.nn.Conv2d( - in_channels, in_channels, kernel_size=1, stride=1, padding=0 - ) - self.k = torch.nn.Conv2d( - in_channels, in_channels, kernel_size=1, stride=1, padding=0 - ) - self.v = torch.nn.Conv2d( - in_channels, in_channels, kernel_size=1, stride=1, padding=0 - ) - self.proj_out = torch.nn.Conv2d( - in_channels, in_channels, kernel_size=1, stride=1, padding=0 - ) - - def forward(self, x): - h_ = x - h_ = self.norm(h_) - q = self.q(h_) - k = self.k(h_) - v = self.v(h_) - - # compute attention - b, c, h, w = q.shape - q = rearrange(q, "b c h w -> b (h w) c") - k = rearrange(k, "b c h w -> b c (h w)") - w_ = torch.einsum("bij,bjk->bik", q, k) - - w_ = w_ * (int(c) ** (-0.5)) - w_ = torch.nn.functional.softmax(w_, dim=2) - - # attend to values - v = rearrange(v, "b c h w -> b c (h w)") - w_ = rearrange(w_, "b i j -> b j i") - h_ = torch.einsum("bij,bjk->bik", v, w_) - h_ = rearrange(h_, "b c (h w) -> b c h w", h=h) - h_ = self.proj_out(h_) - - return x + h_ - - -class CrossAttention(nn.Module): - """ - ### Cross Attention Layer - This falls-back to self-attention when conditional embeddings are not specified. 
- """ - - # use_flash_attention: bool = True - use_flash_attention: bool = False - def __init__( - self, - query_dim, - context_dim=None, - heads=8, - dim_head=64, - dropout=0.0, - is_inplace: bool = True, - ): - # def __init__(self, d_model: int, d_cond: int, n_heads: int, d_head: int, is_inplace: bool = True): - """ - :param d_model: is the input embedding size - :param n_heads: is the number of attention heads - :param d_head: is the size of a attention head - :param d_cond: is the size of the conditional embeddings - :param is_inplace: specifies whether to perform the attention softmax computation inplace to - save memory - """ - super().__init__() - - self.is_inplace = is_inplace - self.n_heads = heads - self.d_head = dim_head - - # Attention scaling factor - self.scale = dim_head**-0.5 - - # The normal self-attention layer - if context_dim is None: - context_dim = query_dim - - # Query, key and value mappings - d_attn = dim_head * heads - self.to_q = nn.Linear(query_dim, d_attn, bias=False) - self.to_k = nn.Linear(context_dim, d_attn, bias=False) - self.to_v = nn.Linear(context_dim, d_attn, bias=False) - - # Final linear layer - self.to_out = nn.Sequential(nn.Linear(d_attn, query_dim), nn.Dropout(dropout)) - - # Setup [flash attention](https://github.com/HazyResearch/flash-attention). - # Flash attention is only used if it's installed - # and `CrossAttention.use_flash_attention` is set to `True`. - try: - # You can install flash attention by cloning their Github repo, - # [https://github.com/HazyResearch/flash-attention](https://github.com/HazyResearch/flash-attention) - # and then running `python setup.py install` - from flash_attn.flash_attention import FlashAttention - - self.flash = FlashAttention() - # Set the scale for scaled dot-product attention. 
- self.flash.softmax_scale = self.scale - # Set to `None` if it's not installed - except ImportError: - self.flash = None - - def forward(self, x, context=None, mask=None): - """ - :param x: are the input embeddings of shape `[batch_size, height * width, d_model]` - :param cond: is the conditional embeddings of shape `[batch_size, n_cond, d_cond]` - """ - - # If `cond` is `None` we perform self attention - has_cond = context is not None - if not has_cond: - context = x - - # Get query, key and value vectors - q = self.to_q(x) - k = self.to_k(context) - v = self.to_v(context) - - # Use flash attention if it's available and the head size is less than or equal to `128` - if ( - CrossAttention.use_flash_attention - and self.flash is not None - and not has_cond - and self.d_head <= 128 - ): - return self.flash_attention(q, k, v) - # Otherwise, fallback to normal attention - else: - return self.normal_attention(q, k, v) - - def flash_attention(self, q: torch.Tensor, k: torch.Tensor, v: torch.Tensor): - """ - #### Flash Attention - :param q: are the query vectors before splitting heads, of shape `[batch_size, seq, d_attn]` - :param k: are the query vectors before splitting heads, of shape `[batch_size, seq, d_attn]` - :param v: are the query vectors before splitting heads, of shape `[batch_size, seq, d_attn]` - """ - - # Get batch size and number of elements along sequence axis (`width * height`) - batch_size, seq_len, _ = q.shape - - # Stack `q`, `k`, `v` vectors for flash attention, to get a single tensor of - # shape `[batch_size, seq_len, 3, n_heads * d_head]` - qkv = torch.stack((q, k, v), dim=2) - # Split the heads - qkv = qkv.view(batch_size, seq_len, 3, self.n_heads, self.d_head) - - # Flash attention works for head sizes `32`, `64` and `128`, so we have to pad the heads to - # fit this size. 
- if self.d_head <= 32: - pad = 32 - self.d_head - elif self.d_head <= 64: - pad = 64 - self.d_head - elif self.d_head <= 128: - pad = 128 - self.d_head - else: - raise ValueError(f"Head size ${self.d_head} too large for Flash Attention") - - # Pad the heads - if pad: - qkv = torch.cat( - (qkv, qkv.new_zeros(batch_size, seq_len, 3, self.n_heads, pad)), dim=-1 - ) - - # Compute attention - # $$\underset{seq}{softmax}\Bigg(\frac{Q K^\top}{\sqrt{d_{key}}}\Bigg)V$$ - # This gives a tensor of shape `[batch_size, seq_len, n_heads, d_padded]` - # TODO here I add the dtype changing - out, _ = self.flash(qkv.type(torch.float16)) - # Truncate the extra head size - out = out[:, :, :, : self.d_head].float() - # Reshape to `[batch_size, seq_len, n_heads * d_head]` - out = out.reshape(batch_size, seq_len, self.n_heads * self.d_head) - - # Map to `[batch_size, height * width, d_model]` with a linear layer - return self.to_out(out) - - def normal_attention(self, q: torch.Tensor, k: torch.Tensor, v: torch.Tensor): - """ - #### Normal Attention - - :param q: are the query vectors before splitting heads, of shape `[batch_size, seq, d_attn]` - :param k: are the query vectors before splitting heads, of shape `[batch_size, seq, d_attn]` - :param v: are the query vectors before splitting heads, of shape `[batch_size, seq, d_attn]` - """ - - # Split them to heads of shape `[batch_size, seq_len, n_heads, d_head]` - q = q.view(*q.shape[:2], self.n_heads, -1) # [bs, 64, 20, 32] - k = k.view(*k.shape[:2], self.n_heads, -1) # [bs, 1, 20, 32] - v = v.view(*v.shape[:2], self.n_heads, -1) - - # Calculate attention $\frac{Q K^\top}{\sqrt{d_{key}}}$ - attn = torch.einsum("bihd,bjhd->bhij", q, k) * self.scale - - # Compute softmax - # $$\underset{seq}{softmax}\Bigg(\frac{Q K^\top}{\sqrt{d_{key}}}\Bigg)$$ - if self.is_inplace: - half = attn.shape[0] // 2 - attn[half:] = attn[half:].softmax(dim=-1) - attn[:half] = attn[:half].softmax(dim=-1) - else: - attn = attn.softmax(dim=-1) - - # Compute attention output - # $$\underset{seq}{softmax}\Bigg(\frac{Q K^\top}{\sqrt{d_{key}}}\Bigg)V$$ - # attn: [bs, 20, 64, 1] - # v: [bs, 1, 20, 32] - out = torch.einsum("bhij,bjhd->bihd", attn, v) - # Reshape to `[batch_size, height * width, n_heads * d_head]` - out = out.reshape(*out.shape[:2], -1) - # Map to `[batch_size, height * width, d_model]` with a linear layer - return self.to_out(out) - - -# class CrossAttention(nn.Module): -# def __init__(self, query_dim, context_dim=None, heads=8, dim_head=64, dropout=0.): -# super().__init__() -# inner_dim = dim_head * heads -# context_dim = default(context_dim, query_dim) - -# self.scale = dim_head ** -0.5 -# self.heads = heads - -# self.to_q = nn.Linear(query_dim, inner_dim, bias=False) -# self.to_k = nn.Linear(context_dim, inner_dim, bias=False) -# self.to_v = nn.Linear(context_dim, inner_dim, bias=False) - -# self.to_out = nn.Sequential( -# nn.Linear(inner_dim, query_dim), -# nn.Dropout(dropout) -# ) - -# def forward(self, x, context=None, mask=None): -# h = self.heads - -# q = self.to_q(x) -# context = default(context, x) -# k = self.to_k(context) -# v = self.to_v(context) - -# q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> (b h) n d', h=h), (q, k, v)) - -# sim = einsum('b i d, b j d -> b i j', q, k) * self.scale - -# if exists(mask): -# mask = rearrange(mask, 'b ... 
-> b (...)') -# max_neg_value = -torch.finfo(sim.dtype).max -# mask = repeat(mask, 'b j -> (b h) () j', h=h) -# sim.masked_fill_(~mask, max_neg_value) - -# # attention, what we cannot get enough of -# attn = sim.softmax(dim=-1) - -# out = einsum('b i j, b j d -> b i d', attn, v) -# out = rearrange(out, '(b h) n d -> b n (h d)', h=h) -# return self.to_out(out) - - -class BasicTransformerBlock(nn.Module): - def __init__( - self, - dim, - n_heads, - d_head, - dropout=0.0, - context_dim=None, - gated_ff=True, - checkpoint=True, - ): - super().__init__() - self.attn1 = CrossAttention( - query_dim=dim, heads=n_heads, dim_head=d_head, dropout=dropout - ) # is a self-attention - self.ff = FeedForward(dim, dropout=dropout, glu=gated_ff) - self.attn2 = CrossAttention( - query_dim=dim, - context_dim=context_dim, - heads=n_heads, - dim_head=d_head, - dropout=dropout, - ) # is self-attn if context is none - self.norm1 = nn.LayerNorm(dim) - self.norm2 = nn.LayerNorm(dim) - self.norm3 = nn.LayerNorm(dim) - self.checkpoint = checkpoint - - def forward(self, x, context=None): - if context is None: - return checkpoint(self._forward, (x,), self.parameters(), self.checkpoint) - else: - return checkpoint( - self._forward, (x, context), self.parameters(), self.checkpoint - ) - - def _forward(self, x, context=None): - x = self.attn1(self.norm1(x)) + x - x = self.attn2(self.norm2(x), context=context) + x - x = self.ff(self.norm3(x)) + x - return x - - -class SpatialTransformer(nn.Module): - """ - Transformer block for image-like data. - First, project the input (aka embedding) - and reshape to b, t, d. - Then apply standard transformer action. - Finally, reshape to image - """ - - def __init__( - self, - in_channels, - n_heads, - d_head, - depth=1, - dropout=0.0, - context_dim=None, - no_context=False, - ): - super().__init__() - - if no_context: - context_dim = None - - self.in_channels = in_channels - inner_dim = n_heads * d_head - self.norm = Normalize(in_channels) - - self.proj_in = nn.Conv2d( - in_channels, inner_dim, kernel_size=1, stride=1, padding=0 - ) - - self.transformer_blocks = nn.ModuleList( - [ - BasicTransformerBlock( - inner_dim, n_heads, d_head, dropout=dropout, context_dim=context_dim - ) - for d in range(depth) - ] - ) - - self.proj_out = zero_module( - nn.Conv2d(inner_dim, in_channels, kernel_size=1, stride=1, padding=0) - ) - - def forward(self, x, context=None): - # note: if no context is given, cross-attention defaults to self-attention - b, c, h, w = x.shape - x_in = x - x = self.norm(x) - x = self.proj_in(x) - x = rearrange(x, "b c h w -> b (h w) c") - for block in self.transformer_blocks: - x = block(x, context=context) - x = rearrange(x, "b (h w) c -> b c h w", h=h, w=w) - x = self.proj_out(x) - return x + x_in diff --git a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/GenerSpeech/model/wavenet.py b/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/GenerSpeech/model/wavenet.py deleted file mode 100644 index 481c02d9cc776eba40e578e1b2549bf352357be8..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/GenerSpeech/model/wavenet.py +++ /dev/null @@ -1,87 +0,0 @@ -from modules.commons.common_layers import * - - -# @torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -class WN(torch.nn.Module): - def __init__(self, 
hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, - p_dropout=0, share_cond_layers=False): - super(WN, self).__init__() - assert (kernel_size % 2 == 1) - assert (hidden_channels % 2 == 0) - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - self.share_cond_layers = share_cond_layers - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0 and not share_cond_layers: - cond_layer = torch.nn.Conv1d(gin_channels, 2 * hidden_channels * n_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight') - - for i in range(n_layers): - dilation = dilation_rate ** i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d(hidden_channels, 2 * hidden_channels, kernel_size, - dilation=dilation, padding=padding) - in_layer = torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask=None, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None and not self.share_cond_layers: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - x_in = self.drop(x_in) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:, cond_offset:cond_offset + 2 * self.hidden_channels, :] - else: - g_l = torch.zeros_like(x_in) - - acts = fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - x = (x + res_skip_acts[:, :self.hidden_channels, :]) * x_mask - output = output + res_skip_acts[:, self.hidden_channels:, :] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - def remove_weight_norm(m): - try: - nn.utils.remove_weight_norm(m) - except ValueError: # this module didn't have weight norm - return - - self.apply(remove_weight_norm) diff --git a/spaces/AIML-TUDA/unsafe-vs-safe-stable-diffusion/share_btn.py b/spaces/AIML-TUDA/unsafe-vs-safe-stable-diffusion/share_btn.py deleted file mode 100644 index f9385340e3e30786c193cebedf8fb1de0c5a3286..0000000000000000000000000000000000000000 --- a/spaces/AIML-TUDA/unsafe-vs-safe-stable-diffusion/share_btn.py +++ /dev/null @@ -1,68 +0,0 @@ -community_icon_html = """""" - -loading_icon_html = """""" - -share_js = """async () => { - async function uploadFile(file){ - const UPLOAD_URL = 'https://huggingface.co/uploads'; - const response = await fetch(UPLOAD_URL, { - method: 'POST', - headers: { - 'Content-Type': file.type, - 'X-Requested-With': 'XMLHttpRequest', - }, - body: file, /// <- File inherits from Blob - }); - const url = await response.text(); - return url; - } - - const gradioEl = document.querySelector('body > gradio-app'); - const imgEls = gradioEl.querySelectorAll('#gallery img'); - const promptTxt = gradioEl.querySelector('#prompt-text-input input').value; - const shareBtnEl = 
gradioEl.querySelector('#share-btn'); - const shareIconEl = gradioEl.querySelector('#share-btn-share-icon'); - const loadingIconEl = gradioEl.querySelector('#share-btn-loading-icon'); - - if(!imgEls.length){ - return; - }; - - shareBtnEl.style.pointerEvents = 'none'; - shareIconEl.style.display = 'none'; - loadingIconEl.style.removeProperty('display'); - - const files = await Promise.all( - [...imgEls].map(async (imgEl) => { - const res = await fetch(imgEl.src); - const blob = await res.blob(); - const imgId = Date.now() % 200; - const fileName = `diffuse-the-rest-${{imgId}}.jpg`; - return new File([blob], fileName, { type: 'image/jpeg' }); - }) - ); - - const urls = await Promise.all(files.map((f) => uploadFile(f))); - const htmlImgs = urls.map(url => ``); - const descriptionMd = `
-${htmlImgs.join(`\n`)} -
`; - - const params = new URLSearchParams({ - title: promptTxt, - description: descriptionMd, - }); - - const paramsStr = params.toString(); - window.open(`https://huggingface.co/spaces/stabilityai/stable-diffusion/discussions/new?${paramsStr}`, '_blank'); - - shareBtnEl.style.removeProperty('pointer-events'); - shareIconEl.style.removeProperty('display'); - loadingIconEl.style.display = 'none'; -}""" \ No newline at end of file diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnet50_32xb64-warmup-coslr_in1k.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnet50_32xb64-warmup-coslr_in1k.py deleted file mode 100644 index c26245ef53a736c22c0ef7d4e9d8b7876509fe2e..0000000000000000000000000000000000000000 --- a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnet50_32xb64-warmup-coslr_in1k.py +++ /dev/null @@ -1,5 +0,0 @@ -_base_ = [ - '../_base_/models/resnet50.py', '../_base_/datasets/imagenet_bs64.py', - '../_base_/schedules/imagenet_bs2048_coslr.py', - '../_base_/default_runtime.py' -] diff --git a/spaces/Adithedev/Keyword-Extractor/app.py b/spaces/Adithedev/Keyword-Extractor/app.py deleted file mode 100644 index 8e237ce7c0e59f366d8c392d9a559b6551c29a5a..0000000000000000000000000000000000000000 --- a/spaces/Adithedev/Keyword-Extractor/app.py +++ /dev/null @@ -1,31 +0,0 @@ -from model import KeywordExtraction -import streamlit as st - - -Model = KeywordExtraction() -st.title("Keyword Extractor") - -with st.form(key = "clf_form"): - text_input_area = st.text_area("Type your text here: ") - submit_btn = st.form_submit_button(label = "Submit") - countOfWords = len(text_input_area.split()) - - if submit_btn: - if text_input_area == "": - st.error("Enter something in order to Extract the keywords of it.",icon="⛔️") - else: - if countOfWords<=50: - st.warning("Pls enter more than 100 words in order to extract keywords of it.",icon="⚠️") - else: - st.subheader("Output: ") - col1,col2 = st.columns(2) - f1 = Model.fit(text=text_input_area) - f2 = [f1] - output = Model.train(f2,top_n= 5) - with col1: - st.info("Text: ") - st.write(text_input_area) - - with col2: - st.info("Keywords Generated: ") - st.write(output) diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/puff/Puff.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/puff/Puff.js deleted file mode 100644 index 268eb96977d72caab29a350306bdde1d5273ecc1..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/puff/Puff.js +++ /dev/null @@ -1,31 +0,0 @@ -import Base from '../base/Base.js'; -import { Circle } from '../utils/Geoms.js'; -import Yoyo from '../utils/Yoyo.js'; - - -class Puff extends Base { - constructor(scene, config) { - super(scene, config); - this.type = 'rexSpinnerPuff'; - } - - buildShapes() { - this.addShape(new Circle()); - } - - updateShapes() { - var centerX = this.centerX; - var centerY = this.centerY; - var radius = this.radius; - var puffRadius = radius * this.value; - var lineWidth = Math.ceil(radius / 25); - var alpha = Yoyo(this.value); - - this.getShapes()[0] - .lineStyle(lineWidth, this.color, alpha) - .setRadius(puffRadius) - .setCenterPosition(centerX, centerY) - } -} - -export default Puff; \ No newline at end of file diff --git a/spaces/Alycer/VITS-Umamusume-voice-synthesizer/text/__init__.py 
b/spaces/Alycer/VITS-Umamusume-voice-synthesizer/text/__init__.py deleted file mode 100644 index 4e69c354dd24e3243980236eca962cd5945a92fc..0000000000000000000000000000000000000000 --- a/spaces/Alycer/VITS-Umamusume-voice-synthesizer/text/__init__.py +++ /dev/null @@ -1,32 +0,0 @@ -""" from https://github.com/keithito/tacotron """ -from text import cleaners - - -def text_to_sequence(text, symbols, cleaner_names): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. - Args: - text: string to convert to a sequence - cleaner_names: names of the cleaner functions to run the text through - Returns: - List of integers corresponding to the symbols in the text - ''' - _symbol_to_id = {s: i for i, s in enumerate(symbols)} - - sequence = [] - - clean_text = _clean_text(text, cleaner_names) - for symbol in clean_text: - if symbol not in _symbol_to_id.keys(): - continue - symbol_id = _symbol_to_id[symbol] - sequence += [symbol_id] - return sequence - - -def _clean_text(text, cleaner_names): - for name in cleaner_names: - cleaner = getattr(cleaners, name) - if not cleaner: - raise Exception('Unknown cleaner: %s' % name) - text = cleaner(text) - return text diff --git a/spaces/AnTo2209/3D_Zeroshot_Neural_Style_Transfer/src/dataset/__init__.py b/spaces/AnTo2209/3D_Zeroshot_Neural_Style_Transfer/src/dataset/__init__.py deleted file mode 100644 index 2d9e32ce28659bf8057a502e127414f730c74867..0000000000000000000000000000000000000000 --- a/spaces/AnTo2209/3D_Zeroshot_Neural_Style_Transfer/src/dataset/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -from src.dataset.blender_dataset import BlenderDataset -from src.dataset.llff_dataset import LLFFDataset -from src.dataset.style_dataset import StyleDataset -from src.utils.registry import Registry - -DATASET_REGISTRY = Registry("DATASET") - -DATASET_REGISTRY.register(BlenderDataset) -DATASET_REGISTRY.register(LLFFDataset) -DATASET_REGISTRY.register(StyleDataset) \ No newline at end of file diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/scripts/convert_kandinsky_to_diffusers.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/scripts/convert_kandinsky_to_diffusers.py deleted file mode 100644 index 1b5722f5d5f3ef9af36596ea1301583ee789c364..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/scripts/convert_kandinsky_to_diffusers.py +++ /dev/null @@ -1,1411 +0,0 @@ -import argparse -import os -import tempfile - -import torch -from accelerate import load_checkpoint_and_dispatch - -from diffusers import UNet2DConditionModel -from diffusers.models.prior_transformer import PriorTransformer -from diffusers.models.vq_model import VQModel - - -""" -Example - From the diffusers root directory: - -Download weights: -```sh -$ wget https://huggingface.co/ai-forever/Kandinsky_2.1/blob/main/prior_fp16.ckpt -``` - -Convert the model: -```sh -python scripts/convert_kandinsky_to_diffusers.py \ - --prior_checkpoint_path /home/yiyi_huggingface_co/Kandinsky-2/checkpoints_Kandinsky_2.1/prior_fp16.ckpt \ - --clip_stat_path /home/yiyi_huggingface_co/Kandinsky-2/checkpoints_Kandinsky_2.1/ViT-L-14_stats.th \ - --text2img_checkpoint_path /home/yiyi_huggingface_co/Kandinsky-2/checkpoints_Kandinsky_2.1/decoder_fp16.ckpt \ - --inpaint_text2img_checkpoint_path /home/yiyi_huggingface_co/Kandinsky-2/checkpoints_Kandinsky_2.1/inpainting_fp16.ckpt \ - --movq_checkpoint_path /home/yiyi_huggingface_co/Kandinsky-2/checkpoints_Kandinsky_2.1/movq_final.ckpt \ - --dump_path 
/home/yiyi_huggingface_co/dump \ - --debug decoder -``` -""" - - -# prior - -PRIOR_ORIGINAL_PREFIX = "model" - -# Uses default arguments -PRIOR_CONFIG = {} - - -def prior_model_from_original_config(): - model = PriorTransformer(**PRIOR_CONFIG) - - return model - - -def prior_original_checkpoint_to_diffusers_checkpoint(model, checkpoint, clip_stats_checkpoint): - diffusers_checkpoint = {} - - # .time_embed.0 -> .time_embedding.linear_1 - diffusers_checkpoint.update( - { - "time_embedding.linear_1.weight": checkpoint[f"{PRIOR_ORIGINAL_PREFIX}.time_embed.0.weight"], - "time_embedding.linear_1.bias": checkpoint[f"{PRIOR_ORIGINAL_PREFIX}.time_embed.0.bias"], - } - ) - - # .clip_img_proj -> .proj_in - diffusers_checkpoint.update( - { - "proj_in.weight": checkpoint[f"{PRIOR_ORIGINAL_PREFIX}.clip_img_proj.weight"], - "proj_in.bias": checkpoint[f"{PRIOR_ORIGINAL_PREFIX}.clip_img_proj.bias"], - } - ) - - # .text_emb_proj -> .embedding_proj - diffusers_checkpoint.update( - { - "embedding_proj.weight": checkpoint[f"{PRIOR_ORIGINAL_PREFIX}.text_emb_proj.weight"], - "embedding_proj.bias": checkpoint[f"{PRIOR_ORIGINAL_PREFIX}.text_emb_proj.bias"], - } - ) - - # .text_enc_proj -> .encoder_hidden_states_proj - diffusers_checkpoint.update( - { - "encoder_hidden_states_proj.weight": checkpoint[f"{PRIOR_ORIGINAL_PREFIX}.text_enc_proj.weight"], - "encoder_hidden_states_proj.bias": checkpoint[f"{PRIOR_ORIGINAL_PREFIX}.text_enc_proj.bias"], - } - ) - - # .positional_embedding -> .positional_embedding - diffusers_checkpoint.update({"positional_embedding": checkpoint[f"{PRIOR_ORIGINAL_PREFIX}.positional_embedding"]}) - - # .prd_emb -> .prd_embedding - diffusers_checkpoint.update({"prd_embedding": checkpoint[f"{PRIOR_ORIGINAL_PREFIX}.prd_emb"]}) - - # .time_embed.2 -> .time_embedding.linear_2 - diffusers_checkpoint.update( - { - "time_embedding.linear_2.weight": checkpoint[f"{PRIOR_ORIGINAL_PREFIX}.time_embed.2.weight"], - "time_embedding.linear_2.bias": checkpoint[f"{PRIOR_ORIGINAL_PREFIX}.time_embed.2.bias"], - } - ) - - # .resblocks. -> .transformer_blocks. 
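-    # (illustrative, based on the helper functions below: a fused original key such as
-    #  "model.transformer.resblocks.0.attn.c_qkv.weight" is split per attention head into
-    #  "transformer_blocks.0.attn1.to_q.weight" / ".to_k.weight" / ".to_v.weight", and
-    #  "...mlp.c_fc" / "...mlp.c_proj" become "...ff.net.0.proj" / "...ff.net.2")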
- for idx in range(len(model.transformer_blocks)): - diffusers_transformer_prefix = f"transformer_blocks.{idx}" - original_transformer_prefix = f"{PRIOR_ORIGINAL_PREFIX}.transformer.resblocks.{idx}" - - # .attn -> .attn1 - diffusers_attention_prefix = f"{diffusers_transformer_prefix}.attn1" - original_attention_prefix = f"{original_transformer_prefix}.attn" - diffusers_checkpoint.update( - prior_attention_to_diffusers( - checkpoint, - diffusers_attention_prefix=diffusers_attention_prefix, - original_attention_prefix=original_attention_prefix, - attention_head_dim=model.attention_head_dim, - ) - ) - - # .mlp -> .ff - diffusers_ff_prefix = f"{diffusers_transformer_prefix}.ff" - original_ff_prefix = f"{original_transformer_prefix}.mlp" - diffusers_checkpoint.update( - prior_ff_to_diffusers( - checkpoint, diffusers_ff_prefix=diffusers_ff_prefix, original_ff_prefix=original_ff_prefix - ) - ) - - # .ln_1 -> .norm1 - diffusers_checkpoint.update( - { - f"{diffusers_transformer_prefix}.norm1.weight": checkpoint[ - f"{original_transformer_prefix}.ln_1.weight" - ], - f"{diffusers_transformer_prefix}.norm1.bias": checkpoint[f"{original_transformer_prefix}.ln_1.bias"], - } - ) - - # .ln_2 -> .norm3 - diffusers_checkpoint.update( - { - f"{diffusers_transformer_prefix}.norm3.weight": checkpoint[ - f"{original_transformer_prefix}.ln_2.weight" - ], - f"{diffusers_transformer_prefix}.norm3.bias": checkpoint[f"{original_transformer_prefix}.ln_2.bias"], - } - ) - - # .final_ln -> .norm_out - diffusers_checkpoint.update( - { - "norm_out.weight": checkpoint[f"{PRIOR_ORIGINAL_PREFIX}.final_ln.weight"], - "norm_out.bias": checkpoint[f"{PRIOR_ORIGINAL_PREFIX}.final_ln.bias"], - } - ) - - # .out_proj -> .proj_to_clip_embeddings - diffusers_checkpoint.update( - { - "proj_to_clip_embeddings.weight": checkpoint[f"{PRIOR_ORIGINAL_PREFIX}.out_proj.weight"], - "proj_to_clip_embeddings.bias": checkpoint[f"{PRIOR_ORIGINAL_PREFIX}.out_proj.bias"], - } - ) - - # clip stats - clip_mean, clip_std = clip_stats_checkpoint - clip_mean = clip_mean[None, :] - clip_std = clip_std[None, :] - - diffusers_checkpoint.update({"clip_mean": clip_mean, "clip_std": clip_std}) - - return diffusers_checkpoint - - -def prior_attention_to_diffusers( - checkpoint, *, diffusers_attention_prefix, original_attention_prefix, attention_head_dim -): - diffusers_checkpoint = {} - - # .c_qkv -> .{to_q, to_k, to_v} - [q_weight, k_weight, v_weight], [q_bias, k_bias, v_bias] = split_attentions( - weight=checkpoint[f"{original_attention_prefix}.c_qkv.weight"], - bias=checkpoint[f"{original_attention_prefix}.c_qkv.bias"], - split=3, - chunk_size=attention_head_dim, - ) - - diffusers_checkpoint.update( - { - f"{diffusers_attention_prefix}.to_q.weight": q_weight, - f"{diffusers_attention_prefix}.to_q.bias": q_bias, - f"{diffusers_attention_prefix}.to_k.weight": k_weight, - f"{diffusers_attention_prefix}.to_k.bias": k_bias, - f"{diffusers_attention_prefix}.to_v.weight": v_weight, - f"{diffusers_attention_prefix}.to_v.bias": v_bias, - } - ) - - # .c_proj -> .to_out.0 - diffusers_checkpoint.update( - { - f"{diffusers_attention_prefix}.to_out.0.weight": checkpoint[f"{original_attention_prefix}.c_proj.weight"], - f"{diffusers_attention_prefix}.to_out.0.bias": checkpoint[f"{original_attention_prefix}.c_proj.bias"], - } - ) - - return diffusers_checkpoint - - -def prior_ff_to_diffusers(checkpoint, *, diffusers_ff_prefix, original_ff_prefix): - diffusers_checkpoint = { - # .c_fc -> .net.0.proj - f"{diffusers_ff_prefix}.net.{0}.proj.weight": 
checkpoint[f"{original_ff_prefix}.c_fc.weight"], - f"{diffusers_ff_prefix}.net.{0}.proj.bias": checkpoint[f"{original_ff_prefix}.c_fc.bias"], - # .c_proj -> .net.2 - f"{diffusers_ff_prefix}.net.{2}.weight": checkpoint[f"{original_ff_prefix}.c_proj.weight"], - f"{diffusers_ff_prefix}.net.{2}.bias": checkpoint[f"{original_ff_prefix}.c_proj.bias"], - } - - return diffusers_checkpoint - - -# done prior - -# unet - -# We are hardcoding the model configuration for now. If we need to generalize to more model configurations, we can -# update then. - -UNET_CONFIG = { - "act_fn": "silu", - "addition_embed_type": "text_image", - "addition_embed_type_num_heads": 64, - "attention_head_dim": 64, - "block_out_channels": [384, 768, 1152, 1536], - "center_input_sample": False, - "class_embed_type": None, - "class_embeddings_concat": False, - "conv_in_kernel": 3, - "conv_out_kernel": 3, - "cross_attention_dim": 768, - "cross_attention_norm": None, - "down_block_types": [ - "ResnetDownsampleBlock2D", - "SimpleCrossAttnDownBlock2D", - "SimpleCrossAttnDownBlock2D", - "SimpleCrossAttnDownBlock2D", - ], - "downsample_padding": 1, - "dual_cross_attention": False, - "encoder_hid_dim": 1024, - "encoder_hid_dim_type": "text_image_proj", - "flip_sin_to_cos": True, - "freq_shift": 0, - "in_channels": 4, - "layers_per_block": 3, - "mid_block_only_cross_attention": None, - "mid_block_scale_factor": 1, - "mid_block_type": "UNetMidBlock2DSimpleCrossAttn", - "norm_eps": 1e-05, - "norm_num_groups": 32, - "num_class_embeds": None, - "only_cross_attention": False, - "out_channels": 8, - "projection_class_embeddings_input_dim": None, - "resnet_out_scale_factor": 1.0, - "resnet_skip_time_act": False, - "resnet_time_scale_shift": "scale_shift", - "sample_size": 64, - "time_cond_proj_dim": None, - "time_embedding_act_fn": None, - "time_embedding_dim": None, - "time_embedding_type": "positional", - "timestep_post_act": None, - "up_block_types": [ - "SimpleCrossAttnUpBlock2D", - "SimpleCrossAttnUpBlock2D", - "SimpleCrossAttnUpBlock2D", - "ResnetUpsampleBlock2D", - ], - "upcast_attention": False, - "use_linear_projection": False, -} - - -def unet_model_from_original_config(): - model = UNet2DConditionModel(**UNET_CONFIG) - - return model - - -def unet_original_checkpoint_to_diffusers_checkpoint(model, checkpoint): - diffusers_checkpoint = {} - - num_head_channels = UNET_CONFIG["attention_head_dim"] - - diffusers_checkpoint.update(unet_time_embeddings(checkpoint)) - diffusers_checkpoint.update(unet_conv_in(checkpoint)) - diffusers_checkpoint.update(unet_add_embedding(checkpoint)) - diffusers_checkpoint.update(unet_encoder_hid_proj(checkpoint)) - - # .input_blocks -> .down_blocks - - original_down_block_idx = 1 - - for diffusers_down_block_idx in range(len(model.down_blocks)): - checkpoint_update, num_original_down_blocks = unet_downblock_to_diffusers_checkpoint( - model, - checkpoint, - diffusers_down_block_idx=diffusers_down_block_idx, - original_down_block_idx=original_down_block_idx, - num_head_channels=num_head_channels, - ) - - original_down_block_idx += num_original_down_blocks - - diffusers_checkpoint.update(checkpoint_update) - - # done .input_blocks -> .down_blocks - - diffusers_checkpoint.update( - unet_midblock_to_diffusers_checkpoint( - model, - checkpoint, - num_head_channels=num_head_channels, - ) - ) - - # .output_blocks -> .up_blocks - - original_up_block_idx = 0 - - for diffusers_up_block_idx in range(len(model.up_blocks)): - checkpoint_update, num_original_up_blocks = unet_upblock_to_diffusers_checkpoint( - model, - 
checkpoint, - diffusers_up_block_idx=diffusers_up_block_idx, - original_up_block_idx=original_up_block_idx, - num_head_channels=num_head_channels, - ) - - original_up_block_idx += num_original_up_blocks - - diffusers_checkpoint.update(checkpoint_update) - - # done .output_blocks -> .up_blocks - - diffusers_checkpoint.update(unet_conv_norm_out(checkpoint)) - diffusers_checkpoint.update(unet_conv_out(checkpoint)) - - return diffusers_checkpoint - - -# done unet - -# inpaint unet - -# We are hardcoding the model configuration for now. If we need to generalize to more model configurations, we can -# update then. - -INPAINT_UNET_CONFIG = { - "act_fn": "silu", - "addition_embed_type": "text_image", - "addition_embed_type_num_heads": 64, - "attention_head_dim": 64, - "block_out_channels": [384, 768, 1152, 1536], - "center_input_sample": False, - "class_embed_type": None, - "class_embeddings_concat": None, - "conv_in_kernel": 3, - "conv_out_kernel": 3, - "cross_attention_dim": 768, - "cross_attention_norm": None, - "down_block_types": [ - "ResnetDownsampleBlock2D", - "SimpleCrossAttnDownBlock2D", - "SimpleCrossAttnDownBlock2D", - "SimpleCrossAttnDownBlock2D", - ], - "downsample_padding": 1, - "dual_cross_attention": False, - "encoder_hid_dim": 1024, - "encoder_hid_dim_type": "text_image_proj", - "flip_sin_to_cos": True, - "freq_shift": 0, - "in_channels": 9, - "layers_per_block": 3, - "mid_block_only_cross_attention": None, - "mid_block_scale_factor": 1, - "mid_block_type": "UNetMidBlock2DSimpleCrossAttn", - "norm_eps": 1e-05, - "norm_num_groups": 32, - "num_class_embeds": None, - "only_cross_attention": False, - "out_channels": 8, - "projection_class_embeddings_input_dim": None, - "resnet_out_scale_factor": 1.0, - "resnet_skip_time_act": False, - "resnet_time_scale_shift": "scale_shift", - "sample_size": 64, - "time_cond_proj_dim": None, - "time_embedding_act_fn": None, - "time_embedding_dim": None, - "time_embedding_type": "positional", - "timestep_post_act": None, - "up_block_types": [ - "SimpleCrossAttnUpBlock2D", - "SimpleCrossAttnUpBlock2D", - "SimpleCrossAttnUpBlock2D", - "ResnetUpsampleBlock2D", - ], - "upcast_attention": False, - "use_linear_projection": False, -} - - -def inpaint_unet_model_from_original_config(): - model = UNet2DConditionModel(**INPAINT_UNET_CONFIG) - - return model - - -def inpaint_unet_original_checkpoint_to_diffusers_checkpoint(model, checkpoint): - diffusers_checkpoint = {} - - num_head_channels = INPAINT_UNET_CONFIG["attention_head_dim"] - - diffusers_checkpoint.update(unet_time_embeddings(checkpoint)) - diffusers_checkpoint.update(unet_conv_in(checkpoint)) - diffusers_checkpoint.update(unet_add_embedding(checkpoint)) - diffusers_checkpoint.update(unet_encoder_hid_proj(checkpoint)) - - # .input_blocks -> .down_blocks - - original_down_block_idx = 1 - - for diffusers_down_block_idx in range(len(model.down_blocks)): - checkpoint_update, num_original_down_blocks = unet_downblock_to_diffusers_checkpoint( - model, - checkpoint, - diffusers_down_block_idx=diffusers_down_block_idx, - original_down_block_idx=original_down_block_idx, - num_head_channels=num_head_channels, - ) - - original_down_block_idx += num_original_down_blocks - - diffusers_checkpoint.update(checkpoint_update) - - # done .input_blocks -> .down_blocks - - diffusers_checkpoint.update( - unet_midblock_to_diffusers_checkpoint( - model, - checkpoint, - num_head_channels=num_head_channels, - ) - ) - - # .output_blocks -> .up_blocks - - original_up_block_idx = 0 - - for diffusers_up_block_idx in 
range(len(model.up_blocks)): - checkpoint_update, num_original_up_blocks = unet_upblock_to_diffusers_checkpoint( - model, - checkpoint, - diffusers_up_block_idx=diffusers_up_block_idx, - original_up_block_idx=original_up_block_idx, - num_head_channels=num_head_channels, - ) - - original_up_block_idx += num_original_up_blocks - - diffusers_checkpoint.update(checkpoint_update) - - # done .output_blocks -> .up_blocks - - diffusers_checkpoint.update(unet_conv_norm_out(checkpoint)) - diffusers_checkpoint.update(unet_conv_out(checkpoint)) - - return diffusers_checkpoint - - -# done inpaint unet - - -# unet utils - - -# .time_embed -> .time_embedding -def unet_time_embeddings(checkpoint): - diffusers_checkpoint = {} - - diffusers_checkpoint.update( - { - "time_embedding.linear_1.weight": checkpoint["time_embed.0.weight"], - "time_embedding.linear_1.bias": checkpoint["time_embed.0.bias"], - "time_embedding.linear_2.weight": checkpoint["time_embed.2.weight"], - "time_embedding.linear_2.bias": checkpoint["time_embed.2.bias"], - } - ) - - return diffusers_checkpoint - - -# .input_blocks.0 -> .conv_in -def unet_conv_in(checkpoint): - diffusers_checkpoint = {} - - diffusers_checkpoint.update( - { - "conv_in.weight": checkpoint["input_blocks.0.0.weight"], - "conv_in.bias": checkpoint["input_blocks.0.0.bias"], - } - ) - - return diffusers_checkpoint - - -def unet_add_embedding(checkpoint): - diffusers_checkpoint = {} - - diffusers_checkpoint.update( - { - "add_embedding.text_norm.weight": checkpoint["ln_model_n.weight"], - "add_embedding.text_norm.bias": checkpoint["ln_model_n.bias"], - "add_embedding.text_proj.weight": checkpoint["proj_n.weight"], - "add_embedding.text_proj.bias": checkpoint["proj_n.bias"], - "add_embedding.image_proj.weight": checkpoint["img_layer.weight"], - "add_embedding.image_proj.bias": checkpoint["img_layer.bias"], - } - ) - - return diffusers_checkpoint - - -def unet_encoder_hid_proj(checkpoint): - diffusers_checkpoint = {} - - diffusers_checkpoint.update( - { - "encoder_hid_proj.image_embeds.weight": checkpoint["clip_to_seq.weight"], - "encoder_hid_proj.image_embeds.bias": checkpoint["clip_to_seq.bias"], - "encoder_hid_proj.text_proj.weight": checkpoint["to_model_dim_n.weight"], - "encoder_hid_proj.text_proj.bias": checkpoint["to_model_dim_n.bias"], - } - ) - - return diffusers_checkpoint - - -# .out.0 -> .conv_norm_out -def unet_conv_norm_out(checkpoint): - diffusers_checkpoint = {} - - diffusers_checkpoint.update( - { - "conv_norm_out.weight": checkpoint["out.0.weight"], - "conv_norm_out.bias": checkpoint["out.0.bias"], - } - ) - - return diffusers_checkpoint - - -# .out.2 -> .conv_out -def unet_conv_out(checkpoint): - diffusers_checkpoint = {} - - diffusers_checkpoint.update( - { - "conv_out.weight": checkpoint["out.2.weight"], - "conv_out.bias": checkpoint["out.2.bias"], - } - ) - - return diffusers_checkpoint - - -# .input_blocks -> .down_blocks -def unet_downblock_to_diffusers_checkpoint( - model, checkpoint, *, diffusers_down_block_idx, original_down_block_idx, num_head_channels -): - diffusers_checkpoint = {} - - diffusers_resnet_prefix = f"down_blocks.{diffusers_down_block_idx}.resnets" - original_down_block_prefix = "input_blocks" - - down_block = model.down_blocks[diffusers_down_block_idx] - - num_resnets = len(down_block.resnets) - - if down_block.downsamplers is None: - downsampler = False - else: - assert len(down_block.downsamplers) == 1 - downsampler = True - # The downsample block is also a resnet - num_resnets += 1 - - for resnet_idx_inc in range(num_resnets): 
- full_resnet_prefix = f"{original_down_block_prefix}.{original_down_block_idx + resnet_idx_inc}.0" - - if downsampler and resnet_idx_inc == num_resnets - 1: - # this is a downsample block - full_diffusers_resnet_prefix = f"down_blocks.{diffusers_down_block_idx}.downsamplers.0" - else: - # this is a regular resnet block - full_diffusers_resnet_prefix = f"{diffusers_resnet_prefix}.{resnet_idx_inc}" - - diffusers_checkpoint.update( - resnet_to_diffusers_checkpoint( - checkpoint, resnet_prefix=full_resnet_prefix, diffusers_resnet_prefix=full_diffusers_resnet_prefix - ) - ) - - if hasattr(down_block, "attentions"): - num_attentions = len(down_block.attentions) - diffusers_attention_prefix = f"down_blocks.{diffusers_down_block_idx}.attentions" - - for attention_idx_inc in range(num_attentions): - full_attention_prefix = f"{original_down_block_prefix}.{original_down_block_idx + attention_idx_inc}.1" - full_diffusers_attention_prefix = f"{diffusers_attention_prefix}.{attention_idx_inc}" - - diffusers_checkpoint.update( - attention_to_diffusers_checkpoint( - checkpoint, - attention_prefix=full_attention_prefix, - diffusers_attention_prefix=full_diffusers_attention_prefix, - num_head_channels=num_head_channels, - ) - ) - - num_original_down_blocks = num_resnets - - return diffusers_checkpoint, num_original_down_blocks - - -# .middle_block -> .mid_block -def unet_midblock_to_diffusers_checkpoint(model, checkpoint, *, num_head_channels): - diffusers_checkpoint = {} - - # block 0 - - original_block_idx = 0 - - diffusers_checkpoint.update( - resnet_to_diffusers_checkpoint( - checkpoint, - diffusers_resnet_prefix="mid_block.resnets.0", - resnet_prefix=f"middle_block.{original_block_idx}", - ) - ) - - original_block_idx += 1 - - # optional block 1 - - if hasattr(model.mid_block, "attentions") and model.mid_block.attentions[0] is not None: - diffusers_checkpoint.update( - attention_to_diffusers_checkpoint( - checkpoint, - diffusers_attention_prefix="mid_block.attentions.0", - attention_prefix=f"middle_block.{original_block_idx}", - num_head_channels=num_head_channels, - ) - ) - original_block_idx += 1 - - # block 1 or block 2 - - diffusers_checkpoint.update( - resnet_to_diffusers_checkpoint( - checkpoint, - diffusers_resnet_prefix="mid_block.resnets.1", - resnet_prefix=f"middle_block.{original_block_idx}", - ) - ) - - return diffusers_checkpoint - - -# .output_blocks -> .up_blocks -def unet_upblock_to_diffusers_checkpoint( - model, checkpoint, *, diffusers_up_block_idx, original_up_block_idx, num_head_channels -): - diffusers_checkpoint = {} - - diffusers_resnet_prefix = f"up_blocks.{diffusers_up_block_idx}.resnets" - original_up_block_prefix = "output_blocks" - - up_block = model.up_blocks[diffusers_up_block_idx] - - num_resnets = len(up_block.resnets) - - if up_block.upsamplers is None: - upsampler = False - else: - assert len(up_block.upsamplers) == 1 - upsampler = True - # The upsample block is also a resnet - num_resnets += 1 - - has_attentions = hasattr(up_block, "attentions") - - for resnet_idx_inc in range(num_resnets): - if upsampler and resnet_idx_inc == num_resnets - 1: - # this is an upsample block - if has_attentions: - # There is a middle attention block that we skip - original_resnet_block_idx = 2 - else: - original_resnet_block_idx = 1 - - # we add the `minus 1` because the last two resnets are stuck together in the same output block - full_resnet_prefix = ( - f"{original_up_block_prefix}.{original_up_block_idx + resnet_idx_inc - 1}.{original_resnet_block_idx}" - ) - - 
full_diffusers_resnet_prefix = f"up_blocks.{diffusers_up_block_idx}.upsamplers.0" - else: - # this is a regular resnet block - full_resnet_prefix = f"{original_up_block_prefix}.{original_up_block_idx + resnet_idx_inc}.0" - full_diffusers_resnet_prefix = f"{diffusers_resnet_prefix}.{resnet_idx_inc}" - - diffusers_checkpoint.update( - resnet_to_diffusers_checkpoint( - checkpoint, resnet_prefix=full_resnet_prefix, diffusers_resnet_prefix=full_diffusers_resnet_prefix - ) - ) - - if has_attentions: - num_attentions = len(up_block.attentions) - diffusers_attention_prefix = f"up_blocks.{diffusers_up_block_idx}.attentions" - - for attention_idx_inc in range(num_attentions): - full_attention_prefix = f"{original_up_block_prefix}.{original_up_block_idx + attention_idx_inc}.1" - full_diffusers_attention_prefix = f"{diffusers_attention_prefix}.{attention_idx_inc}" - - diffusers_checkpoint.update( - attention_to_diffusers_checkpoint( - checkpoint, - attention_prefix=full_attention_prefix, - diffusers_attention_prefix=full_diffusers_attention_prefix, - num_head_channels=num_head_channels, - ) - ) - - num_original_down_blocks = num_resnets - 1 if upsampler else num_resnets - - return diffusers_checkpoint, num_original_down_blocks - - -def resnet_to_diffusers_checkpoint(checkpoint, *, diffusers_resnet_prefix, resnet_prefix): - diffusers_checkpoint = { - f"{diffusers_resnet_prefix}.norm1.weight": checkpoint[f"{resnet_prefix}.in_layers.0.weight"], - f"{diffusers_resnet_prefix}.norm1.bias": checkpoint[f"{resnet_prefix}.in_layers.0.bias"], - f"{diffusers_resnet_prefix}.conv1.weight": checkpoint[f"{resnet_prefix}.in_layers.2.weight"], - f"{diffusers_resnet_prefix}.conv1.bias": checkpoint[f"{resnet_prefix}.in_layers.2.bias"], - f"{diffusers_resnet_prefix}.time_emb_proj.weight": checkpoint[f"{resnet_prefix}.emb_layers.1.weight"], - f"{diffusers_resnet_prefix}.time_emb_proj.bias": checkpoint[f"{resnet_prefix}.emb_layers.1.bias"], - f"{diffusers_resnet_prefix}.norm2.weight": checkpoint[f"{resnet_prefix}.out_layers.0.weight"], - f"{diffusers_resnet_prefix}.norm2.bias": checkpoint[f"{resnet_prefix}.out_layers.0.bias"], - f"{diffusers_resnet_prefix}.conv2.weight": checkpoint[f"{resnet_prefix}.out_layers.3.weight"], - f"{diffusers_resnet_prefix}.conv2.bias": checkpoint[f"{resnet_prefix}.out_layers.3.bias"], - } - - skip_connection_prefix = f"{resnet_prefix}.skip_connection" - - if f"{skip_connection_prefix}.weight" in checkpoint: - diffusers_checkpoint.update( - { - f"{diffusers_resnet_prefix}.conv_shortcut.weight": checkpoint[f"{skip_connection_prefix}.weight"], - f"{diffusers_resnet_prefix}.conv_shortcut.bias": checkpoint[f"{skip_connection_prefix}.bias"], - } - ) - - return diffusers_checkpoint - - -def attention_to_diffusers_checkpoint(checkpoint, *, diffusers_attention_prefix, attention_prefix, num_head_channels): - diffusers_checkpoint = {} - - # .norm -> .group_norm - diffusers_checkpoint.update( - { - f"{diffusers_attention_prefix}.group_norm.weight": checkpoint[f"{attention_prefix}.norm.weight"], - f"{diffusers_attention_prefix}.group_norm.bias": checkpoint[f"{attention_prefix}.norm.bias"], - } - ) - - # .qkv -> .{query, key, value} - [q_weight, k_weight, v_weight], [q_bias, k_bias, v_bias] = split_attentions( - weight=checkpoint[f"{attention_prefix}.qkv.weight"][:, :, 0], - bias=checkpoint[f"{attention_prefix}.qkv.bias"], - split=3, - chunk_size=num_head_channels, - ) - - diffusers_checkpoint.update( - { - f"{diffusers_attention_prefix}.to_q.weight": q_weight, - f"{diffusers_attention_prefix}.to_q.bias": 
q_bias, - f"{diffusers_attention_prefix}.to_k.weight": k_weight, - f"{diffusers_attention_prefix}.to_k.bias": k_bias, - f"{diffusers_attention_prefix}.to_v.weight": v_weight, - f"{diffusers_attention_prefix}.to_v.bias": v_bias, - } - ) - - # .encoder_kv -> .{context_key, context_value} - [encoder_k_weight, encoder_v_weight], [encoder_k_bias, encoder_v_bias] = split_attentions( - weight=checkpoint[f"{attention_prefix}.encoder_kv.weight"][:, :, 0], - bias=checkpoint[f"{attention_prefix}.encoder_kv.bias"], - split=2, - chunk_size=num_head_channels, - ) - - diffusers_checkpoint.update( - { - f"{diffusers_attention_prefix}.add_k_proj.weight": encoder_k_weight, - f"{diffusers_attention_prefix}.add_k_proj.bias": encoder_k_bias, - f"{diffusers_attention_prefix}.add_v_proj.weight": encoder_v_weight, - f"{diffusers_attention_prefix}.add_v_proj.bias": encoder_v_bias, - } - ) - - # .proj_out (1d conv) -> .proj_attn (linear) - diffusers_checkpoint.update( - { - f"{diffusers_attention_prefix}.to_out.0.weight": checkpoint[f"{attention_prefix}.proj_out.weight"][ - :, :, 0 - ], - f"{diffusers_attention_prefix}.to_out.0.bias": checkpoint[f"{attention_prefix}.proj_out.bias"], - } - ) - - return diffusers_checkpoint - - -# TODO maybe document and/or can do more efficiently (build indices in for loop and extract once for each split?) -def split_attentions(*, weight, bias, split, chunk_size): - weights = [None] * split - biases = [None] * split - - weights_biases_idx = 0 - - for starting_row_index in range(0, weight.shape[0], chunk_size): - row_indices = torch.arange(starting_row_index, starting_row_index + chunk_size) - - weight_rows = weight[row_indices, :] - bias_rows = bias[row_indices] - - if weights[weights_biases_idx] is None: - assert weights[weights_biases_idx] is None - weights[weights_biases_idx] = weight_rows - biases[weights_biases_idx] = bias_rows - else: - assert weights[weights_biases_idx] is not None - weights[weights_biases_idx] = torch.concat([weights[weights_biases_idx], weight_rows]) - biases[weights_biases_idx] = torch.concat([biases[weights_biases_idx], bias_rows]) - - weights_biases_idx = (weights_biases_idx + 1) % split - - return weights, biases - - -# done unet utils - - -def prior(*, args, checkpoint_map_location): - print("loading prior") - - prior_checkpoint = torch.load(args.prior_checkpoint_path, map_location=checkpoint_map_location) - - clip_stats_checkpoint = torch.load(args.clip_stat_path, map_location=checkpoint_map_location) - - prior_model = prior_model_from_original_config() - - prior_diffusers_checkpoint = prior_original_checkpoint_to_diffusers_checkpoint( - prior_model, prior_checkpoint, clip_stats_checkpoint - ) - - del prior_checkpoint - del clip_stats_checkpoint - - load_checkpoint_to_model(prior_diffusers_checkpoint, prior_model, strict=True) - - print("done loading prior") - - return prior_model - - -def text2img(*, args, checkpoint_map_location): - print("loading text2img") - - text2img_checkpoint = torch.load(args.text2img_checkpoint_path, map_location=checkpoint_map_location) - - unet_model = unet_model_from_original_config() - - unet_diffusers_checkpoint = unet_original_checkpoint_to_diffusers_checkpoint(unet_model, text2img_checkpoint) - - del text2img_checkpoint - - load_checkpoint_to_model(unet_diffusers_checkpoint, unet_model, strict=True) - - print("done loading text2img") - - return unet_model - - -def inpaint_text2img(*, args, checkpoint_map_location): - print("loading inpaint text2img") - - inpaint_text2img_checkpoint = torch.load( - 
args.inpaint_text2img_checkpoint_path, map_location=checkpoint_map_location - ) - - inpaint_unet_model = inpaint_unet_model_from_original_config() - - inpaint_unet_diffusers_checkpoint = inpaint_unet_original_checkpoint_to_diffusers_checkpoint( - inpaint_unet_model, inpaint_text2img_checkpoint - ) - - del inpaint_text2img_checkpoint - - load_checkpoint_to_model(inpaint_unet_diffusers_checkpoint, inpaint_unet_model, strict=True) - - print("done loading inpaint text2img") - - return inpaint_unet_model - - -# movq - -MOVQ_CONFIG = { - "in_channels": 3, - "out_channels": 3, - "latent_channels": 4, - "down_block_types": ("DownEncoderBlock2D", "DownEncoderBlock2D", "DownEncoderBlock2D", "AttnDownEncoderBlock2D"), - "up_block_types": ("AttnUpDecoderBlock2D", "UpDecoderBlock2D", "UpDecoderBlock2D", "UpDecoderBlock2D"), - "num_vq_embeddings": 16384, - "block_out_channels": (128, 256, 256, 512), - "vq_embed_dim": 4, - "layers_per_block": 2, - "norm_type": "spatial", -} - - -def movq_model_from_original_config(): - movq = VQModel(**MOVQ_CONFIG) - return movq - - -def movq_encoder_to_diffusers_checkpoint(model, checkpoint): - diffusers_checkpoint = {} - - # conv_in - diffusers_checkpoint.update( - { - "encoder.conv_in.weight": checkpoint["encoder.conv_in.weight"], - "encoder.conv_in.bias": checkpoint["encoder.conv_in.bias"], - } - ) - - # down_blocks - for down_block_idx, down_block in enumerate(model.encoder.down_blocks): - diffusers_down_block_prefix = f"encoder.down_blocks.{down_block_idx}" - down_block_prefix = f"encoder.down.{down_block_idx}" - - # resnets - for resnet_idx, resnet in enumerate(down_block.resnets): - diffusers_resnet_prefix = f"{diffusers_down_block_prefix}.resnets.{resnet_idx}" - resnet_prefix = f"{down_block_prefix}.block.{resnet_idx}" - - diffusers_checkpoint.update( - movq_resnet_to_diffusers_checkpoint( - resnet, checkpoint, diffusers_resnet_prefix=diffusers_resnet_prefix, resnet_prefix=resnet_prefix - ) - ) - - # downsample - - # do not include the downsample when on the last down block - # There is no downsample on the last down block - if down_block_idx != len(model.encoder.down_blocks) - 1: - # There's a single downsample in the original checkpoint but a list of downsamples - # in the diffusers model. 
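-            # (hence only index 0 of that list is filled in here; with the MOVQ_CONFIG above,
-            #  each diffusers down block is assumed to hold exactly one downsampler)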
- diffusers_downsample_prefix = f"{diffusers_down_block_prefix}.downsamplers.0.conv" - downsample_prefix = f"{down_block_prefix}.downsample.conv" - diffusers_checkpoint.update( - { - f"{diffusers_downsample_prefix}.weight": checkpoint[f"{downsample_prefix}.weight"], - f"{diffusers_downsample_prefix}.bias": checkpoint[f"{downsample_prefix}.bias"], - } - ) - - # attentions - - if hasattr(down_block, "attentions"): - for attention_idx, _ in enumerate(down_block.attentions): - diffusers_attention_prefix = f"{diffusers_down_block_prefix}.attentions.{attention_idx}" - attention_prefix = f"{down_block_prefix}.attn.{attention_idx}" - diffusers_checkpoint.update( - movq_attention_to_diffusers_checkpoint( - checkpoint, - diffusers_attention_prefix=diffusers_attention_prefix, - attention_prefix=attention_prefix, - ) - ) - - # mid block - - # mid block attentions - - # There is a single hardcoded attention block in the middle of the VQ-diffusion encoder - diffusers_attention_prefix = "encoder.mid_block.attentions.0" - attention_prefix = "encoder.mid.attn_1" - diffusers_checkpoint.update( - movq_attention_to_diffusers_checkpoint( - checkpoint, diffusers_attention_prefix=diffusers_attention_prefix, attention_prefix=attention_prefix - ) - ) - - # mid block resnets - - for diffusers_resnet_idx, resnet in enumerate(model.encoder.mid_block.resnets): - diffusers_resnet_prefix = f"encoder.mid_block.resnets.{diffusers_resnet_idx}" - - # the hardcoded prefixes to `block_` are 1 and 2 - orig_resnet_idx = diffusers_resnet_idx + 1 - # There are two hardcoded resnets in the middle of the VQ-diffusion encoder - resnet_prefix = f"encoder.mid.block_{orig_resnet_idx}" - - diffusers_checkpoint.update( - movq_resnet_to_diffusers_checkpoint( - resnet, checkpoint, diffusers_resnet_prefix=diffusers_resnet_prefix, resnet_prefix=resnet_prefix - ) - ) - - diffusers_checkpoint.update( - { - # conv_norm_out - "encoder.conv_norm_out.weight": checkpoint["encoder.norm_out.weight"], - "encoder.conv_norm_out.bias": checkpoint["encoder.norm_out.bias"], - # conv_out - "encoder.conv_out.weight": checkpoint["encoder.conv_out.weight"], - "encoder.conv_out.bias": checkpoint["encoder.conv_out.bias"], - } - ) - - return diffusers_checkpoint - - -def movq_decoder_to_diffusers_checkpoint(model, checkpoint): - diffusers_checkpoint = {} - - # conv in - diffusers_checkpoint.update( - { - "decoder.conv_in.weight": checkpoint["decoder.conv_in.weight"], - "decoder.conv_in.bias": checkpoint["decoder.conv_in.bias"], - } - ) - - # up_blocks - - for diffusers_up_block_idx, up_block in enumerate(model.decoder.up_blocks): - # up_blocks are stored in reverse order in the VQ-diffusion checkpoint - orig_up_block_idx = len(model.decoder.up_blocks) - 1 - diffusers_up_block_idx - - diffusers_up_block_prefix = f"decoder.up_blocks.{diffusers_up_block_idx}" - up_block_prefix = f"decoder.up.{orig_up_block_idx}" - - # resnets - for resnet_idx, resnet in enumerate(up_block.resnets): - diffusers_resnet_prefix = f"{diffusers_up_block_prefix}.resnets.{resnet_idx}" - resnet_prefix = f"{up_block_prefix}.block.{resnet_idx}" - - diffusers_checkpoint.update( - movq_resnet_to_diffusers_checkpoint_spatial_norm( - resnet, checkpoint, diffusers_resnet_prefix=diffusers_resnet_prefix, resnet_prefix=resnet_prefix - ) - ) - - # upsample - - # there is no up sample on the last up block - if diffusers_up_block_idx != len(model.decoder.up_blocks) - 1: - # There's a single upsample in the VQ-diffusion checkpoint but a list of downsamples - # in the diffusers model. 
- diffusers_downsample_prefix = f"{diffusers_up_block_prefix}.upsamplers.0.conv" - downsample_prefix = f"{up_block_prefix}.upsample.conv" - diffusers_checkpoint.update( - { - f"{diffusers_downsample_prefix}.weight": checkpoint[f"{downsample_prefix}.weight"], - f"{diffusers_downsample_prefix}.bias": checkpoint[f"{downsample_prefix}.bias"], - } - ) - - # attentions - - if hasattr(up_block, "attentions"): - for attention_idx, _ in enumerate(up_block.attentions): - diffusers_attention_prefix = f"{diffusers_up_block_prefix}.attentions.{attention_idx}" - attention_prefix = f"{up_block_prefix}.attn.{attention_idx}" - diffusers_checkpoint.update( - movq_attention_to_diffusers_checkpoint_spatial_norm( - checkpoint, - diffusers_attention_prefix=diffusers_attention_prefix, - attention_prefix=attention_prefix, - ) - ) - - # mid block - - # mid block attentions - - # There is a single hardcoded attention block in the middle of the VQ-diffusion decoder - diffusers_attention_prefix = "decoder.mid_block.attentions.0" - attention_prefix = "decoder.mid.attn_1" - diffusers_checkpoint.update( - movq_attention_to_diffusers_checkpoint_spatial_norm( - checkpoint, diffusers_attention_prefix=diffusers_attention_prefix, attention_prefix=attention_prefix - ) - ) - - # mid block resnets - - for diffusers_resnet_idx, resnet in enumerate(model.encoder.mid_block.resnets): - diffusers_resnet_prefix = f"decoder.mid_block.resnets.{diffusers_resnet_idx}" - - # the hardcoded prefixes to `block_` are 1 and 2 - orig_resnet_idx = diffusers_resnet_idx + 1 - # There are two hardcoded resnets in the middle of the VQ-diffusion decoder - resnet_prefix = f"decoder.mid.block_{orig_resnet_idx}" - - diffusers_checkpoint.update( - movq_resnet_to_diffusers_checkpoint_spatial_norm( - resnet, checkpoint, diffusers_resnet_prefix=diffusers_resnet_prefix, resnet_prefix=resnet_prefix - ) - ) - - diffusers_checkpoint.update( - { - # conv_norm_out - "decoder.conv_norm_out.norm_layer.weight": checkpoint["decoder.norm_out.norm_layer.weight"], - "decoder.conv_norm_out.norm_layer.bias": checkpoint["decoder.norm_out.norm_layer.bias"], - "decoder.conv_norm_out.conv_y.weight": checkpoint["decoder.norm_out.conv_y.weight"], - "decoder.conv_norm_out.conv_y.bias": checkpoint["decoder.norm_out.conv_y.bias"], - "decoder.conv_norm_out.conv_b.weight": checkpoint["decoder.norm_out.conv_b.weight"], - "decoder.conv_norm_out.conv_b.bias": checkpoint["decoder.norm_out.conv_b.bias"], - # conv_out - "decoder.conv_out.weight": checkpoint["decoder.conv_out.weight"], - "decoder.conv_out.bias": checkpoint["decoder.conv_out.bias"], - } - ) - - return diffusers_checkpoint - - -def movq_resnet_to_diffusers_checkpoint(resnet, checkpoint, *, diffusers_resnet_prefix, resnet_prefix): - rv = { - # norm1 - f"{diffusers_resnet_prefix}.norm1.weight": checkpoint[f"{resnet_prefix}.norm1.weight"], - f"{diffusers_resnet_prefix}.norm1.bias": checkpoint[f"{resnet_prefix}.norm1.bias"], - # conv1 - f"{diffusers_resnet_prefix}.conv1.weight": checkpoint[f"{resnet_prefix}.conv1.weight"], - f"{diffusers_resnet_prefix}.conv1.bias": checkpoint[f"{resnet_prefix}.conv1.bias"], - # norm2 - f"{diffusers_resnet_prefix}.norm2.weight": checkpoint[f"{resnet_prefix}.norm2.weight"], - f"{diffusers_resnet_prefix}.norm2.bias": checkpoint[f"{resnet_prefix}.norm2.bias"], - # conv2 - f"{diffusers_resnet_prefix}.conv2.weight": checkpoint[f"{resnet_prefix}.conv2.weight"], - f"{diffusers_resnet_prefix}.conv2.bias": checkpoint[f"{resnet_prefix}.conv2.bias"], - } - - if resnet.conv_shortcut is not None: - rv.update( 
- { - f"{diffusers_resnet_prefix}.conv_shortcut.weight": checkpoint[f"{resnet_prefix}.nin_shortcut.weight"], - f"{diffusers_resnet_prefix}.conv_shortcut.bias": checkpoint[f"{resnet_prefix}.nin_shortcut.bias"], - } - ) - - return rv - - -def movq_resnet_to_diffusers_checkpoint_spatial_norm(resnet, checkpoint, *, diffusers_resnet_prefix, resnet_prefix): - rv = { - # norm1 - f"{diffusers_resnet_prefix}.norm1.norm_layer.weight": checkpoint[f"{resnet_prefix}.norm1.norm_layer.weight"], - f"{diffusers_resnet_prefix}.norm1.norm_layer.bias": checkpoint[f"{resnet_prefix}.norm1.norm_layer.bias"], - f"{diffusers_resnet_prefix}.norm1.conv_y.weight": checkpoint[f"{resnet_prefix}.norm1.conv_y.weight"], - f"{diffusers_resnet_prefix}.norm1.conv_y.bias": checkpoint[f"{resnet_prefix}.norm1.conv_y.bias"], - f"{diffusers_resnet_prefix}.norm1.conv_b.weight": checkpoint[f"{resnet_prefix}.norm1.conv_b.weight"], - f"{diffusers_resnet_prefix}.norm1.conv_b.bias": checkpoint[f"{resnet_prefix}.norm1.conv_b.bias"], - # conv1 - f"{diffusers_resnet_prefix}.conv1.weight": checkpoint[f"{resnet_prefix}.conv1.weight"], - f"{diffusers_resnet_prefix}.conv1.bias": checkpoint[f"{resnet_prefix}.conv1.bias"], - # norm2 - f"{diffusers_resnet_prefix}.norm2.norm_layer.weight": checkpoint[f"{resnet_prefix}.norm2.norm_layer.weight"], - f"{diffusers_resnet_prefix}.norm2.norm_layer.bias": checkpoint[f"{resnet_prefix}.norm2.norm_layer.bias"], - f"{diffusers_resnet_prefix}.norm2.conv_y.weight": checkpoint[f"{resnet_prefix}.norm2.conv_y.weight"], - f"{diffusers_resnet_prefix}.norm2.conv_y.bias": checkpoint[f"{resnet_prefix}.norm2.conv_y.bias"], - f"{diffusers_resnet_prefix}.norm2.conv_b.weight": checkpoint[f"{resnet_prefix}.norm2.conv_b.weight"], - f"{diffusers_resnet_prefix}.norm2.conv_b.bias": checkpoint[f"{resnet_prefix}.norm2.conv_b.bias"], - # conv2 - f"{diffusers_resnet_prefix}.conv2.weight": checkpoint[f"{resnet_prefix}.conv2.weight"], - f"{diffusers_resnet_prefix}.conv2.bias": checkpoint[f"{resnet_prefix}.conv2.bias"], - } - - if resnet.conv_shortcut is not None: - rv.update( - { - f"{diffusers_resnet_prefix}.conv_shortcut.weight": checkpoint[f"{resnet_prefix}.nin_shortcut.weight"], - f"{diffusers_resnet_prefix}.conv_shortcut.bias": checkpoint[f"{resnet_prefix}.nin_shortcut.bias"], - } - ) - - return rv - - -def movq_attention_to_diffusers_checkpoint(checkpoint, *, diffusers_attention_prefix, attention_prefix): - return { - # norm - f"{diffusers_attention_prefix}.group_norm.weight": checkpoint[f"{attention_prefix}.norm.weight"], - f"{diffusers_attention_prefix}.group_norm.bias": checkpoint[f"{attention_prefix}.norm.bias"], - # query - f"{diffusers_attention_prefix}.to_q.weight": checkpoint[f"{attention_prefix}.q.weight"][:, :, 0, 0], - f"{diffusers_attention_prefix}.to_q.bias": checkpoint[f"{attention_prefix}.q.bias"], - # key - f"{diffusers_attention_prefix}.to_k.weight": checkpoint[f"{attention_prefix}.k.weight"][:, :, 0, 0], - f"{diffusers_attention_prefix}.to_k.bias": checkpoint[f"{attention_prefix}.k.bias"], - # value - f"{diffusers_attention_prefix}.to_v.weight": checkpoint[f"{attention_prefix}.v.weight"][:, :, 0, 0], - f"{diffusers_attention_prefix}.to_v.bias": checkpoint[f"{attention_prefix}.v.bias"], - # proj_attn - f"{diffusers_attention_prefix}.to_out.0.weight": checkpoint[f"{attention_prefix}.proj_out.weight"][:, :, 0, 0], - f"{diffusers_attention_prefix}.to_out.0.bias": checkpoint[f"{attention_prefix}.proj_out.bias"], - } - - -def movq_attention_to_diffusers_checkpoint_spatial_norm(checkpoint, *, 
diffusers_attention_prefix, attention_prefix): - return { - # norm - f"{diffusers_attention_prefix}.spatial_norm.norm_layer.weight": checkpoint[ - f"{attention_prefix}.norm.norm_layer.weight" - ], - f"{diffusers_attention_prefix}.spatial_norm.norm_layer.bias": checkpoint[ - f"{attention_prefix}.norm.norm_layer.bias" - ], - f"{diffusers_attention_prefix}.spatial_norm.conv_y.weight": checkpoint[ - f"{attention_prefix}.norm.conv_y.weight" - ], - f"{diffusers_attention_prefix}.spatial_norm.conv_y.bias": checkpoint[f"{attention_prefix}.norm.conv_y.bias"], - f"{diffusers_attention_prefix}.spatial_norm.conv_b.weight": checkpoint[ - f"{attention_prefix}.norm.conv_b.weight" - ], - f"{diffusers_attention_prefix}.spatial_norm.conv_b.bias": checkpoint[f"{attention_prefix}.norm.conv_b.bias"], - # query - f"{diffusers_attention_prefix}.to_q.weight": checkpoint[f"{attention_prefix}.q.weight"][:, :, 0, 0], - f"{diffusers_attention_prefix}.to_q.bias": checkpoint[f"{attention_prefix}.q.bias"], - # key - f"{diffusers_attention_prefix}.to_k.weight": checkpoint[f"{attention_prefix}.k.weight"][:, :, 0, 0], - f"{diffusers_attention_prefix}.to_k.bias": checkpoint[f"{attention_prefix}.k.bias"], - # value - f"{diffusers_attention_prefix}.to_v.weight": checkpoint[f"{attention_prefix}.v.weight"][:, :, 0, 0], - f"{diffusers_attention_prefix}.to_v.bias": checkpoint[f"{attention_prefix}.v.bias"], - # proj_attn - f"{diffusers_attention_prefix}.to_out.0.weight": checkpoint[f"{attention_prefix}.proj_out.weight"][:, :, 0, 0], - f"{diffusers_attention_prefix}.to_out.0.bias": checkpoint[f"{attention_prefix}.proj_out.bias"], - } - - -def movq_original_checkpoint_to_diffusers_checkpoint(model, checkpoint): - diffusers_checkpoint = {} - diffusers_checkpoint.update(movq_encoder_to_diffusers_checkpoint(model, checkpoint)) - - # quant_conv - - diffusers_checkpoint.update( - { - "quant_conv.weight": checkpoint["quant_conv.weight"], - "quant_conv.bias": checkpoint["quant_conv.bias"], - } - ) - - # quantize - diffusers_checkpoint.update({"quantize.embedding.weight": checkpoint["quantize.embedding.weight"]}) - - # post_quant_conv - diffusers_checkpoint.update( - { - "post_quant_conv.weight": checkpoint["post_quant_conv.weight"], - "post_quant_conv.bias": checkpoint["post_quant_conv.bias"], - } - ) - - # decoder - diffusers_checkpoint.update(movq_decoder_to_diffusers_checkpoint(model, checkpoint)) - - return diffusers_checkpoint - - -def movq(*, args, checkpoint_map_location): - print("loading movq") - - movq_checkpoint = torch.load(args.movq_checkpoint_path, map_location=checkpoint_map_location) - - movq_model = movq_model_from_original_config() - - movq_diffusers_checkpoint = movq_original_checkpoint_to_diffusers_checkpoint(movq_model, movq_checkpoint) - - del movq_checkpoint - - load_checkpoint_to_model(movq_diffusers_checkpoint, movq_model, strict=True) - - print("done loading movq") - - return movq_model - - -def load_checkpoint_to_model(checkpoint, model, strict=False): - with tempfile.NamedTemporaryFile(delete=False) as file: - torch.save(checkpoint, file.name) - del checkpoint - if strict: - model.load_state_dict(torch.load(file.name), strict=True) - else: - load_checkpoint_and_dispatch(model, file.name, device_map="auto") - os.remove(file.name) - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - - parser.add_argument("--dump_path", default=None, type=str, required=True, help="Path to the output model.") - - parser.add_argument( - "--prior_checkpoint_path", - default=None, - type=str, - required=False, - 
help="Path to the prior checkpoint to convert.", - ) - parser.add_argument( - "--clip_stat_path", - default=None, - type=str, - required=False, - help="Path to the clip stats checkpoint to convert.", - ) - parser.add_argument( - "--text2img_checkpoint_path", - default=None, - type=str, - required=False, - help="Path to the text2img checkpoint to convert.", - ) - parser.add_argument( - "--movq_checkpoint_path", - default=None, - type=str, - required=False, - help="Path to the text2img checkpoint to convert.", - ) - parser.add_argument( - "--inpaint_text2img_checkpoint_path", - default=None, - type=str, - required=False, - help="Path to the inpaint text2img checkpoint to convert.", - ) - parser.add_argument( - "--checkpoint_load_device", - default="cpu", - type=str, - required=False, - help="The device passed to `map_location` when loading checkpoints.", - ) - - parser.add_argument( - "--debug", - default=None, - type=str, - required=False, - help="Only run a specific stage of the convert script. Used for debugging", - ) - - args = parser.parse_args() - - print(f"loading checkpoints to {args.checkpoint_load_device}") - - checkpoint_map_location = torch.device(args.checkpoint_load_device) - - if args.debug is not None: - print(f"debug: only executing {args.debug}") - - if args.debug is None: - print("to-do") - elif args.debug == "prior": - prior_model = prior(args=args, checkpoint_map_location=checkpoint_map_location) - prior_model.save_pretrained(args.dump_path) - elif args.debug == "text2img": - unet_model = text2img(args=args, checkpoint_map_location=checkpoint_map_location) - unet_model.save_pretrained(f"{args.dump_path}/unet") - elif args.debug == "inpaint_text2img": - inpaint_unet_model = inpaint_text2img(args=args, checkpoint_map_location=checkpoint_map_location) - inpaint_unet_model.save_pretrained(f"{args.dump_path}/inpaint_unet") - elif args.debug == "decoder": - decoder = movq(args=args, checkpoint_map_location=checkpoint_map_location) - decoder.save_pretrained(f"{args.dump_path}/decoder") - else: - raise ValueError(f"unknown debug value : {args.debug}") diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/hrnet/fcn_hr18_512x512_40k_voc12aug.py b/spaces/Andy1621/uniformer_image_segmentation/configs/hrnet/fcn_hr18_512x512_40k_voc12aug.py deleted file mode 100644 index d74e95943afca04ba4073e411e0b713985384129..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/hrnet/fcn_hr18_512x512_40k_voc12aug.py +++ /dev/null @@ -1,5 +0,0 @@ -_base_ = [ - '../_base_/models/fcn_hr18.py', '../_base_/datasets/pascal_voc12_aug.py', - '../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py' -] -model = dict(decode_head=dict(num_classes=21)) diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/midas/midas/vit.py b/spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/midas/midas/vit.py deleted file mode 100644 index ea46b1be88b261b0dec04f3da0256f5f66f88a74..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/midas/midas/vit.py +++ /dev/null @@ -1,491 +0,0 @@ -import torch -import torch.nn as nn -import timm -import types -import math -import torch.nn.functional as F - - -class Slice(nn.Module): - def __init__(self, start_index=1): - super(Slice, self).__init__() - self.start_index = start_index - - def forward(self, x): - return x[:, self.start_index :] - - -class AddReadout(nn.Module): - def __init__(self, start_index=1): - super(AddReadout, self).__init__() - 
self.start_index = start_index - - def forward(self, x): - if self.start_index == 2: - readout = (x[:, 0] + x[:, 1]) / 2 - else: - readout = x[:, 0] - return x[:, self.start_index :] + readout.unsqueeze(1) - - -class ProjectReadout(nn.Module): - def __init__(self, in_features, start_index=1): - super(ProjectReadout, self).__init__() - self.start_index = start_index - - self.project = nn.Sequential(nn.Linear(2 * in_features, in_features), nn.GELU()) - - def forward(self, x): - readout = x[:, 0].unsqueeze(1).expand_as(x[:, self.start_index :]) - features = torch.cat((x[:, self.start_index :], readout), -1) - - return self.project(features) - - -class Transpose(nn.Module): - def __init__(self, dim0, dim1): - super(Transpose, self).__init__() - self.dim0 = dim0 - self.dim1 = dim1 - - def forward(self, x): - x = x.transpose(self.dim0, self.dim1) - return x - - -def forward_vit(pretrained, x): - b, c, h, w = x.shape - - glob = pretrained.model.forward_flex(x) - - layer_1 = pretrained.activations["1"] - layer_2 = pretrained.activations["2"] - layer_3 = pretrained.activations["3"] - layer_4 = pretrained.activations["4"] - - layer_1 = pretrained.act_postprocess1[0:2](layer_1) - layer_2 = pretrained.act_postprocess2[0:2](layer_2) - layer_3 = pretrained.act_postprocess3[0:2](layer_3) - layer_4 = pretrained.act_postprocess4[0:2](layer_4) - - unflatten = nn.Sequential( - nn.Unflatten( - 2, - torch.Size( - [ - h // pretrained.model.patch_size[1], - w // pretrained.model.patch_size[0], - ] - ), - ) - ) - - if layer_1.ndim == 3: - layer_1 = unflatten(layer_1) - if layer_2.ndim == 3: - layer_2 = unflatten(layer_2) - if layer_3.ndim == 3: - layer_3 = unflatten(layer_3) - if layer_4.ndim == 3: - layer_4 = unflatten(layer_4) - - layer_1 = pretrained.act_postprocess1[3 : len(pretrained.act_postprocess1)](layer_1) - layer_2 = pretrained.act_postprocess2[3 : len(pretrained.act_postprocess2)](layer_2) - layer_3 = pretrained.act_postprocess3[3 : len(pretrained.act_postprocess3)](layer_3) - layer_4 = pretrained.act_postprocess4[3 : len(pretrained.act_postprocess4)](layer_4) - - return layer_1, layer_2, layer_3, layer_4 - - -def _resize_pos_embed(self, posemb, gs_h, gs_w): - posemb_tok, posemb_grid = ( - posemb[:, : self.start_index], - posemb[0, self.start_index :], - ) - - gs_old = int(math.sqrt(len(posemb_grid))) - - posemb_grid = posemb_grid.reshape(1, gs_old, gs_old, -1).permute(0, 3, 1, 2) - posemb_grid = F.interpolate(posemb_grid, size=(gs_h, gs_w), mode="bilinear") - posemb_grid = posemb_grid.permute(0, 2, 3, 1).reshape(1, gs_h * gs_w, -1) - - posemb = torch.cat([posemb_tok, posemb_grid], dim=1) - - return posemb - - -def forward_flex(self, x): - b, c, h, w = x.shape - - pos_embed = self._resize_pos_embed( - self.pos_embed, h // self.patch_size[1], w // self.patch_size[0] - ) - - B = x.shape[0] - - if hasattr(self.patch_embed, "backbone"): - x = self.patch_embed.backbone(x) - if isinstance(x, (list, tuple)): - x = x[-1] # last feature if backbone outputs list/tuple of features - - x = self.patch_embed.proj(x).flatten(2).transpose(1, 2) - - if getattr(self, "dist_token", None) is not None: - cls_tokens = self.cls_token.expand( - B, -1, -1 - ) # stole cls_tokens impl from Phil Wang, thanks - dist_token = self.dist_token.expand(B, -1, -1) - x = torch.cat((cls_tokens, dist_token, x), dim=1) - else: - cls_tokens = self.cls_token.expand( - B, -1, -1 - ) # stole cls_tokens impl from Phil Wang, thanks - x = torch.cat((cls_tokens, x), dim=1) - - x = x + pos_embed - x = self.pos_drop(x) - - for blk in self.blocks: - x 
= blk(x) - - x = self.norm(x) - - return x - - -activations = {} - - -def get_activation(name): - def hook(model, input, output): - activations[name] = output - - return hook - - -def get_readout_oper(vit_features, features, use_readout, start_index=1): - if use_readout == "ignore": - readout_oper = [Slice(start_index)] * len(features) - elif use_readout == "add": - readout_oper = [AddReadout(start_index)] * len(features) - elif use_readout == "project": - readout_oper = [ - ProjectReadout(vit_features, start_index) for out_feat in features - ] - else: - assert ( - False - ), "wrong operation for readout token, use_readout can be 'ignore', 'add', or 'project'" - - return readout_oper - - -def _make_vit_b16_backbone( - model, - features=[96, 192, 384, 768], - size=[384, 384], - hooks=[2, 5, 8, 11], - vit_features=768, - use_readout="ignore", - start_index=1, -): - pretrained = nn.Module() - - pretrained.model = model - pretrained.model.blocks[hooks[0]].register_forward_hook(get_activation("1")) - pretrained.model.blocks[hooks[1]].register_forward_hook(get_activation("2")) - pretrained.model.blocks[hooks[2]].register_forward_hook(get_activation("3")) - pretrained.model.blocks[hooks[3]].register_forward_hook(get_activation("4")) - - pretrained.activations = activations - - readout_oper = get_readout_oper(vit_features, features, use_readout, start_index) - - # 32, 48, 136, 384 - pretrained.act_postprocess1 = nn.Sequential( - readout_oper[0], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[0], - kernel_size=1, - stride=1, - padding=0, - ), - nn.ConvTranspose2d( - in_channels=features[0], - out_channels=features[0], - kernel_size=4, - stride=4, - padding=0, - bias=True, - dilation=1, - groups=1, - ), - ) - - pretrained.act_postprocess2 = nn.Sequential( - readout_oper[1], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[1], - kernel_size=1, - stride=1, - padding=0, - ), - nn.ConvTranspose2d( - in_channels=features[1], - out_channels=features[1], - kernel_size=2, - stride=2, - padding=0, - bias=True, - dilation=1, - groups=1, - ), - ) - - pretrained.act_postprocess3 = nn.Sequential( - readout_oper[2], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[2], - kernel_size=1, - stride=1, - padding=0, - ), - ) - - pretrained.act_postprocess4 = nn.Sequential( - readout_oper[3], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[3], - kernel_size=1, - stride=1, - padding=0, - ), - nn.Conv2d( - in_channels=features[3], - out_channels=features[3], - kernel_size=3, - stride=2, - padding=1, - ), - ) - - pretrained.model.start_index = start_index - pretrained.model.patch_size = [16, 16] - - # We inject this function into the VisionTransformer instances so that - # we can use it with interpolated position embeddings without modifying the library source. 
- pretrained.model.forward_flex = types.MethodType(forward_flex, pretrained.model) - pretrained.model._resize_pos_embed = types.MethodType( - _resize_pos_embed, pretrained.model - ) - - return pretrained - - -def _make_pretrained_vitl16_384(pretrained, use_readout="ignore", hooks=None): - model = timm.create_model("vit_large_patch16_384", pretrained=pretrained) - - hooks = [5, 11, 17, 23] if hooks == None else hooks - return _make_vit_b16_backbone( - model, - features=[256, 512, 1024, 1024], - hooks=hooks, - vit_features=1024, - use_readout=use_readout, - ) - - -def _make_pretrained_vitb16_384(pretrained, use_readout="ignore", hooks=None): - model = timm.create_model("vit_base_patch16_384", pretrained=pretrained) - - hooks = [2, 5, 8, 11] if hooks == None else hooks - return _make_vit_b16_backbone( - model, features=[96, 192, 384, 768], hooks=hooks, use_readout=use_readout - ) - - -def _make_pretrained_deitb16_384(pretrained, use_readout="ignore", hooks=None): - model = timm.create_model("vit_deit_base_patch16_384", pretrained=pretrained) - - hooks = [2, 5, 8, 11] if hooks == None else hooks - return _make_vit_b16_backbone( - model, features=[96, 192, 384, 768], hooks=hooks, use_readout=use_readout - ) - - -def _make_pretrained_deitb16_distil_384(pretrained, use_readout="ignore", hooks=None): - model = timm.create_model( - "vit_deit_base_distilled_patch16_384", pretrained=pretrained - ) - - hooks = [2, 5, 8, 11] if hooks == None else hooks - return _make_vit_b16_backbone( - model, - features=[96, 192, 384, 768], - hooks=hooks, - use_readout=use_readout, - start_index=2, - ) - - -def _make_vit_b_rn50_backbone( - model, - features=[256, 512, 768, 768], - size=[384, 384], - hooks=[0, 1, 8, 11], - vit_features=768, - use_vit_only=False, - use_readout="ignore", - start_index=1, -): - pretrained = nn.Module() - - pretrained.model = model - - if use_vit_only == True: - pretrained.model.blocks[hooks[0]].register_forward_hook(get_activation("1")) - pretrained.model.blocks[hooks[1]].register_forward_hook(get_activation("2")) - else: - pretrained.model.patch_embed.backbone.stages[0].register_forward_hook( - get_activation("1") - ) - pretrained.model.patch_embed.backbone.stages[1].register_forward_hook( - get_activation("2") - ) - - pretrained.model.blocks[hooks[2]].register_forward_hook(get_activation("3")) - pretrained.model.blocks[hooks[3]].register_forward_hook(get_activation("4")) - - pretrained.activations = activations - - readout_oper = get_readout_oper(vit_features, features, use_readout, start_index) - - if use_vit_only == True: - pretrained.act_postprocess1 = nn.Sequential( - readout_oper[0], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[0], - kernel_size=1, - stride=1, - padding=0, - ), - nn.ConvTranspose2d( - in_channels=features[0], - out_channels=features[0], - kernel_size=4, - stride=4, - padding=0, - bias=True, - dilation=1, - groups=1, - ), - ) - - pretrained.act_postprocess2 = nn.Sequential( - readout_oper[1], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[1], - kernel_size=1, - stride=1, - padding=0, - ), - nn.ConvTranspose2d( - in_channels=features[1], - out_channels=features[1], - kernel_size=2, - stride=2, - padding=0, - bias=True, - dilation=1, - groups=1, - ), - ) - else: - pretrained.act_postprocess1 = nn.Sequential( - nn.Identity(), nn.Identity(), nn.Identity() - ) - 
pretrained.act_postprocess2 = nn.Sequential( - nn.Identity(), nn.Identity(), nn.Identity() - ) - - pretrained.act_postprocess3 = nn.Sequential( - readout_oper[2], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[2], - kernel_size=1, - stride=1, - padding=0, - ), - ) - - pretrained.act_postprocess4 = nn.Sequential( - readout_oper[3], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[3], - kernel_size=1, - stride=1, - padding=0, - ), - nn.Conv2d( - in_channels=features[3], - out_channels=features[3], - kernel_size=3, - stride=2, - padding=1, - ), - ) - - pretrained.model.start_index = start_index - pretrained.model.patch_size = [16, 16] - - # We inject this function into the VisionTransformer instances so that - # we can use it with interpolated position embeddings without modifying the library source. - pretrained.model.forward_flex = types.MethodType(forward_flex, pretrained.model) - - # We inject this function into the VisionTransformer instances so that - # we can use it with interpolated position embeddings without modifying the library source. - pretrained.model._resize_pos_embed = types.MethodType( - _resize_pos_embed, pretrained.model - ) - - return pretrained - - -def _make_pretrained_vitb_rn50_384( - pretrained, use_readout="ignore", hooks=None, use_vit_only=False -): - model = timm.create_model("vit_base_resnet50_384", pretrained=pretrained) - - hooks = [0, 1, 8, 11] if hooks == None else hooks - return _make_vit_b_rn50_backbone( - model, - features=[256, 512, 768, 768], - size=[384, 384], - hooks=hooks, - use_vit_only=use_vit_only, - use_readout=use_readout, - ) diff --git a/spaces/Benson/text-generation/Examples/Apk Dream League Soccer Classic.md b/spaces/Benson/text-generation/Examples/Apk Dream League Soccer Classic.md deleted file mode 100644 index 54bfae7bd8ff559c0dc3b4d97e1ff4d8669471dc..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Apk Dream League Soccer Classic.md +++ /dev/null @@ -1,149 +0,0 @@ -
-

Kingdom Defense Mod APK: un juego de defensa de torres de estilo píxel

-

Si usted es un fan de los juegos de torres de defensa, es posible que desee echar un vistazo a Kingdom Defense mod apk. Este es un juego de torres de defensa de estilo píxel que te reta a construir y mejorar varias torres de defensa para bloquear el ataque del enemigo. También puedes usar soldados, trampas y héroes para ayudarte a defender tu reino. En este artículo, te diremos qué es Kingdom Defense, cómo descargar e instalar Kingdom Defense mod apk, y algunos consejos y trucos para jugarlo.

-

¿Qué es la Defensa del Reino?

-

Kingdom Defense es un juego de estrategia casual desarrollado por Little Games Ltd y publicado por Little Game. Fue lanzado en Steam el 23 de diciembre de 2022. El juego tiene un estilo de píxeles y una simple operación de clic. El juego cuenta con tres niveles, tres tipos de torres de defensa básicas y diez tipos de actualizaciones de torres de defensa. El juego también tiene una trama y una perspectiva del mundo que se ampliará en el futuro.

-

apk dream league soccer classic


Download: https://bltlly.com/2v6JIO



-

El juego y las características de Kingdom Defense

-

El modo de juego de Kingdom Defense es similar al de otros juegos de torres de defensa. Tienes que colocar tus torres de defensa a lo largo del camino que el enemigo tomará para llegar a tu castillo. También puedes utilizar soldados, trampas y héroes para apoyar tu defensa. Cada tipo de torre de defensa tiene diferentes atributos y métodos de ataque. Es necesario combinar de forma razonable las diferentes torres de defensa para aprovechar al máximo su poder. También puedes mejorar tus torres de defensa para hacerlas más poderosas.

-

Cada vez que un enemigo alcanza el punto final, un punto será deducido. Cuando el puntaje es cero, el juego fallará. Tienes que sobrevivir a las oleadas de enemigos y completar el nivel. El juego tiene tres niveles con diferentes dificultades y entornos. También puedes desbloquear logros y recoger recompensas mientras juegas.

-

Los beneficios de jugar Kingdom Defense mod apk

- -

¿Cómo descargar e instalar Kingdom Defense mod apk?

-

Si desea jugar Kingdom Defense mod apk, es necesario descargar e instalar en su dispositivo. Estos son los pasos para hacerlo:

-

Los pasos para descargar e instalar Kingdom Defense mod apk

-
    -
1. Ir a [este enlace]( 1 ) y descargar el archivo apk mod de Kingdom Defense.
2. Ir a la configuración del dispositivo y permitir la instalación de aplicaciones de fuentes desconocidas.
3. Busque el archivo descargado en su administrador de archivos y toque en él para instalarlo.
4. Espere a que el proceso de instalación termine y luego inicie el juego.
5. Disfruta jugando Kingdom Defense mod apk con dinero ilimitado, gemas, energía y más.
-

Las precauciones y requisitos para la defensa del reino mod apk

-

Antes de descargar e instalar Kingdom Defense mod apk, es necesario tomar algunas precauciones y cumplir con algunos requisitos. Estos son algunos de ellos:

-
    -
• Necesitas tener suficiente espacio de almacenamiento en tu dispositivo para descargar e instalar el juego.
• Necesitas tener una versión compatible de Android para ejecutar el juego. El requisito mínimo es Android 4.4 o superior.
• Necesitas tener una conexión a Internet estable para descargar el juego y acceder a algunas de sus características.
• Necesitas ser consciente de los riesgos de usar archivos apk mod, tales como malware, virus o prohibiciones. Solo debes descargar archivos apk mod de fuentes de confianza y escanearlos con software antivirus antes de instalarlos.
• Necesitas hacer una copia de seguridad de los datos originales del juego antes de instalar el archivo apk mod, en caso de que desees restaurarlo más tarde.
-

Consejos y trucos para jugar Kingdom Defense

-

Kingdom Defense es un juego divertido y desafiante que requiere estrategia y habilidad. Aquí hay algunos consejos y trucos para ayudarte a jugar mejor Kingdom Defense:

-

Cómo usar diferentes torres de defensa y actualizaciones

| Torre/Actualización | Rango | Daño | Velocidad | Costo | Capacidad |
| --- | --- | --- | --- | --- | --- |
| Torre Archer | Medio | Baja | Rápido | Barato | Ninguno |
| Torre de cañón | Corto | Alta | Lento | Caro | Ninguno |
| Torre mágica | Largo | Medio | Medio | Moderado | Ninguno |
| Actualización de la Torre Archer 1: Torre de ballesta | Medio | Bajo-Medio | Muy rápido | Moderado-Barato | Pierce: ataca a múltiples enemigos en una línea. |
| Actualización de la Torre de Cañón 1: Torre de Bombas | Corto-Medio | Muy alto | Lento-Muy lento | Moderado-Caro | Splash: ataca a varios enemigos en un área. |
| Actualización de la Torre Mágica 1: Torre de hielo | Largo | Medio | Medio | Moderado | Congelar: ralentiza el movimiento del enemigo y la velocidad de ataque. |
| Actualización de la Torre Archer 2: Torre de francotirador | Muy largo | Alta | Lento | Caro | Crítico: inflige daño adicional con cierta probabilidad. |
| Actualización de la Torre de Cañón 2: Torre de misiles | Medio-Largo | Muy alto | Medio | Muy caro | Homing: sigue al enemigo hasta que le alcance o falle. |
| Actualización de la Torre Mágica 2: Torre de fuego | Medio-Largo | High-Medium | Fast-Medium | Caro-Moderado | Quemar: inflige daño continuo a lo largo del tiempo. |

Cómo administrar sus unidades y recursos

-

Además de las torres de defensa, también puedes usar unidades y recursos para ayudarte a defender tu reino. Las unidades son soldados, trampas y héroes que puedes desplegar en el campo de batalla. Los recursos son dinero, gemas y energía que puedes usar para comprar y mejorar tus unidades y torres. Aquí hay algunos consejos sobre cómo administrar sus unidades y recursos:

-
    -
  • Puedes desplegar soldados en el camino para bloquear el avance del enemigo. Los soldados tienen diferentes habilidades y costos. Puedes actualizar a tus soldados para hacerlos más fuertes y duraderos.
  • -
  • Puedes desplegar trampas en el camino para dañar u obstaculizar al enemigo. Las trampas tienen diferentes efectos y costos. Puede actualizar sus trampas para que sean más eficaces y reutilizables.
  • -
  • Puedes desplegar héroes en el camino para luchar contra el enemigo. Los héroes tienen habilidades y atributos especiales. Puedes actualizar a tus héroes para hacerlos más poderosos y desbloquear nuevas habilidades.
  • -
  • Puedes ganar dinero matando enemigos y completando niveles. Puedes usar dinero para comprar y mejorar tus unidades y torres.
  • -
  • Puedes ganar gemas completando logros y recogiendo recompensas. Puedes usar gemas para comprar objetos especiales y potenciadores.
  • -
  • Puedes ganar energía jugando el juego o viendo anuncios. Puedes usar energía para comenzar un nivel o usar la habilidad de un héroe.
  • -
  • Usted debe equilibrar su gasto y ahorro de sus recursos. No debe gastar demasiado en una unidad o torre, pero tampoco debe ahorrar demasiado para más adelante. También debe usar sus recursos sabiamente y estratégicamente.
Cómo completar niveles y desafíos

    -

    Kingdom Defense tiene tres niveles con diferentes dificultades y entornos. Tienes que completar cada nivel para desbloquear el siguiente. Cada nivel tiene una serie de oleadas de enemigos que tienes que sobrevivir. También puedes elegir el nivel de dificultad de fácil, normal o difícil. Cuanto mayor sea la dificultad, más enemigos y recompensas encontrarás.

    - -

    Deberías intentar completar niveles y desafíos tanto como sea posible. Te ayudarán a mejorar tu experiencia y habilidades de juego. También te darán más dinero, gemas, energía y objetos que puedes usar para mejorar tu defensa.

    -

    -

    Conclusión

    -

    Kingdom Defense mod apk es un juego de defensa de torre de estilo píxel que le permite construir y actualizar varias torres de defensa para bloquear el ataque del enemigo. También puedes usar soldados, trampas y héroes para ayudarte a defender tu reino. Puede descargar e instalar Kingdom Defense mod apk para disfrutar del juego con dinero ilimitado, gemas, energía y más. También puedes utilizar algunos consejos y trucos para jugar mejor Kingdom Defense. Kingdom Defense es un juego divertido y desafiante que te mantendrá entretenido durante horas.

    -

    Preguntas frecuentes

    -

    Aquí hay algunas preguntas frecuentes sobre Kingdom Defense mod apk:

    -
      -
1. P: ¿Es Kingdom Defense mod apk seguro de usar?
   R: Kingdom Defense mod apk es generalmente seguro de usar, siempre y cuando se descargue desde una fuente de confianza y se escanee con software antivirus antes de instalarlo. Sin embargo, debes ser consciente de los riesgos de usar archivos apk mod, tales como malware, virus o prohibiciones. También debes hacer una copia de seguridad de los datos originales del juego antes de instalar el archivo apk mod, en caso de que desees restaurarlo más tarde.
2. P: ¿Cuál es la diferencia entre Kingdom Defense y Kingdom Defense 2?
   R: Kingdom Defense 2 es la secuela de Kingdom Defense. Tiene más niveles, más torres, más mejoras, más enemigos, más héroes, más objetos y más características que Kingdom Defense. También tiene gráficos y efectos de sonido mejorados. Sin embargo, Kingdom Defense 2 todavía no está disponible como archivo apk mod.
3. P: ¿Cómo puedo obtener más gemas en Kingdom Defense?
4. P: ¿Cómo puedo usar héroes en Kingdom Defense?
   R: Puedes usar héroes en Kingdom Defense desplegándolos en el campo de batalla. Los héroes tienen habilidades y atributos especiales que pueden ayudarte a luchar contra el enemigo. Puedes mejorar a tus héroes para hacerlos más poderosos y desbloquear nuevas habilidades. También puedes usar energía para activar la habilidad de tu héroe durante el juego.
5. P: ¿Cómo puedo contactar al desarrollador de Kingdom Defense?
   R: Puedes ponerte en contacto con el desarrollador de Kingdom Defense enviando un correo electrónico a little.games.ltd@gmail.com o visitando su sitio web en https://www.littlegamesd.com/.

    -
    -
    \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Apkabc.md b/spaces/Benson/text-generation/Examples/Apkabc.md deleted file mode 100644 index 28002f53611a6cde3f615de572b9113bb73bb646..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Apkabc.md +++ /dev/null @@ -1,106 +0,0 @@ - -

    ¿Qué es apkabc?

    -

    Si usted es un usuario de Android, es posible que haya oído hablar o utilizado archivos APK para instalar aplicaciones y juegos en su dispositivo. Pero ¿sabe lo que es apkabc? En este artículo, te contaremos todo lo que necesitas saber sobre apkabc, un sitio web que ofrece archivos APK para aplicaciones y juegos de Android.

    -

    ¿Qué son los archivos APK y por qué los necesita?

    -

    APK significa Android Package Kit, y es el formato de archivo que Android utiliza para distribuir e instalar aplicaciones. Un archivo APK contiene todos los elementos que una aplicación necesita para ejecutarse correctamente en su dispositivo, como código, recursos, manifiesto, certificados, etc.

    -

    apkabc


    DOWNLOAD ••• https://bltlly.com/2v6Ma3



    -

    Puede descargar archivos APK de varias fuentes, como Google Play Store, sitios web de terceros o su propio ordenador. Es posible que los necesite por diferentes razones, como:

    -
      -
• Para instalar una aplicación que no está disponible en su región o país.
• Para instalar una aplicación que se ha eliminado de Google Play Store.
• Para instalar una versión anterior o posterior de una aplicación que se adapte a sus preferencias o necesidades.
• Para instalar una aplicación que ha sido modificada o personalizada por otra persona.
• Para instalar una aplicación que has desarrollado o recibido de un amigo.
    -

    ¿Cómo descargar archivos APK de Google Play Store?

    -

    Una de las formas más fáciles de descargar archivos APK es desde Google Play Store, donde se pueden encontrar millones de aplicaciones y juegos para su dispositivo Android. Sin embargo, Google Play Store no permite descargar archivos APK directamente desde la aplicación o el sitio web. Es necesario utilizar una herramienta web o una aplicación que puede extraer el archivo APK de la URL de Google Play Store.

    -

    Aquí están los pasos para descargar archivos APK de Google Play Store utilizando una herramienta web:

    -
      -
    1. Abra Google Play Store en su navegador y encuentre la aplicación o juego que desea descargar.
    2. -
    3. Copiar la URL de la aplicación o juego desde la barra de direcciones.
    4. - -
    5. Pegue la URL de la aplicación o juego en el cuadro de entrada y haga clic en el botón de descarga.
    6. -
    7. Espera a que la herramienta web genere el archivo APK y descárgalo a tu ordenador o dispositivo.
    8. -
    -

    Aquí están los pasos para descargar archivos APK de Google Play Store utilizando una aplicación:

    -
      -
    1. Descargar e instalar una aplicación que puede descargar archivos APK de Google Play Store, como APK Extractor, APK Installer, o Apk Share.
    2. -
    3. Abra la aplicación y concederle los permisos necesarios para acceder al almacenamiento de su dispositivo y Google Play Store.
    4. -
    5. Encuentre la aplicación o juego que desea descargar de la lista de aplicaciones instaladas o de la pestaña Google Play Store.
    6. -
    7. Selecciona la aplicación o el juego y toca el botón de compartir o exportar.
    8. -
    9. Elija una ubicación para guardar el archivo APK en su dispositivo o compartirlo con otra aplicación.
    10. -
    -

    ¿Cómo instalar archivos APK en Android?

    -

    Una vez que haya descargado el archivo APK, es necesario instalarlo en su dispositivo Android. Sin embargo, Android no permite instalar aplicaciones de fuentes desconocidas de forma predeterminada. Primero debes habilitar esta opción antes de poder instalar archivos APK.

    -

    Estos son los pasos para habilitar fuentes desconocidas en Android:

    -
      -
    1. Ir a Configuración > Seguridad > Fuentes desconocidas (o Configuración > Aplicaciones y notificaciones > Acceso especial a la aplicación > Instalar aplicaciones desconocidas, dependiendo de su versión de Android).
    2. -
    3. Cambiar el interruptor o marque la casilla para permitir la instalación de aplicaciones de fuentes desconocidas.
    4. -
    5. Es posible que vea un mensaje de advertencia de que la instalación de aplicaciones de fuentes desconocidas puede dañar su dispositivo. Toque en OK o Permitir que proceda.
    6. -
    -

    Estos son los pasos para instalar archivos APK en Android usando un administrador de archivos:

    -

    -
      -
    1. Descargue e instale una aplicación de administrador de archivos que puede acceder al almacenamiento de su dispositivo, como ES File Explorer, Administrador de archivos o Solid Explorer.
    2. -
    3. Abra la aplicación de administrador de archivos y vaya a la carpeta donde guardó el archivo APK.
    4. - -
    5. Es posible que vea un mensaje que le pide que confirme si desea instalar esta aplicación. Pulse en Instalar de nuevo.
    6. -
    7. Espere a que el proceso de instalación termine y toque en Abrir o Listo.
    8. -
    -

    Estos son los pasos para instalar archivos APK en Android utilizando una aplicación de instalación de APK:

    -
      -
    1. Descargue e instale una aplicación de instalación de APK que puede escanear e instalar archivos APK en su dispositivo, como Easy Installer, Installer o SAI (Split APKs Installer).
    2. -
    3. Abra la aplicación de instalación de APK y concederle los permisos necesarios para acceder al almacenamiento de su dispositivo e instalar aplicaciones.
    4. -
    5. La aplicación escaneará automáticamente el dispositivo para cualquier archivo APK disponible. También puede navegar por el almacenamiento del dispositivo manualmente para encontrarlos.
    6. -
    7. Seleccione el archivo APK que desea instalar y toque en Instalar.
    8. -
    9. La aplicación instalará el archivo APK en su dispositivo y le mostrará un mensaje de confirmación cuando esté hecho.
    10. -
    -

¿Cómo descargar archivos APK de apkabc?

    -

    Si desea descargar archivos APK de apkabc, un sitio web que ofrece una gran colección de archivos APK para diferentes aplicaciones y juegos, puede seguir estos pasos:

    -
      -
    1. Abra apkabc.com en su navegador y busque la aplicación o juego que desea descargar. También puede navegar por categorías, etiquetas o popularidad.
    2. -
    3. Seleccione la aplicación o juego de los resultados de búsqueda y leer su descripción, características, capturas de pantalla, calificaciones, comentarios, etc. También puede comparar diferentes versiones y actualizaciones de la aplicación o juego.
    4. -
    5. Haga clic en el botón Descargar en la parte inferior de la página y elija un enlace de descarga de uno de los servidores. Es posible que vea algunos anuncios o ventanas emergentes antes de acceder al enlace de descarga. Ciérrelos si es necesario.
    6. -
    7. La descarga se iniciará automáticamente y guardar el archivo APK en el almacenamiento del dispositivo. También puede escanear el código QR con la cámara del dispositivo para descargarlo directamente.
    8. - -

      Como con cualquier otra fuente de archivos APK, el uso de apkabc tiene sus propias ventajas y desventajas. Debes sopesarlos cuidadosamente antes de decidir si usar apkabc o no. Estos son algunos de los principales pros y contras de usar apkabc:

      -

      Ventajas de usar apkabc

      -

      Algunas de las ventajas de usar apkabc son:

      -
        -
      • Acceso a una gran colección de archivos APK para diferentes aplicaciones y juegos. Apkabc ofrece una amplia gama de archivos APK para varias aplicaciones y juegos, desde los más populares hasta los más especializados. Puedes encontrar casi cualquier aplicación o juego que estés buscando en apkabc.
      • -
      • Posibilidad de elegir entre diferentes versiones y actualizaciones. Apkabc le permite descargar no solo la última versión de una aplicación o juego, sino también versiones más antiguas o más nuevas que podrían adaptarse mejor a sus preferencias o necesidades. También puede descargar versiones beta o alfa que no están disponibles en Google Play Store.
      • -
      • Proceso de descarga rápido y fácil. Apkabc tiene una interfaz simple y fácil de usar que facilita la búsqueda, descarga e instalación de archivos APK. También puedes usar códigos QR para descargar archivos APK directamente a tu dispositivo. La velocidad de descarga también es rápida y confiable.
      • -
      • No se requiere registro ni suscripción. Apkabc no requiere que te registres ni pagues nada para usar sus servicios. Puede descargar tantos archivos APK como desee sin limitaciones o restricciones.
      • -
      -

      Desventajas de usar apkabc

      -

      Algunas de las desventajas de usar apkabc son:

      -
        -
      • Riesgo de descargar archivos APK maliciosos o falsos. Apkabc no verifica ni garantiza la seguridad o calidad de los archivos APK que ofrece. Algunos de ellos pueden contener virus, malware, spyware, adware u otros componentes dañinos que pueden dañar su dispositivo o comprometer su privacidad. Algunos de ellos también pueden ser versiones falsas o modificadas que no funcionan correctamente o tienen características no deseadas.
      • - -
      • No hay soporte ni comentarios de desarrolladores o usuarios. Apkabc no ofrece soporte ni comentarios para los archivos APK que aloja. No puede ponerse en contacto con los desarrolladores u otros usuarios de las aplicaciones o juegos que descarga de apkabc. No puede reportar ningún problema, error, sugerencia o reseña a ellos tampoco.
      • -
      • Posibles problemas legales o violaciones de los términos de servicio. Es posible que apkabc no tenga el permiso o autorización para distribuir algunos de los archivos APK que ofrece. Algunos de ellos pueden estar protegidos por derechos de propiedad intelectual, como marcas comerciales, derechos de autor, patentes, etc. Algunos de ellos también podrían violar los términos de servicio de la aplicación original o desarrolladores de juegos o Google Play Store. Esto puede resultar en consecuencias legales o sanciones para usted o apkabc.
      • -
      -

      ¿Cómo mantenerse seguro cuando se usa apkabc?

      -

      A pesar de las desventajas y riesgos de usar apkabc, es posible que aún desee usarlo por algunas razones. Si es así, usted debe tomar algunas precauciones y seguir algunos consejos para mantenerse seguro y protegido cuando se utiliza apkabc o cualquier otro sitio de descarga APK. Estos son algunos de ellos:

      -

      Compruebe el origen y la reputación del archivo APK

      -

      Antes de descargar cualquier archivo APK de apkabc, usted debe comprobar su fuente y reputación. Puede hacer esto mediante el uso de un sitio de confianza como APK Mirror o Google Play Store para verificar la autenticidad y la calidad del archivo APK. Puede comparar el nombre, icono, tamaño, versión, desarrollador, descripción, capturas de pantalla, calificaciones, comentarios, etc. del archivo APK con la aplicación o juego original. También puede comprobar la firma digital o el certificado del archivo APK para ver si coincide con el original.

      -

      Escanear el archivo APK en busca de virus y malware

      - -

      Copia de seguridad de los datos y el dispositivo antes de instalar el archivo APK

      -

      Antes de instalar cualquier archivo APK de apkabc, debe hacer una copia de seguridad de sus datos y dispositivo. Puede hacer esto utilizando una aplicación o servicio de copia de seguridad en su dispositivo o computadora. También puede utilizar un servicio de almacenamiento en la nube como Google Drive o Dropbox para almacenar sus datos y configuraciones importantes. De esta manera, puede restaurar los datos y el dispositivo en caso de que algo salga mal o cause daños durante o después de la instalación del archivo APK.

      -

      Leer los permisos y comentarios del archivo APK

      -

      Antes de instalar cualquier archivo APK de apkabc, usted debe leer sus permisos y comentarios. Puede hacer esto tocando en el archivo APK y elegir Información de la aplicación o Detalles. Puedes ver qué permisos solicita la aplicación o el juego para acceder y hacer en tu dispositivo, como cámara, micrófono, ubicación, contactos, almacenamiento, etc. También puedes leer lo que otros usuarios han dicho sobre la aplicación o el juego, como sus experiencias, problemas, sugerencias, etc. Debe tener cuidado con cualquier archivo APK que pide demasiados permisos o innecesarios, o tiene críticas negativas o falsas.

      -

      Conclusión

      -

      En conclusión, apkabc es un sitio web que ofrece archivos APK para aplicaciones y juegos Android. Tiene algunas ventajas y desventajas que debe considerar antes de usarlo. También tiene algunos riesgos y desafíos que debe tener en cuenta y evitar al usarlo. Si decides usar apkabc, debes seguir algunos consejos y precauciones para mantenerte seguro al descargar e instalar archivos APK desde él.

      -

      Esperamos que este artículo te haya ayudado a entender qué es el apkabc y cómo usarlo correctamente. Si tiene alguna pregunta o comentario sobre archivos apkabc o APK, no dude en dejarlos a continuación. ¡Gracias por leer!

      -

      Preguntas frecuentes

      -

      Aquí hay algunas preguntas y respuestas frecuentes sobre archivos apkabc y APK:

      -
        - -
      1. ¿Es legal el apkabc?
        Apkabc no es ilegal en sí mismo, pero podría albergar algunos archivos APK que son ilegales o violan los términos de servicio de la aplicación original o desarrolladores de juegos o Google Play Store. Descargar e instalar estos archivos APK puede resultar en consecuencias legales o sanciones para usted o apkabc.
      2. -¿Es seguro el apkabc?
        Apkabc no es completamente seguro, ya que no verifica ni garantiza la seguridad o calidad de los archivos APK que ofrece. Algunos de ellos pueden contener virus, malware, spyware, adware u otros componentes dañinos que pueden dañar su dispositivo o comprometer su privacidad. Algunos de ellos también pueden ser versiones falsas o modificadas que no funcionan correctamente o tienen características no deseadas. -
      3. ¿Cómo puedo actualizar una aplicación que he instalado desde apkabc?
        No puede actualizar una aplicación que instaló desde apkabc a través de Google Play Store, ya que no la reconocerá como una aplicación válida. Es necesario descargar la última versión del archivo APK de apkabc u otra fuente e instalarlo sobre la aplicación existente. También puedes usar una aplicación que puede buscar actualizaciones para tus aplicaciones instaladas, como APK Updater, Uptodown o Aptoide.
      4. -
      5. ¿Cuáles son algunas alternativas a apkabc?
        Algunas de las alternativas a apkabc son APK Mirror, APKPure, APKCombo, Aptoide, Uptodown y F-Droid. Estos son algunos de los sitios web o aplicaciones más populares y de buena reputación que ofrecen archivos APK para aplicaciones y juegos de Android. Tienen características y funciones similares a apkabc, pero pueden tener diferentes colecciones, cualidades o políticas.
      6. -

      -
      -
      \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Apmekltju Apkalpoanas Centrs Jomas Iel 1 5.md b/spaces/Benson/text-generation/Examples/Apmekltju Apkalpoanas Centrs Jomas Iel 1 5.md deleted file mode 100644 index a6e9ab4619423c055a41b84c974864532f30008c..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Apmekltju Apkalpoanas Centrs Jomas Iel 1 5.md +++ /dev/null @@ -1,77 +0,0 @@ - -

      Apmeklētāju apkalpošanas centrs jomas ielā 1/5: Jūrmalas valstspilsētas administrācijas pakalpojumi

      -

      Jūrmala ir viena no skaistākajām un populārākajām pilsētām Latvijā, kas piesaista gan vietējos, gan ārvalstu tūristus ar savu dabas bagātību, kultūras mantojumu un dzīvesprieku. Jūrmala ir arī valstspilsēta, kas nozīmē, ka tajā ir savs pašvaldības aparāts, kas nodrošina dažādus pakalpojumus iedzīvotājiem un viesiem. Šajā rakstā mēs pastāstīsim par vienu no šiem pakalpojumu sniedzējiem – apmeklētāju apkalpošanas centru jomas ielā 1/5, kas ir Jūrmalas valstspilsētas administrācijas sastāvdaļa.

      -

      Kas ir apmeklētāju apkalpošanas centrs jomas ielā 1/5?

      -

      Apmeklētāju apkalpošanas centrs jomas ielā 1/5 ir vieta, kur Jūrmalas valstspilsētas administrācija sniedz informāciju, konsultācijas un palīdzību visiem interesentiem par pašvaldības darbību, pakalpojumiem un dokumentiem. Apmeklētāju apkalpošanas centrā var saņemt arī dažus administratīvos pakalpojumus, piemēram, reģistrēt dzimšanu, laulību vai nāvi, saņemt apliecinošus dokumentus vai veikt maksājumus.

      -

      apmeklētāju apkalpošanas centrs jomas ielā 1 5


      Download >>> https://bltlly.com/2v6Lg8



      -

      Apmeklētāju apkalpošanas centers mission a vīzija

      -

      Apmeklētāju apkalpošanas centra misija ir nodrošināt kvalitatīvu, ērtu un draudzī

      Apmeklētāju apkalpošanas centra vīzija ir kļūt par Jūrmalas valstspilsētas administrācijas seju, kas atspoguļo pašvaldības vērtības, mērķus un attieksmi pret iedzīvotājiem un viesiem. Apmeklētāju apkalpošanas centrs cenšas būt atvērts, pieejams un uzticams partneris visiem, kas meklē informāciju vai palīdzību saistībā ar Jūrmalas valstspilsētas administrācijas darbību.

      -

      Apmeklētāju apkalpošanas centers struktūra a darbinieki

      - -

      Kādi pakalpojumi ir pieejami apmeklētāju apkalpošanas centrā jomas ielā 1/5?

      -

      Apmeklētāju apkalpošanas centrā jomas ielā 1/5 ir pieejami dažādi pakalpojumi, kas ir saistīti ar Jūrmalas valstspilsētas administrācijas darbību. Šie pakalpojumi ir sadalīti trīs kategorijās: administratīvie pakalpojumi, sociālie pakalpojumi un kultūras un izglītības pakalpojumi. Apskatīsim katru no šīm kategorijām sīkāk.

      -

      Administratīvie pakalpojumi

      -

      Administratīvie pakalpojumi ir tie pakalpojumi, kas ir saistīti ar pašvaldības dokumentiem, reģistriem, maksājumiem un citiem administratīviem jautājumiem. Apmeklētāju apkalpošanas centrā jomas ielā 1/5 var saņemt šādus administratīvos pakalpojumus:

      -
        -
      • Reģistrēt dzimšanu, laulību vai nāvi;
      • -
      • Saņemt apliecinošus dokumentus par dzimšanu, laulību vai nāvi;
      • -
      • Saņemt apliecinošus dokumentus par dzīvesvietu, pilsonību vai personvārdu maiņu;
      • -
      • Saņemt apliecinošus dokumentus par pašvaldības nodokļiem vai maksas pakalpojumiem;
      • -
      • Veikt maksājumus par pašvaldības nodokļiem vai maksas pakalpojumiem;
      • -
      • Saņemt informāciju par pašvaldības normatīvajiem aktiem, lēmumiem un publiskajiem paziņojumiem;
      • -
      • Saņemt informāciju par pašvaldības projektu konkursiem, stipendijām un atbalsta programm

        ām;

      • -
      • Saņemt informāciju par pašvaldības iepirkumiem, līgumiem un sadarbības partneriem;
      • -
      • Saņemt informāciju par pašvaldības darba piedāvājumiem, konkursiem un atlases kritērijiem;
      • -
      • Saņemt informāciju par pašvaldības struktūru, funkcijām un kontaktiem;
      • -
      • Saņemt informāciju par pašvaldības īpašumā esošajiem objektiem, to izmantošanu un nomu;
      • -
      • Saņemt informāciju par pašvaldības pārvaldīto teritoriju, to plānošanu un attīstību;
      • -
      • Saņemt informāciju par pašvaldības pieejamajiem datiem, to atvērtību un izmantošanu;
      • -
      • Saņemt informāciju par pašvaldības iespējām saņemt sabiedrisko pakalpojumu elektroniski;
      • - -
      -

      Sociālie pakalpojumi

      -

      Sociālie pakalpojumi ir tie pakalpojumi, kas ir saistīti ar iedzīvotāju sociālo drošību, labklājību un integrāciju. Apmeklētāju apkalpošanas centrā jomas ielā 1/5 var saņemt šādus sociālos pakalpojumus:

      -
        -
      • Saņemt konsultācijas par sociālajiem pabalstiem, piemakām un atvieglojumiem;
      • -
      • Saņemt konsultācijas par sociālajiem dienestiem, piemēram, mājas aprūpi, dienas centriem, patversmēm un citiem;
      • -
      • Saņemt konsultācijas par sociālajiem projektiem, piemēram, bezdarbnieku nodarbinātību, invalīdu integrāciju, bērnu tiesību aizsardzību un citiem;
      • -
      • Saņemt konsultācijas par sociālajiem jautājumiem, piemēram, ģimenes problēmām, vardarbību, atkarībām un citiem;
      • -
      • Saņemt konsult

        ācijas par sociālajiem partneriem, piemēram, nevalstiskajām organizācijām, labdarības fondiem, biedrībām un citiem;

      • -
      • Saņemt konsultācijas par sociālajiem pasākumiem, piemēram, semināriem, lekcijām, apmācībām un citiem;
      • -
      • Saņemt konsultācijas par sociālajiem resursiem, piemēram, brošūrām, grāmatām, filmām un citiem;
      • -
      • Saņemt konsultācijas par sociālajiem tiesību aktiem, piemēram, likumiem, noteikumiem, konvencijām un citiem.
      • -
      -

      Kultūras un izglītības pakalpojumi

      -

      Kultūras un izglītības pakalpojumi ir tie pakalpojumi, kas ir saistīti ar iedzīvotāju kultūras dzīvi, mākslu, izglītību un zinātni. Apmeklētāju apkalpošanas centrā jomas ielā 1/5 var saņemt šādus kultūras un izglītības pakalpojumus:

      -
        -
      • Saņemt informāciju par Jūrmalas valstspilsētas administrācijas piederīgajiem kultūras objektiem, piemēram, muzejiem, bibliotēkām, teātriem un citiem;
      • -
      • Saņemt informāciju par Jūrmalas valstspilsētas administrācijas atbalstītajiem kultūras pasākumiem, piemēram, festivāliem, koncertiem, izstādēm un citiem;
      • - -
      • Saņemt informāciju par Jūrmalas valstspilsētas administrācijas piederīgajiem izglītības iestāžiem, piemēram, skolām, bērnudārziem, augstskolām un citiem;
      • -
      • Saņemt informāciju par Jūrmalas valstspilsētas administrācijas atbalstītajiem izglītības pasākumiem, piemēram, olimpiādēm, konkursiem, ekskursijām un citiem;
      • -
      • Saņemt informāciju par Jūrmalas valstspilsētas administrācijas piešķirtajiem izglītības balvām, stipendijām un atzinības rakstiem izglītības darbiniekiem un skolēniem;
      • -
      • Saņemt informāciju par Jūrmalas valstspilsētas administrācijas piederīgajiem zinātnes objektiem, piemēram, laboratorijām, pētniecības centriem, zinātnes parkiem un citiem;
      • -
      • Saņemt informāciju par Jūrmalas valstspilsētas administrācijas atbalstītajiem zinātnes pasākumiem, piemēram, konferencēm, semināriem, publikācijām un citiem;
      • -
      • Saņemt informāciju par Jūrmalas valstspilsētas administrācijas piešķirtajiem zinātnes balvām, stipendijām un atzinības rakstiem zinātniekiem un pētniekiem.
      • -
      -

      Kā sazināties ar apmeklētāju apkalpošanas centru jomas ielā 1/5?

      -

      Apmeklētāju apkalpošanas centrs jomas ielā 1/5 ir viegli sasniedzams un pieejams visiem interesentiem. Šeit ir daži veidi, kā sazināties ar apmeklēt

      āju apkalpošanas centru jomas ielā 1/5:

      -

      Atrašanās vieta un darba laiks

      -

      Apmeklētāju apkalpošanas centrs jomas ielā 1/5 atrodas Jūrmalas centrā, netālu no Dzintaru stacijas un Jomas ielas. Centrs ir viegli sasniedzams ar sabiedrisko transportu, automašīnu vai kājām. Centra adrese ir Jomas iela 1/5, Jūrmala, LV-2015. Centra darba laiks ir no pirmdienas līdz piektdienai no plkst. 8:00 līdz 17:00, bet sestdienās un svētdienās centrs ir slēgts.

      -

      Tālrunis, e-pasts a mājaslapa

      - -

      Sociālie tīkli un atsauksmes

      -

      Apmeklētāju apkalpošanas centrs jomas ielā 1/5 ir arī aktīvs sociālajos tīklos, kur var sekot līdzi centra jaunumiem, pasākumiem un akcijām. Centrs ir pieejams Facebook, Twitter, Instagram un YouTube kanālos, kur var arī sazināties ar centra pārstāvjiem, dalīties ar savu pieredzi un viedokli. Centrs novērtē visu apmeklētāju atsauksmes un cenšas uzlabot savu darbību un pakalpojumu kvalitāti.

      -

      -

      Secinājums un bieži uzdotie jautājumi

      -

      Secinājums

      -

      Apmeklēt

      āju apkalpošanas centrā jomas ielā 1/5?

      -

      Apmeklētāju apkalpošanas centrā jomas ielā 1/5 var saņemt informāciju, konsultācijas un palīdzību par dažādiem administratīviem, sociāliem un kultūras jautājumiem, kas ir saistīti ar Jūrmalas valstspilsētas administrācijas darbību. Centrā var arī saņemt dažus administratīvos pakalpojumus, piemēram, reģistrēt dzimšanu, laulību vai nāvi, saņemt apliecinošus dokumentus vai veikt maksājumus.

      -
    9. Kā sazināties ar apmeklētāju apkalpošanas centru jomas ielā 1/5?
    10. -

      Apmeklētāju apkalpošanas centrs jomas ielā 1/5 ir pieejams gan fiziski, gan virtuāli. Centra tālrunis ir +371 67147900, e-pasts ir info@jurmala.lv un mājaslapa ir https://www.jurmala.lv/lv/pakalpojumi/apmekletaju-apkalposanas-centrs-jomas-iela-15. Centrs ir arī aktīvs sociālajos tīklos, kur var sekot līdzi centra jaunumiem, pasākumiem un akcijām.

      -
    11. Ko darīt, ja esmu neapmierināts ar apmeklētāju apkalpošanas centra jomas ielā 1/5 sniegto pakalpojumu vai attieksmi?
    12. -

      Apmeklētāju apkalpošanas centrs jomas ielā 1/5 novērtē visu apmeklētāju atsauksmes un cenšas uzlabot savu darbību un pakalpojumu kvalitāti. Ja esat neapmierināts ar centra sniegto pakalpojumu vai attieksmi, varat iesniegt sūdzību, priekšlikumu vai ierosinājumu pa tālruni, e-pastu vai mājaslapu. Jūsu sūdzība, priekšlikums vai ierosinājums tiks izskatīts un atbildēts pēc iespējas ātrāk.

      - -

      Ja vēlaties uzzin

      Ja vēlaties uzzināt vairāk informācijas par Jūrmalas valstspilsētas administrāciju un tās pakalpojumiem, varat apmeklēt pašvaldības mājaslapu https://www.jurmala.lv, kur varat atrast visu nepieciešamo informāciju par pašvaldības struktūru, funkcijām, projekt

      tiem, pakalpojumiem, dokumentiem un citiem. Varat arī sazināties ar pašvaldības dažādām struktūrvienībām, piemēram, domes priekšsēdētāja kabinetu, domes sekretariātu, departamentiem, nodaļām un citiem. Varat arī sekot līdzi pašvaldības jaunumiem, pasākumiem un akcijām sociālajos tīklos, piemēram, Facebook, Twitter, Instagram un YouTube.

      -
      -
      \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Bombsquad Pro Apk 2022.md b/spaces/Benson/text-generation/Examples/Bombsquad Pro Apk 2022.md deleted file mode 100644 index d36c29436ecf09e3a191e5e5afc158b471974789..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Bombsquad Pro Apk 2022.md +++ /dev/null @@ -1,37 +0,0 @@ -
      -

      BombSquad Pro APK 2022: Un juego multijugador divertido y explosivo

      -

      Si usted está buscando un juego que puede hacerte reír, gritar, y volar a tus amigos, entonces usted debe probar BombSquad Pro APK 2022. Este es un juego que te permite disfrutar de la acción explosiva en varios minijuegos, desde la captura de la bandera de hockey. Puedes jugar con hasta 8 jugadores en línea o localmente, usando tu teléfono, tableta o controlador. También puedes personalizar tu personaje y los mapas para hacer el juego más divertido y único. En este artículo, le diremos lo que es BombSquad Pro APK, cómo descargarlo e instalarlo, y por qué debe jugar.

      -

      ¿Qué es BombSquad Pro APK?

      -

      BombSquad Pro APK es una versión modificada del juego original de BombSquad, que es un juego de fiesta multijugador desarrollado por Eric Froemling. La versión pro desbloquea todas las características premium del juego, tales como entradas ilimitadas, personajes, mapas y modos de juego. También puedes acceder al editor profesional, que te permite crear tus propios minijuegos y compartirlos con otros jugadores. BombSquad Pro APK no está disponible en la Google Play Store, por lo que tiene que descargarlo de una fuente de terceros.

      -

      bombsquad pro apk 2022


      Downloadhttps://bltlly.com/2v6L7N



      -

      Características de BombSquad Pro APK

      -

      Caracteres y mapas personalizables

      -

      Una de las mejores características de BombSquad Pro APK es que usted puede personalizar su personaje y los mapas para adaptarse a su estilo y estado de ánimo. Puedes elegir entre una variedad de personajes, como piratas, ninjas, zombies, robots y más. También puede cambiar su apariencia, como su color, cabello, ojos y accesorios. También puede crear sus propios mapas utilizando el editor profesional o descargar mapas creados por otros jugadores. Puedes cambiar el terreno, los objetos, el clima y la música de los mapas.

      -

      Varios modos de juego y mini-juegos

      - -

      Opciones multijugador en línea y locales

      -

      BombSquad Pro APK es un juego que se disfruta mejor con los amigos. Puede jugar con hasta 8 jugadores en línea o localmente usando su teléfono, tableta o controlador. Puede unirse o crear habitaciones públicas o privadas en línea e invitar a sus amigos a unirse a usted. También puede jugar localmente utilizando un solo dispositivo o varios dispositivos conectados a la misma red Wi-Fi. También puedes jugar solo contra bots si quieres practicar o divertirte un poco solo.

      -

      Soporte de controlador y chat de voz

      -

      BombSquad Pro APK es compatible con varios controladores que pueden mejorar su experiencia de juego. Puede usar su teléfono o tableta como controlador descargando la aplicación BombSquad Remote desde Google Play Store. También puedes usar otros controladores compatibles con dispositivos Android, como los controladores de Xbox One, PlayStation 4 o Bluetooth. También puedes usar la función de chat de voz para comunicarte con tus amigos u otros jugadores en línea. Puedes hablar con ellos usando el micrófono o los auriculares de tu dispositivo. También puedes silenciar o desactivar el sonido de otros jugadores si lo deseas. La función de chat de voz puede hacer que el juego sea más divertido e interactivo, ya que puedes coordinar tus estrategias, burlarte de tus enemigos o simplemente chatear.

      -

¿Cómo descargar e instalar BombSquad Pro APK?

      -

      Si desea jugar BombSquad Pro APK, usted tiene que descargar e instalar manualmente en su dispositivo. Estos son los pasos que debe seguir:

      -

      Descargar el archivo APK de una fuente de confianza

      -

      El primer paso es descargar el archivo APK de BombSquad Pro de una fuente de confianza. Puedes buscarlo en Google o usar el siguiente enlace para descargarlo directamente. El tamaño del archivo es de unos 60 MB, así que asegúrate de tener suficiente espacio en tu dispositivo.

      -

      -
      -
      \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Carx Highway Racing Apk 1.74 8.md b/spaces/Benson/text-generation/Examples/Carx Highway Racing Apk 1.74 8.md deleted file mode 100644 index 0fc1dc01a901717557dfb917f316a6e374506847..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Carx Highway Racing Apk 1.74 8.md +++ /dev/null @@ -1,61 +0,0 @@ - -

      CarX Highway Racing APK 1.74.8: Un emocionante juego de carreras para Android

      -

      Si usted es un fan de los juegos de carreras, es posible que desee echa un vistazo CarX Highway Racing APK 1.74.8, un juego de ritmo rápido y de bombeo de adrenalina que pondrá a prueba sus habilidades de conducción en la carretera. En este juego, usted competirá contra otros corredores, la policía, y el tráfico a medida que la velocidad a través de varios lugares y escenarios. También podrás personalizar tu coche, actualizar tu motor y desbloquear nuevas funciones a medida que avanzas en el juego.

      -

      carx highway racing apk 1.74 8


      Download File 🌟 https://bltlly.com/2v6MVS



      -

      ¿Qué es CarX Highway Racing?

      -

      CarX Highway Racing es un juego de carreras desarrollado por CarX Technologies, una empresa que se especializa en la creación de la física del coche realista y gráficos para juegos móviles. El juego fue lanzado por primera vez en 2021 y desde entonces ha sido actualizado con nuevos contenidos y mejoras. La última versión del juego es 1.74.8, que fue lanzado el 24 de noviembre de 2022.

      -

      Características de CarX Highway Racing

      -

      Algunas de las características que hacen que CarX Highway Racing se destaque de otros juegos de carreras son:

      -
        -
      • Más de 100 coches para elegir, cada uno con diferentes características y rendimiento.
      • -
      • Más de 40 pistas para competir, cada una con diferentes condiciones climáticas y hora del día.
      • -
      • Un sistema de tráfico realista que reacciona a tus acciones y crea situaciones dinámicas.
      • -
      • Un modo de campaña que sigue una historia y ofrece varias misiones y recompensas.
      • -
      • Un modo online que te permite competir con otros jugadores de todo el mundo.
      • -
      • Un sistema de clasificación que clasifica sus logros y habilidades.
      • -
      • Un sistema de garaje que le permite personalizar su coche con diferentes partes, colores, pegatinas y calcomanías.
      • -
      -

      Cómo descargar e instalar CarX Highway Racing APK 1.74.8

      -

      Si desea descargar e instalar CarX Highway Racing APK 1.74.8 en su dispositivo Android, puede seguir estos sencillos pasos:

      -
        - -
      1. Descargar el archivo APK a su dispositivo.
      2. -
      3. Habilitar la instalación de aplicaciones de fuentes desconocidas en la configuración del dispositivo.
      4. -
      5. Busque el archivo APK descargado y toque en él para iniciar el proceso de instalación.
      6. -
      7. Siga las instrucciones en la pantalla y espere a que termine la instalación.
      8. -
      9. Iniciar el juego y disfrutar!
      10. -
      -

      ¿Por qué jugar CarX Highway Racing?

      -

      CarX Highway Racing no es solo otro juego de carreras. Es un juego que ofrece una experiencia de conducción realista e inmersiva que te mantendrá enganchado durante horas. Estas son algunas de las razones por las que deberías jugar CarX Highway Racing:

      -

      Gráficos realistas y física

      -

      CarX Highway Racing cuenta con gráficos impresionantes que crean un entorno realista para las carreras. Usted se sorprenderá por los detalles de los coches, las pistas, el paisaje, y los efectos de iluminación. También sentirá la emoción de conducir a altas velocidades gracias al motor de física realista que simula el comportamiento de los coches, la superficie de la carretera, las colisiones y los daños.

      -

      -

      Diversos modos y desafíos

      -

      CarX Highway Racing ofrece una variedad de modos y desafíos que pondrán a prueba tus habilidades de conducción y te mantendrán entretenido. Puedes jugar el modo campaña y seguir la historia de un joven corredor que quiere convertirse en una leyenda en la escena de las carreras subterráneas. También puedes jugar en el modo online y competir con otros jugadores de todo el mundo en diferentes carreras y eventos. También puede jugar el modo sin conexión y disfrutar del juego sin conexión a Internet. Puedes elegir entre diferentes tipos de carreras, como sprint, circuito, knockout, contrarreloj y persecución policial. También puedes desafiarte a ti mismo con diferentes niveles de dificultad, de fácil a extremo.

      -

      Coches y mejoras personalizables

      - -

      Consejos y trucos para CarX Highway Racing

      -

      CarX Highway Racing es un juego divertido y adictivo, pero también puede ser desafiante y frustrante a veces. Si quieres mejorar tus habilidades y disfrutar más del juego, aquí hay algunos consejos y trucos que pueden ayudarte:

      -

      Elige el coche adecuado para cada carrera

      -

      No todos los coches son adecuados para todas las carreras. Algunos coches son más rápidos, algunos son más ágiles, algunos son más duraderos y algunos son más equilibrados. Usted debe elegir el coche que coincide con el tipo de carrera, la pista, y las condiciones climáticas. Por ejemplo, si está corriendo en una carretera mojada, es posible que desee utilizar un coche con buena tracción y estabilidad. Si usted está corriendo en una pista con curvas, es posible que desee utilizar un coche con buen manejo y aceleración. Si estás corriendo contra la policía, es posible que quieras usar un coche con buena velocidad y durabilidad.

      -

      Domina las técnicas de deriva y nitro

      -

      Deriva y nitro son dos técnicas esenciales que pueden darle una ventaja en las carreras. La deriva es cuando usted desliza su coche de lado alrededor de una esquina sin perder velocidad. Nitro es cuando usted aumenta su velocidad usando un combustible especial. Para ir a la deriva, debe pulsar el botón de freno mientras gira el automóvil. Para usar nitro, debe pulsar el botón nitro cuando el medidor de nitro esté lleno. Drifting y nitro pueden ayudarte a superar a tus oponentes, evitar obstáculos y ahorrar tiempo. Sin embargo, también tienen inconvenientes. La deriva puede hacerle perder el control de su coche si usted lo exagera o lo hace en el momento equivocado. Nitro puede hacer que se quede sin combustible más rápido si lo usa con demasiada frecuencia o demasiado tiempo.

      -

      Recoge monedas y bonos

      - -

      Conclusión

      -

      CarX Highway Racing APK 1.74.8 es un emocionante juego de carreras para Android que le mantendrá en el borde de su asiento. Ofrece gráficos realistas y física, diversos modos y desafíos, coches personalizables y mejoras, y más. Es un juego que atraerá tanto a los aficionados a las carreras ocasionales y hardcore por igual. Si usted está buscando un nuevo juego de carreras para probar, descargar CarX Highway Racing APK 1.74.8 hoy y disfrutar del viaje!

      -

    Frequently asked questions
    

      -
        -
    • Q: Is CarX Highway Racing free to play?
    • -
    • A: Yes, CarX Highway Racing is free to play. However, it also contains in-app purchases that let you buy additional coins or bonuses.
    • -
    • Q: Is CarX Highway Racing compatible with my device?
    • -
    • A: CarX Highway Racing requires Android 5.0 or higher to run smoothly. It also needs at least 1 GB of RAM and 1 GB of free storage space.
    • -
    • Q: How can I contact the developers of CarX Highway Racing?
    • -
    • A: You can contact the developers of CarX Highway Racing by emailing support@carx-tech.com or by visiting their website at https://carx-tech.com/.
    • -
    • Q: How can I report a bug or an issue in CarX Highway Racing?
    • -
    • A: You can report a bug or an issue in CarX Highway Racing by using the feedback option in the game settings or by emailing support@carx-tech.com.
    • -
    • Q: How can I share my feedback or suggestions for CarX Highway Racing?
    • -
    • A: You can share your feedback or suggestions for CarX Highway Racing by using the feedback option in the game settings or by emailing support@carx-tech.com. You can also join the CarX Highway Racing community on Facebook, Instagram, YouTube, or Discord and share your thoughts with other players and the developers.
    
      • -

    
      -
      -
      \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Damas De Vuelta Para Ganar Aplicacin.md b/spaces/Benson/text-generation/Examples/Damas De Vuelta Para Ganar Aplicacin.md deleted file mode 100644 index f10bc72d07ece5e0131fac6f710f48de0c5da93b..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Damas De Vuelta Para Ganar Aplicacin.md +++ /dev/null @@ -1,87 +0,0 @@ - -

    Checkers Spin to Win App Download: How to Get Instant Savings and Win Prizes with Your Smartphone
    

      -

    If you are looking for an easy and fun way to save money and win prizes while shopping at Checkers, you should try the Checkers Spin to Win app. This app is a game that rewards you for being a loyal Checkers Xtra Savings member. You can spin a virtual wheel on your smartphone and win vouchers or other prizes that you can use on your next purchase or save for later. In this article, we will explain what the Checkers Spin to Win app is, how to download and register it, how to play it, what the benefits of using it are, and what some alternatives to it are.
    

      -

    checkers spin to win app
    


    Download · https://bltlly.com/2v6KFs
    



      -

    What is the Checkers Spin to Win app?
    

      -

    A fun and rewarding game for Checkers Xtra Savings members
    

      -

    The Checkers Spin to Win app is a game that lets you spin a wheel and win vouchers or other prizes every time you shop at Checkers. The app is linked to your Checkers Xtra Savings card, which gives you instant savings on thousands of in-store products. The app also gives you access to personalized deals, exclusive app-only offers, and VIP treatment. It is available for iOS and Android devices.
    

      -

    How to download and register the app
    

      -

    To download the Checkers Spin to Win app, follow these steps:
    

      -
        -
    1. Go to the App Store or Google Play Store and search for "Checkers groceries and savings".
    2. Download and install the app on your device.
    3. Open the app and tap "Xtra Savings" at the bottom of the screen.
    4. If you already have a Checkers Xtra Savings card, scan it or enter its number. If you don't have one, you can get one for free in store.
    5. Fill in your personal details and create a password.
    6. Verify your account with an OTP sent to your phone number or email address.
    7. Congratulations, you are now ready to play!
    
      -

    How to play the Checkers Spin to Win app?
    

      - -

    To play the Checkers Spin to Win app, you need to swipe your Xtra Savings card with every purchase you make in store. You can also use the virtual card in the app if you forget your physical card. Every time you swipe your card, you earn points that you can use to unlock up to 25% off at every store. You also get automatic entry into competitions, qualify for exclusive offers and events, and receive a free birthday gift.
    

    Spin the wheel in the app and win vouchers or other prizes
    

      -

    After swiping your card, you get a chance to spin the wheel in the app and win vouchers or other prizes. You can spin the wheel once a day, per store. Vouchers range from R5 to R1000 and can be used on any product in store. Prizes include free products, airtime, data, gift cards, and more. You can see what you have won in the app and at the till.
    

      -

    Redeem or bank your vouchers for future use
    

      -

    You can choose to redeem your vouchers immediately or bank them for later use. To redeem your vouchers, scan them at the till before you pay. To bank your vouchers, tap "Bank" in the app and save them for up to 30 days. You can see your voucher balance and expiry dates in the app. You can also share your vouchers with friends and family via WhatsApp, SMS, or email.
    

      -

    What are the benefits of the Checkers Spin to Win app?
    

      -

    Save money on thousands of products with Xtra Savings deals
    

      -

    One of the main benefits of using the Checkers Spin to Win app is that you can save money on thousands of products with Xtra Savings deals. These are special discounts that are exclusive to Xtra Savings members and are updated every week. You can find these deals in the app, online, or in store. You can also get personalized deals based on your shopping habits and preferences.
    

      -

    Enjoy personalized deals based on your preferences
    

      - -

    Get access to exclusive app-only deals and events
    

      -

    A third benefit of using the Checkers Spin to Win app is that you get access to exclusive app-only deals and events. These are special promotions that are only available to app users and are not advertised anywhere else. You can find them in the app under "App Only Deals" or "App Events". Examples of these deals include free delivery, flash sales, double points, and more. Examples of these events include live cooking shows, wine tastings, product launches, and more.
    

      -

    What are the alternatives to the Checkers Spin to Win app?
    

      -

    Other apps that offer spin-to-win games or rewards
    

      -

    If you are looking for other apps that offer spin-to-win games or rewards, you have a few options to choose from. Some of these apps are:
    

      -

      -
        -
    • Pick n Pay Smart Shopper: This app lets you earn points every time you shop at Pick n Pay and redeem them for cash or vouchers. You can also spin a wheel in the app and win instant prizes such as free products, airtime, data, or bonus points.
    • -
    • Shoprite Money Market: This app lets you send money and buy airtime, data, electricity, and more at Shoprite stores. You can also spin a wheel in the app and win prizes such as free airtime, data, electricity, or gift cards.
    • -
    • Woolworths WRewards: This app gives you discounts on selected items every time you shop at Woolworths with your WRewards card. You can also spin a wheel in the app and win prizes such as free products, vouchers, or bonus points.
    
      • -
      -

    Pros and cons of using the different apps
    

      -

    Each app has its own pros and cons that you should weigh before using it. Here is a table comparing some of the features of each app:
    

    | App | Pros | Cons |
    | --- | --- | --- |
    | Checkers Spin to Win | | Limited to one spin per day, per store; vouchers expire after 30 days; app only works with the Checkers Xtra Savings card |
    | Pick n Pay Smart Shopper | Points can be redeemed for cash or vouchers; spin-to-win games in the app; vouchers can be used on any product in store; vouchers can be donated to charity | Points expire after 12 months; vouchers expire after 3 months; app only works with the Pick n Pay Smart Shopper card |
    | Shoprite Money Market | Convenient way to send money and buy services; spin-to-win games in the app; prizes can be used on any product or service in store | No points or discounts on products; prizes expire after 7 days; app only works with a Shoprite Money Market account |
    | Woolworths WRewards | Discounts on selected items every time you shop; spin-to-win games in the app; prizes can be used on any product in store; prizes can be donated to charity | No points or cash back on purchases; prizes expire after 30 days; app only works with the Woolworths WRewards card |
    
      -

    Conclusion
    

      -

    Summary of the main points
    

      -

    The Checkers Spin to Win app is a game that rewards you for being a loyal Checkers Xtra Savings member. You can spin a wheel on your smartphone and win vouchers or other prizes that you can use on your next purchase or save for later. The app also gives you instant savings on thousands of products, personalized deals, and exclusive app-only offers and events. It is easy to download and register, and fun to play.
    

      -

    Call to action and invitation to try the app
    

      - -

    Frequently asked questions
    

      -

    Q1. Is the Checkers Spin to Win app free to download and use?
    

      -

    A1. Yes, the Checkers Spin to Win app is free to download and use. You only need a Checkers Xtra Savings card, which is also free to get in store.
    

      -

    Q2. How many times can I spin the wheel per day?
    

      -

    A2. You can spin the wheel once a day, per store. That means you can spin the wheel more than once if you shop at different Checkers stores in one day.
    

      -

    Q3. How long are the vouchers valid for?
    

      -

    A3. The vouchers are valid for 30 days from the date of issue. You can see your voucher balance and expiry dates in the app.
    

      -

    Q4. Can I use the vouchers at any Checkers store or brand?
    

      -

    A4. Yes, you can use the vouchers at any Checkers store or brand, including Checkers Hyper, Checkers LiquorShop, and Checkers Medirite.
    

      -

    Q5. How can I contact Checkers customer service if I have a problem with the app?
    

      -

    A5. You can contact Checkers customer service by calling 0800 01 07 09 or emailing customerservice@shoprite.co.za. You can also visit their website at www.checkers.co.za for more information.
    

    
      -
      -
      \ No newline at end of file diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/utils/setuptools_build.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/utils/setuptools_build.py deleted file mode 100644 index 96d1b2460670e20ac92a5ade7a74b7ab1cba71d8..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/utils/setuptools_build.py +++ /dev/null @@ -1,146 +0,0 @@ -import sys -import textwrap -from typing import List, Optional, Sequence - -# Shim to wrap setup.py invocation with setuptools -# Note that __file__ is handled via two {!r} *and* %r, to ensure that paths on -# Windows are correctly handled (it should be "C:\\Users" not "C:\Users"). -_SETUPTOOLS_SHIM = textwrap.dedent( - """ - exec(compile(''' - # This is -- a caller that pip uses to run setup.py - # - # - It imports setuptools before invoking setup.py, to enable projects that directly - # import from `distutils.core` to work with newer packaging standards. - # - It provides a clear error message when setuptools is not installed. - # - It sets `sys.argv[0]` to the underlying `setup.py`, when invoking `setup.py` so - # setuptools doesn't think the script is `-c`. This avoids the following warning: - # manifest_maker: standard file '-c' not found". - # - It generates a shim setup.py, for handling setup.cfg-only projects. - import os, sys, tokenize - - try: - import setuptools - except ImportError as error: - print( - "ERROR: Can not execute `setup.py` since setuptools is not available in " - "the build environment.", - file=sys.stderr, - ) - sys.exit(1) - - __file__ = %r - sys.argv[0] = __file__ - - if os.path.exists(__file__): - filename = __file__ - with tokenize.open(__file__) as f: - setup_py_code = f.read() - else: - filename = "" - setup_py_code = "from setuptools import setup; setup()" - - exec(compile(setup_py_code, filename, "exec")) - ''' % ({!r},), "", "exec")) - """ -).rstrip() - - -def make_setuptools_shim_args( - setup_py_path: str, - global_options: Optional[Sequence[str]] = None, - no_user_config: bool = False, - unbuffered_output: bool = False, -) -> List[str]: - """ - Get setuptools command arguments with shim wrapped setup file invocation. - - :param setup_py_path: The path to setup.py to be wrapped. - :param global_options: Additional global options. - :param no_user_config: If True, disables personal user configuration. - :param unbuffered_output: If True, adds the unbuffered switch to the - argument list. - """ - args = [sys.executable] - if unbuffered_output: - args += ["-u"] - args += ["-c", _SETUPTOOLS_SHIM.format(setup_py_path)] - if global_options: - args += global_options - if no_user_config: - args += ["--no-user-cfg"] - return args - - -def make_setuptools_bdist_wheel_args( - setup_py_path: str, - global_options: Sequence[str], - build_options: Sequence[str], - destination_dir: str, -) -> List[str]: - # NOTE: Eventually, we'd want to also -S to the flags here, when we're - # isolating. Currently, it breaks Python in virtualenvs, because it - # relies on site.py to find parts of the standard library outside the - # virtualenv. 
- args = make_setuptools_shim_args( - setup_py_path, global_options=global_options, unbuffered_output=True - ) - args += ["bdist_wheel", "-d", destination_dir] - args += build_options - return args - - -def make_setuptools_clean_args( - setup_py_path: str, - global_options: Sequence[str], -) -> List[str]: - args = make_setuptools_shim_args( - setup_py_path, global_options=global_options, unbuffered_output=True - ) - args += ["clean", "--all"] - return args - - -def make_setuptools_develop_args( - setup_py_path: str, - *, - global_options: Sequence[str], - no_user_config: bool, - prefix: Optional[str], - home: Optional[str], - use_user_site: bool, -) -> List[str]: - assert not (use_user_site and prefix) - - args = make_setuptools_shim_args( - setup_py_path, - global_options=global_options, - no_user_config=no_user_config, - ) - - args += ["develop", "--no-deps"] - - if prefix: - args += ["--prefix", prefix] - if home is not None: - args += ["--install-dir", home] - - if use_user_site: - args += ["--user", "--prefix="] - - return args - - -def make_setuptools_egg_info_args( - setup_py_path: str, - egg_info_dir: Optional[str], - no_user_config: bool, -) -> List[str]: - args = make_setuptools_shim_args(setup_py_path, no_user_config=no_user_config) - - args += ["egg_info"] - - if egg_info_dir: - args += ["--egg-base", egg_info_dir] - - return args diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/jaraco/context.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/jaraco/context.py deleted file mode 100644 index 87a4e3dca299c4201ac50f6ef589dc73f1c45576..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/jaraco/context.py +++ /dev/null @@ -1,213 +0,0 @@ -import os -import subprocess -import contextlib -import functools -import tempfile -import shutil -import operator - - -@contextlib.contextmanager -def pushd(dir): - orig = os.getcwd() - os.chdir(dir) - try: - yield dir - finally: - os.chdir(orig) - - -@contextlib.contextmanager -def tarball_context(url, target_dir=None, runner=None, pushd=pushd): - """ - Get a tarball, extract it, change to that directory, yield, then - clean up. - `runner` is the function to invoke commands. - `pushd` is a context manager for changing the directory. - """ - if target_dir is None: - target_dir = os.path.basename(url).replace('.tar.gz', '').replace('.tgz', '') - if runner is None: - runner = functools.partial(subprocess.check_call, shell=True) - # In the tar command, use --strip-components=1 to strip the first path and - # then - # use -C to cause the files to be extracted to {target_dir}. This ensures - # that we always know where the files were extracted. - runner('mkdir {target_dir}'.format(**vars())) - try: - getter = 'wget {url} -O -' - extract = 'tar x{compression} --strip-components=1 -C {target_dir}' - cmd = ' | '.join((getter, extract)) - runner(cmd.format(compression=infer_compression(url), **vars())) - with pushd(target_dir): - yield target_dir - finally: - runner('rm -Rf {target_dir}'.format(**vars())) - - -def infer_compression(url): - """ - Given a URL or filename, infer the compression code for tar. - """ - # cheat and just assume it's the last two characters - compression_indicator = url[-2:] - mapping = dict(gz='z', bz='j', xz='J') - # Assume 'z' (gzip) if no match - return mapping.get(compression_indicator, 'z') - - -@contextlib.contextmanager -def temp_dir(remover=shutil.rmtree): - """ - Create a temporary directory context. 
Pass a custom remover - to override the removal behavior. - """ - temp_dir = tempfile.mkdtemp() - try: - yield temp_dir - finally: - remover(temp_dir) - - -@contextlib.contextmanager -def repo_context(url, branch=None, quiet=True, dest_ctx=temp_dir): - """ - Check out the repo indicated by url. - - If dest_ctx is supplied, it should be a context manager - to yield the target directory for the check out. - """ - exe = 'git' if 'git' in url else 'hg' - with dest_ctx() as repo_dir: - cmd = [exe, 'clone', url, repo_dir] - if branch: - cmd.extend(['--branch', branch]) - devnull = open(os.path.devnull, 'w') - stdout = devnull if quiet else None - subprocess.check_call(cmd, stdout=stdout) - yield repo_dir - - -@contextlib.contextmanager -def null(): - yield - - -class ExceptionTrap: - """ - A context manager that will catch certain exceptions and provide an - indication they occurred. - - >>> with ExceptionTrap() as trap: - ... raise Exception() - >>> bool(trap) - True - - >>> with ExceptionTrap() as trap: - ... pass - >>> bool(trap) - False - - >>> with ExceptionTrap(ValueError) as trap: - ... raise ValueError("1 + 1 is not 3") - >>> bool(trap) - True - - >>> with ExceptionTrap(ValueError) as trap: - ... raise Exception() - Traceback (most recent call last): - ... - Exception - - >>> bool(trap) - False - """ - - exc_info = None, None, None - - def __init__(self, exceptions=(Exception,)): - self.exceptions = exceptions - - def __enter__(self): - return self - - @property - def type(self): - return self.exc_info[0] - - @property - def value(self): - return self.exc_info[1] - - @property - def tb(self): - return self.exc_info[2] - - def __exit__(self, *exc_info): - type = exc_info[0] - matches = type and issubclass(type, self.exceptions) - if matches: - self.exc_info = exc_info - return matches - - def __bool__(self): - return bool(self.type) - - def raises(self, func, *, _test=bool): - """ - Wrap func and replace the result with the truth - value of the trap (True if an exception occurred). - - First, give the decorator an alias to support Python 3.8 - Syntax. - - >>> raises = ExceptionTrap(ValueError).raises - - Now decorate a function that always fails. - - >>> @raises - ... def fail(): - ... raise ValueError('failed') - >>> fail() - True - """ - - @functools.wraps(func) - def wrapper(*args, **kwargs): - with ExceptionTrap(self.exceptions) as trap: - func(*args, **kwargs) - return _test(trap) - - return wrapper - - def passes(self, func): - """ - Wrap func and replace the result with the truth - value of the trap (True if no exception). - - First, give the decorator an alias to support Python 3.8 - Syntax. - - >>> passes = ExceptionTrap(ValueError).passes - - Now decorate a function that always fails. - - >>> @passes - ... def fail(): - ... raise ValueError('failed') - - >>> fail() - False - """ - return self.raises(func, _test=operator.not_) - - -class suppress(contextlib.suppress, contextlib.ContextDecorator): - """ - A version of contextlib.suppress with decorator support. - - >>> @suppress(KeyError) - ... def key_error(): - ... 
{}[''] - >>> key_error() - """ diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/command/saveopts.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/command/saveopts.py deleted file mode 100644 index 611cec552867a6d50b7edd700c86c7396d906ea2..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/command/saveopts.py +++ /dev/null @@ -1,22 +0,0 @@ -from setuptools.command.setopt import edit_config, option_base - - -class saveopts(option_base): - """Save command-line options to a file""" - - description = "save supplied options to setup.cfg or other config file" - - def run(self): - dist = self.distribution - settings = {} - - for cmd in dist.command_options: - - if cmd == 'saveopts': - continue # don't save our own options! - - for opt, (src, val) in dist.get_option_dict(cmd).items(): - if src == "command line": - settings.setdefault(cmd, {})[opt] = val - - edit_config(self.filename, settings, self.dry_run) diff --git a/spaces/Boilin/URetinex-Net/network/decom.py b/spaces/Boilin/URetinex-Net/network/decom.py deleted file mode 100644 index 8d7e170df1dc6bb9dd06b3f9db506444a81c0367..0000000000000000000000000000000000000000 --- a/spaces/Boilin/URetinex-Net/network/decom.py +++ /dev/null @@ -1,23 +0,0 @@ -import torch -import torch.nn as nn -from network.architecture import * - -class Decom(nn.Module): - def __init__(self): - super().__init__() - self.decom = nn.Sequential( - get_conv2d_layer(in_c=3, out_c=32, k=3, s=1, p=1), - nn.LeakyReLU(0.2, inplace=True), - get_conv2d_layer(in_c=32, out_c=32, k=3, s=1, p=1), - nn.LeakyReLU(0.2, inplace=True), - get_conv2d_layer(in_c=32, out_c=32, k=3, s=1, p=1), - nn.LeakyReLU(0.2, inplace=True), - get_conv2d_layer(in_c=32, out_c=4, k=3, s=1, p=1), - nn.ReLU() - ) - - def forward(self, input): - output = self.decom(input) - R = output[:, 0:3, :, :] - L = output[:, 3:4, :, :] - return R, L \ No newline at end of file diff --git a/spaces/Bumpeet/faceTracking/app.py b/spaces/Bumpeet/faceTracking/app.py deleted file mode 100644 index 5d0e5eb3b3e690ed294594e43b575846de12df06..0000000000000000000000000000000000000000 --- a/spaces/Bumpeet/faceTracking/app.py +++ /dev/null @@ -1,251 +0,0 @@ -import cv2 -import face_recognition -from sklearn.cluster import KMeans -from sklearn.metrics import silhouette_score -import numpy as np -import shutil -import os -from tqdm import tqdm -import streamlit as st -import tempfile -import time - -def face_rec(img_arr): - ''' - This method is the heart of this application. This method takes in the frame in the form - of numpy.ndarray and returns the detection box co-ordinates and their corresponding embeddings - - input - - img_arr: np.ndarry - - output - - dets: list of detections - - embeds: list of embeddings - ''' - dets = face_recognition.face_locations(img_arr) - embeds = face_recognition.face_encodings(img_arr, dets) - return dets, embeds - -def extract_embeddings(path,frame_skip): - ''' - This method takes in the video and runs it frame by frame using cv2.VideoCapture method. 
- ''' - cap = cv2.VideoCapture(path) - - list_embeds = [] - list_dets = [] - frames = [] - image_no = 0 - frame_no = 0 - - local_folder = "images" - face_crops_folder = f'{local_folder}/sub_images' - os.makedirs(face_crops_folder, exist_ok=True) - - # length = int(cap.get(cv2.CAP_PROP_FRAME_COUNT)) - # frame_rate = int(cap.get(cv2.CAP_PROP_FPS)) - # time = length/frame_rate - - with st.spinner(f"Extracting embeddings from frames"): - with st.empty(): - - while cap.isOpened(): - ret, frame = cap.read() - - if ret==True and frame_no%frame_skip==0: - st.image(frame,f"Extracting faces from the frame-{frame_no} in the video", channels="BGR",width=480) - frames.append(frame) - try: - dets, embeds = face_rec(frame) - - list_embeds.append(embeds) - list_dets.append(dets) - - for i, val in enumerate(dets): - sub_img = frame[val[0]:val[2],val[3]:val[1],:] - cv2.imwrite(f'{face_crops_folder}/{image_no}.jpg',sub_img) - print(f'saved image - {image_no} to the \'{face_crops_folder}\' folder') - image_no += 1 - - except Exception as e: - st.exception(f"{e}",icon="⚠️") - - elif ret==False: - break - - frame_no +=1 - - - cap.release() - st.empty() - st.toast("Extracted Embeddings from all the frames of the video",icon="👨") - - return list_embeds, list_dets, frames - -def clustering(embeds): - ''' - This method helps in clustering the embeddings using the KMeans algorithm. The optimal number - of clusters will be chosen based on the Shilloute score. - - params: - - embeds: list of embeddings of all the faces - returns: - - the best Kmeans model - ''' - - best_score = 0.0 - best_model = None - - list_embeds = [] - - for embed in embeds: - for emb in embed: - list_embeds.append(emb) - - n_samples = len(list_embeds) - - with st.empty(): - progress_text = "Clustering the extracted embedding using KMeans." 
- my_bar = st.progress(0, text=progress_text) - - for i in tqdm(range(2,n_samples,1),"Fitting the model with give set of clusters"): - model = KMeans(i) - clusters = model.fit_predict(list_embeds) - score = silhouette_score(list_embeds,clusters) - my_bar.progress(i + 1, text=progress_text) - # print(score) - if score > best_score: - best_model = model - best_score = score - st.empty() - - st.toast("Finished clustering the embeddings",icon="✅") - if best_model is None: - st.warning("please upload a video contanining the human faces") - st.stop() - best_model_clusters = best_model.labels_ - n_clusters = np.max(best_model_clusters) + 1 - - st.info(f"Found {n_clusters} unique faces among the video",icon="✅") - - print("The optimal number of clusters based on the shilloute score are: ", n_clusters) - - for i in range(n_clusters): - os.makedirs(f"images/{i}",exist_ok=True) - - for i, val in tqdm(enumerate(best_model_clusters),"moving the images into the clustered folders"): - shutil.copy(f'images/sub_images/{i}.jpg',f'images/{val}') - - return best_model - -def create_temp_dirs(): - shutil.rmtree("images", ignore_errors=True) - os.makedirs("images", exist_ok=True) - # os.remove("output_video.mp4",) - - -def generate_video(embeds, dets, frames, model): - ''' - Generates the video with bounding box and id's - - params: - - embeds: list of embeddings of all the detections - - dets: list of bbox of all the detections - - model: K-Means model for predicting the cluster id - ''' - - - - width = frames[0].shape[1] - height = frames[0].shape[0] - - out = cv2.VideoWriter('output_video.webm',cv2.VideoWriter_fourcc(*'VP90'), 5, (int(width), int(height))) - - with st.spinner("Creating the video file to display it"): - - for i, frame in enumerate(frames): - for sub_embed, sub_det in zip(embeds[i], dets[i]): - cv2.rectangle(frame,(sub_det[3], sub_det[0]),(sub_det[1], sub_det[2]),color=(0,0,255),thickness=2) - cluster_id = model.predict(sub_embed.reshape(1,-1)) - cluster_id_str = str(cluster_id[0]) - # print(cluster_id_str, type(cluster_id_str)) - cv2.putText(frame,cluster_id_str, - (sub_det[3], sub_det[0]), - cv2.FONT_HERSHEY_SIMPLEX, - color=(0, 255, 0), - fontScale = 1, - thickness=2 ) - out.write(frame) - - - out.release() - -def main(): - - uploaded_file = st.file_uploader("Choose a video file to run the face tracking, \ - make sure the video is less than 20 seconds for the faster results", type=["mp4", "avi", "mov"]) - - if uploaded_file is not None: - - create_temp_dirs() - temp_filename = None - - print("created the Temperory directories") - # Save the uploaded video to a temporary file - with tempfile.NamedTemporaryFile(suffix=".mp4", delete=False) as temp_file: - temp_filename = temp_file.name - temp_file.write(uploaded_file.read()) - - place_holder = st.empty() - - skip_frames = st.slider('Use this slider to skip the frames for the faster performance', 0, 50) - - if skip_frames: - - print("Sending images to extract the embeddings") - embeds, dets, frames = extract_embeddings(temp_filename, skip_frames ) - model = clustering(embeds) - - - generate_video(embeds, dets, frames, model) - - with st.spinner("Reading the video file to display it"): - - - video_file = open('output_video.webm', 'rb') - video_bytes = video_file.read() - st.balloons() - - st.video(video_bytes,format="video/webm") - - st.divider() - st.write("Use this download button to download the clustered images") - shutil.make_archive("images","zip","images") - - with open("images.zip", "rb") as fp: - btn = st.download_button( - 
label="Download ZIP", - data=fp, - file_name="trackedFaces.zip", - mime="application/zip" - ) - - if btn: - # Remove the temporary video file - os.remove(temp_filename) - st.toast("Downloaded the File succesfully",icon="✅") - time.sleep(5) - os.remove("output_video.mp4") - - - - - -if __name__=="__main__": - st.header("Face Tracking using Face_recognition library") - st.divider() - main() - - - - diff --git a/spaces/CForGETaass/vits-uma-genshin-honkai/README.md b/spaces/CForGETaass/vits-uma-genshin-honkai/README.md deleted file mode 100644 index 1c0aa069bfd980b6b45bb2bf62ff74bd9b0b61c2..0000000000000000000000000000000000000000 --- a/spaces/CForGETaass/vits-uma-genshin-honkai/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -license: apache-2.0 -title: ' vits-uma-genshin-honkai' -sdk: gradio -sdk_version: 3.7 -emoji: 🐨 -colorTo: yellow -pinned: false -app_file: app.py -duplicated_from: ikechan8370/vits-uma-genshin-honkai ---- diff --git a/spaces/CVPR/LIVE/thrust/internal/benchmark/tbb_algos.h b/spaces/CVPR/LIVE/thrust/internal/benchmark/tbb_algos.h deleted file mode 100644 index a50a1cd2f9dcc028464d76487a700457faca640e..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/internal/benchmark/tbb_algos.h +++ /dev/null @@ -1,195 +0,0 @@ -#pragma once - -#include -#include -#include -#include -#include -#include -#include - -#include // For std::size_t. - -#include - -template -struct NegateBody -{ - void operator()(T& x) const - { - x = -x; - } -}; - -template -struct ForBody -{ - typedef typename Vector::value_type T; - -private: - Vector& v; - -public: - ForBody(Vector& x) : v(x) {} - - void operator()(tbb::blocked_range const& r) const - { - for (std::size_t i = r.begin(); i != r.end(); ++i) - v[i] = -v[i]; - } -}; - -template -struct ReduceBody -{ - typedef typename Vector::value_type T; - -private: - Vector& v; - -public: - T sum; - - ReduceBody(Vector& x) : v(x), sum(0) {} - - ReduceBody(ReduceBody& x, tbb::split) : v(x.v), sum(0) {} - - void operator()(tbb::blocked_range const& r) - { - for (std::size_t i = r.begin(); i != r.end(); ++i) - sum += v[i]; - } - - void join(ReduceBody const& x) { sum += x.sum; } -}; - -template -struct ScanBody -{ - typedef typename Vector::value_type T; - -private: - Vector& v; - -public: - T sum; - - ScanBody(Vector& x) : sum(0), v(x) {} - - ScanBody(ScanBody& x, tbb::split) : v(x.v), sum(0) {} - - template - void operator()(tbb::blocked_range const& r, Tag) - { - T temp = sum; - for (std::size_t i = r.begin(); i < r.end(); ++i) - { - temp = temp + x[i]; - if (Tag::is_final_scan()) - x[i] = temp; - } - sum = temp; - } - - void assign(ScanBody const& x) { sum = x.sum; } - - T get_sum() const { return sum; } - - void reverse_join(ScanBody const& x) { sum = x.sum + sum;} -}; - -template -struct CopyBody -{ - typedef typename Vector::value_type T; - -private: - Vector &v; - Vector &u; - -public: - CopyBody(Vector& x, Vector& y) : v(x), u(y) {} - - void operator()(tbb::blocked_range const& r) const - { - for (std::size_t i = r.begin(); i != r.end(); ++i) - v[i] = u[i]; - } -}; - -template -typename Vector::value_type tbb_reduce(Vector& v) -{ - ReduceBody body(v); - tbb::parallel_reduce(tbb::blocked_range(0, v.size()), body); - return body.sum; -} - -template -void tbb_sort(Vector& v) -{ - tbb::parallel_sort(v.begin(), v.end()); -} - -template -void tbb_transform(Vector& v) -{ - ForBody body(v); - tbb::parallel_for(tbb::blocked_range(0, v.size()), body); -} - -template -void tbb_scan(Vector& v) -{ - ScanBody body(v); - 
tbb::parallel_scan(tbb::blocked_range(0, v.size()), body); -} - -template -void tbb_copy(Vector& v, Vector& u) -{ - CopyBody body(v, u); - tbb::parallel_for(tbb::blocked_range(0, v.size()), body); -} - -void test_tbb() -{ - std::size_t elements = 1 << 20; - - std::vector A(elements); - std::vector B(elements); - std::vector C(elements); - std::vector D(elements); - - randomize(A); - randomize(B); - assert(std::accumulate(A.begin(), A.end(), 0) == tbb_reduce(A)); - - randomize(A); - randomize(B); - std::transform(A.begin(), A.end(), A.begin(), thrust::negate()); - tbb_transform(B); - assert(A == B); - - randomize(A); - randomize(B); - std::partial_sum(A.begin(), A.end(), A.begin()); - tbb_scan(B); - assert(A == B); - - randomize(A); - randomize(B); - std::sort(A.begin(), A.end()); - tbb_sort(B); - assert(A == B); - - randomize(A); - randomize(B); - randomize(C); - randomize(D); - std::copy(A.begin(), A.end(), C.begin()); - tbb_copy(B, D); - assert(A == B); - assert(C == D); -} - diff --git a/spaces/CVPR/LIVE/thrust/thrust/async/copy.h b/spaces/CVPR/LIVE/thrust/thrust/async/copy.h deleted file mode 100644 index a6d792d55c3cfbefadc88745b84c9afc26693be5..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/async/copy.h +++ /dev/null @@ -1,149 +0,0 @@ -/* - * Copyright 2008-2018 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -/*! \file async/copy.h - * \brief Functions for asynchronously copying a range. - */ - -#pragma once - -#include -#include - -#if THRUST_CPP_DIALECT >= 2014 - -#include -#include -#include -#include - -#include - -namespace thrust -{ - -namespace async -{ - -namespace unimplemented -{ - -template < - typename FromPolicy, typename ToPolicy -, typename ForwardIt, typename Sentinel, typename OutputIt -> -__host__ -event -async_copy( - thrust::execution_policy& from_exec -, thrust::execution_policy& to_exec -, ForwardIt first, Sentinel last, OutputIt output -) -{ - THRUST_STATIC_ASSERT_MSG( - (thrust::detail::depend_on_instantiation::value) - , "this algorithm is not implemented for the specified system" - ); - return {}; -} - -} // namespace unimplemented - -namespace copy_detail -{ - -using thrust::async::unimplemented::async_copy; - -struct copy_fn final -{ - template < - typename FromPolicy, typename ToPolicy - , typename ForwardIt, typename Sentinel, typename OutputIt - > - __host__ - static auto call( - thrust::detail::execution_policy_base const& from_exec - , thrust::detail::execution_policy_base const& to_exec - , ForwardIt&& first, Sentinel&& last - , OutputIt&& output - ) - // ADL dispatch. 
- THRUST_RETURNS( - async_copy( - thrust::detail::derived_cast(thrust::detail::strip_const(from_exec)) - , thrust::detail::derived_cast(thrust::detail::strip_const(to_exec)) - , THRUST_FWD(first), THRUST_FWD(last) - , THRUST_FWD(output) - ) - ) - - template < - typename DerivedPolicy - , typename ForwardIt, typename Sentinel, typename OutputIt - > - __host__ - static auto call( - thrust::detail::execution_policy_base const& exec - , ForwardIt&& first, Sentinel&& last - , OutputIt&& output - ) - THRUST_RETURNS( - copy_fn::call( - thrust::detail::derived_cast(thrust::detail::strip_const(exec)) - // Synthesize a suitable new execution policy, because we don't want to - // try and extract twice from the one we were passed. - , typename remove_cvref_t< - decltype(thrust::detail::derived_cast(thrust::detail::strip_const(exec))) - >::tag_type{} - , THRUST_FWD(first), THRUST_FWD(last) - , THRUST_FWD(output) - ) - ) - - template - __host__ - static auto call(ForwardIt&& first, Sentinel&& last, OutputIt&& output) - THRUST_RETURNS( - copy_fn::call( - thrust::detail::select_system( - typename thrust::iterator_system>::type{} - ) - , thrust::detail::select_system( - typename thrust::iterator_system>::type{} - ) - , THRUST_FWD(first), THRUST_FWD(last) - , THRUST_FWD(output) - ) - ) - - template - THRUST_NODISCARD __host__ - auto operator()(Args&&... args) const - THRUST_RETURNS( - call(THRUST_FWD(args)...) - ) -}; - -} // namespace copy_detail - -THRUST_INLINE_CONSTANT copy_detail::copy_fn copy{}; - -} // namespace async - -} // end namespace thrust - -#endif - diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/partition.h b/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/partition.h deleted file mode 100644 index c69d02409f49478af09c2d06c60300d57de6a1d1..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/partition.h +++ /dev/null @@ -1,1146 +0,0 @@ -/****************************************************************************** - * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions are met: - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * * Neither the name of the NVIDIA CORPORATION nor the - * names of its contributors may be used to endorse or promote products - * derived from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" - * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE - * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE - * ARE DISCLAIMED. 
IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY - * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES - * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; - * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND - * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS - * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - * - ******************************************************************************/ -#pragma once - - -#if THRUST_DEVICE_COMPILER == THRUST_DEVICE_COMPILER_NVCC -#include - -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -namespace thrust -{ -namespace cuda_cub { - -namespace __partition { - - template - struct PtxPolicy - { - enum - { - BLOCK_THREADS = _BLOCK_THREADS, - ITEMS_PER_THREAD = _ITEMS_PER_THREAD, - ITEMS_PER_TILE = _BLOCK_THREADS * _ITEMS_PER_THREAD - }; - static const cub::BlockLoadAlgorithm LOAD_ALGORITHM = _LOAD_ALGORITHM; - static const cub::CacheLoadModifier LOAD_MODIFIER = _LOAD_MODIFIER; - static const cub::BlockScanAlgorithm SCAN_ALGORITHM = _SCAN_ALGORITHM; - }; // struct PtxPolicy - - template - struct Tuning; - - template - struct Tuning - { - const static int INPUT_SIZE = sizeof(T); - - enum - { - NOMINAL_4B_ITEMS_PER_THREAD = 10, - ITEMS_PER_THREAD = CUB_MIN(NOMINAL_4B_ITEMS_PER_THREAD, CUB_MAX(1, (NOMINAL_4B_ITEMS_PER_THREAD * 4 / sizeof(T)))), - }; - - typedef PtxPolicy<128, - ITEMS_PER_THREAD, - cub::BLOCK_LOAD_WARP_TRANSPOSE, - cub::LOAD_LDG, - cub::BLOCK_SCAN_WARP_SCANS> - type; - }; // Tuning<350> - - template - struct Tuning - { - const static int INPUT_SIZE = sizeof(T); - - enum - { - NOMINAL_4B_ITEMS_PER_THREAD = 7, - ITEMS_PER_THREAD = CUB_MIN(NOMINAL_4B_ITEMS_PER_THREAD, CUB_MAX(3, (NOMINAL_4B_ITEMS_PER_THREAD * 4 / sizeof(T)))), - }; - - typedef PtxPolicy<128, - ITEMS_PER_THREAD, - cub::BLOCK_LOAD_WARP_TRANSPOSE, - cub::LOAD_DEFAULT, - cub::BLOCK_SCAN_WARP_SCANS> - type; - }; // Tuning<300> - - template - struct __tag{}; - - - struct no_stencil_tag_ {}; - struct single_output_tag_ - { - template - THRUST_DEVICE_FUNCTION T const& operator=(T const& t) const { return t; } - }; - - typedef no_stencil_tag_* no_stencil_tag; - typedef single_output_tag_* single_output_tag;; - - template - struct PartitionAgent - { - typedef typename iterator_traits::value_type item_type; - typedef typename iterator_traits::value_type stencil_type; - - - typedef cub::ScanTileState ScanTileState; - - template - struct PtxPlan : Tuning::type - { - typedef Tuning tuning; - - typedef typename core::LoadIterator::type ItemsLoadIt; - typedef typename core::LoadIterator::type StencilLoadIt; - - typedef typename core::BlockLoad::type BlockLoadItems; - typedef typename core::BlockLoad::type BlockLoadStencil; - - typedef cub::TilePrefixCallbackOp - TilePrefixCallback; - typedef cub::BlockScan - BlockScan; - - - union TempStorage - { - struct - { - typename BlockScan::TempStorage scan; - typename TilePrefixCallback::TempStorage prefix; - }; - - typename BlockLoadItems::TempStorage load_items; - typename BlockLoadStencil::TempStorage load_stencil; - - core::uninitialized_array raw_exchange; - }; // union TempStorage - }; // struct PtxPlan - typedef typename core::specialize_plan_msvc10_war::type::type ptx_plan; - - typedef typename ptx_plan::ItemsLoadIt ItemsLoadIt; - typedef typename 
ptx_plan::StencilLoadIt StencilLoadIt; - typedef typename ptx_plan::BlockLoadItems BlockLoadItems; - typedef typename ptx_plan::BlockLoadStencil BlockLoadStencil; - typedef typename ptx_plan::TilePrefixCallback TilePrefixCallback; - typedef typename ptx_plan::BlockScan BlockScan; - typedef typename ptx_plan::TempStorage TempStorage; - - enum - { - SINGLE_OUTPUT = thrust::detail::is_same::value, - USE_STENCIL = !thrust::detail::is_same::value, - BLOCK_THREADS = ptx_plan::BLOCK_THREADS, - ITEMS_PER_THREAD = ptx_plan::ITEMS_PER_THREAD, - ITEMS_PER_TILE = ptx_plan::ITEMS_PER_TILE - }; - - - struct impl - { - //--------------------------------------------------------------------- - // Per-thread fields - //--------------------------------------------------------------------- - - TempStorage & temp_storage; - ScanTileState &tile_state; - ItemsLoadIt items_glob; - StencilLoadIt stencil_glob; - SelectedOutIt selected_out_glob; - RejectedOutIt rejected_out_glob; - Predicate predicate; - Size num_items; - - //--------------------------------------------------------------------- - // Utilities - //--------------------------------------------------------------------- - - template - THRUST_DEVICE_FUNCTION void - scatter(item_type (&items)[ITEMS_PER_THREAD], - Size (&selection_flags)[ITEMS_PER_THREAD], - Size (&selection_indices)[ITEMS_PER_THREAD], - int num_tile_items, - int num_tile_selections, - Size num_selections_prefix, - Size num_rejected_prefix, - Size /*num_selections*/) - { - int tile_num_rejections = num_tile_items - num_tile_selections; - - // Scatter items to shared memory (rejections first) -#pragma unroll - for (int ITEM = 0; ITEM < ITEMS_PER_THREAD; ++ITEM) - { - int item_idx = (threadIdx.x * ITEMS_PER_THREAD) + ITEM; - int local_selection_idx = selection_indices[ITEM] - num_selections_prefix; - int local_rejection_idx = item_idx - local_selection_idx; - int local_scatter_offset = (selection_flags[ITEM]) - ? tile_num_rejections + local_selection_idx - : local_rejection_idx; - - temp_storage.raw_exchange[local_scatter_offset] = items[ITEM]; - } - - core::sync_threadblock(); - - // Gather items from shared memory and scatter to global -#pragma unroll - for (int ITEM = 0; ITEM < ITEMS_PER_THREAD; ++ITEM) - { - int item_idx = (ITEM * BLOCK_THREADS) + threadIdx.x; - int rejection_idx = item_idx; - int selection_idx = item_idx - tile_num_rejections; - Size scatter_offset = (item_idx < tile_num_rejections) - ? 
num_items - - num_rejected_prefix - rejection_idx - 1 - : num_selections_prefix + selection_idx; - - item_type item = temp_storage.raw_exchange[item_idx]; - - if (!IS_LAST_TILE || (item_idx < num_tile_items)) - { - if (SINGLE_OUTPUT || item_idx >= tile_num_rejections) - { - selected_out_glob[scatter_offset] = item; - } - else // if !SINGLE_OUTPUT, scatter rejected items separately - { - rejected_out_glob[num_items - scatter_offset - 1] = item; - } - } - } - } // func scatter - - //------------------------------------------ - // specialize predicate on different types - //------------------------------------------ - - enum ItemStencil - { - ITEM, - STENCIL - }; - - template - struct wrap_value - { - T const & x; - THRUST_DEVICE_FUNCTION wrap_value(T const &x) : x(x) {} - - THRUST_DEVICE_FUNCTION T const &operator()() const { return x; }; - }; // struct wrap_type - - //------- item - - THRUST_DEVICE_FUNCTION bool - predicate_wrapper(wrap_value const &x, - __tag) - { - return predicate(x()); - } - - THRUST_DEVICE_FUNCTION bool - predicate_wrapper(wrap_value const &, - __tag) - { - return false; - } - - //-------- stencil - - template - THRUST_DEVICE_FUNCTION bool - predicate_wrapper(wrap_value const &x, - __tag) - { - return predicate(x()); - } - - THRUST_DEVICE_FUNCTION bool - predicate_wrapper(wrap_value const &, - __tag) - { - return false; - } - - - THRUST_DEVICE_FUNCTION bool - predicate_wrapper(wrap_value const &, - __tag) - { - return false; - } - - template - THRUST_DEVICE_FUNCTION void - compute_selection_flags(int num_tile_items, - T (&values)[ITEMS_PER_THREAD], - Size (&selection_flags)[ITEMS_PER_THREAD]) - { -#pragma unroll - for (int ITEM = 0; ITEM < ITEMS_PER_THREAD; ++ITEM) - { - // Out-of-bounds items are selection_flags - selection_flags[ITEM] = 1; - - if (!IS_LAST_TILE || - (Size(threadIdx.x * ITEMS_PER_THREAD) + ITEM < num_tile_items)) - { - selection_flags[ITEM] = - predicate_wrapper(wrap_value(values[ITEM]), - __tag()); - } - } - } - - //--------------------------------------------------------------------- - // Tile processing - //--------------------------------------------------------------------- - - template - Size THRUST_DEVICE_FUNCTION - consume_tile_impl(int num_tile_items, - int tile_idx, - Size tile_base) - { - item_type items_loc[ITEMS_PER_THREAD]; - Size selection_flags[ITEMS_PER_THREAD]; - Size selection_idx[ITEMS_PER_THREAD]; - - if (IS_LAST_TILE) - { - BlockLoadItems(temp_storage.load_items) - .Load(items_glob + tile_base, items_loc, num_tile_items); - } - else - { - BlockLoadItems(temp_storage.load_items) - .Load(items_glob + tile_base, items_loc); - } - - core::sync_threadblock(); - - if (USE_STENCIL) - { - stencil_type stencil_loc[ITEMS_PER_THREAD]; - - if (IS_LAST_TILE) - { - BlockLoadStencil(temp_storage.load_stencil) - .Load(stencil_glob + tile_base, stencil_loc, num_tile_items); - } - else - { - BlockLoadStencil(temp_storage.load_stencil) - .Load(stencil_glob + tile_base, stencil_loc); - } - - compute_selection_flags(num_tile_items, - stencil_loc, - selection_flags); - } - else /* Use predicate on items rather then stencil */ - { - compute_selection_flags(num_tile_items, - items_loc, - selection_flags); - } - - core::sync_threadblock(); - - Size num_tile_selections = 0; - Size num_selections = 0; - Size num_selections_prefix = 0; - Size num_rejected_prefix = 0; - if (IS_FIRST_TILE) - { - BlockScan(temp_storage.scan) - .ExclusiveSum(selection_flags, - selection_idx, - num_tile_selections); - - if (threadIdx.x == 0) - { - // Update tile status if this 
is not the last tile - if (!IS_LAST_TILE) - tile_state.SetInclusive(0, num_tile_selections); - } - - // Do not count any out-of-bounds selections - if (IS_LAST_TILE) - { - int num_discount = ITEMS_PER_TILE - num_tile_items; - num_tile_selections -= num_discount; - } - num_selections = num_tile_selections; - } - else - { - TilePrefixCallback prefix_cb(tile_state, - temp_storage.prefix, - cub::Sum(), - tile_idx); - BlockScan(temp_storage.scan) - .ExclusiveSum(selection_flags, - selection_idx, - prefix_cb); - - num_selections = prefix_cb.GetInclusivePrefix(); - num_tile_selections = prefix_cb.GetBlockAggregate(); - num_selections_prefix = prefix_cb.GetExclusivePrefix(); - num_rejected_prefix = tile_base - num_selections_prefix; - - if (IS_LAST_TILE) - { - int num_discount = ITEMS_PER_TILE - num_tile_items; - num_tile_selections -= num_discount; - num_selections -= num_discount; - } - } - - core::sync_threadblock(); - - scatter(items_loc, - selection_flags, - selection_idx, - num_tile_items, - num_tile_selections, - num_selections_prefix, - num_rejected_prefix, - num_selections); - - - return num_selections; - } - - - template - THRUST_DEVICE_FUNCTION Size - consume_tile(int num_tile_items, - int tile_idx, - Size tile_base) - { - if (tile_idx == 0) - { - return consume_tile_impl(num_tile_items, - tile_idx, - tile_base); - } - else - { - return consume_tile_impl(num_tile_items, - tile_idx, - tile_base); - } - } - - //--------------------------------------------------------------------- - // Constructor - //--------------------------------------------------------------------- - - THRUST_DEVICE_FUNCTION - impl(TempStorage & temp_storage_, - ScanTileState & tile_state_, - ItemsLoadIt items_glob_, - StencilLoadIt stencil_glob_, - SelectedOutIt selected_out_glob_, - RejectedOutIt rejected_out_glob_, - Predicate predicate_, - Size num_items_, - int num_tiles, - NumSelectedOutIt num_selected_out) - : temp_storage(temp_storage_), - tile_state(tile_state_), - items_glob(items_glob_), - stencil_glob(stencil_glob_), - selected_out_glob(selected_out_glob_), - rejected_out_glob(rejected_out_glob_), - predicate(predicate_), - num_items(num_items_) - { - int tile_idx = blockIdx.x; - Size tile_base = tile_idx * ITEMS_PER_TILE; - - if (tile_idx < num_tiles - 1) - { - consume_tile(ITEMS_PER_TILE, - tile_idx, - tile_base); - } - else - { - int num_remaining = static_cast(num_items - tile_base); - Size num_selections = consume_tile(num_remaining, - tile_idx, - tile_base); - if (threadIdx.x == 0) - { - *num_selected_out = num_selections; - } - } - } // - }; //struct impl - - //--------------------------------------------------------------------- - // Agent entry point - //--------------------------------------------------------------------- - - THRUST_AGENT_ENTRY(ItemsIt items, - StencilIt stencil, - SelectedOutIt selected_out, - RejectedOutIt rejected_out, - Predicate predicate, - Size num_items, - NumSelectedOutIt num_selected_out, - ScanTileState tile_state, - int num_tiles, - char * shmem) - { - TempStorage &storage = *reinterpret_cast(shmem); - - impl(storage, - tile_state, - core::make_load_iterator(ptx_plan(), items), - core::make_load_iterator(ptx_plan(), stencil), - selected_out, - rejected_out, - predicate, - num_items, - num_tiles, - num_selected_out); - } - }; // struct PartitionAgent - - template - struct InitAgent - { - template - struct PtxPlan : PtxPolicy<128> {}; - - - typedef core::specialize_plan ptx_plan; - - //--------------------------------------------------------------------- - // Agent 
entry point - //--------------------------------------------------------------------- - - THRUST_AGENT_ENTRY(ScanTileState tile_state, - Size num_tiles, - NumSelectedIt num_selected_out, - char * /*shmem*/) - { - tile_state.InitializeStatus(num_tiles); - if (blockIdx.x == 0 && threadIdx.x == 0) - *num_selected_out = 0; - } - - }; // struct InitAgent - - template - static cudaError_t THRUST_RUNTIME_FUNCTION - doit_step(void * d_temp_storage, - size_t & temp_storage_bytes, - ItemsIt items, - StencilIt stencil, - SelectedOutIt selected_out, - RejectedOutIt rejected_out, - Predicate predicate, - NumSelectedOutIt num_selected_out, - Size num_items, - cudaStream_t stream, - bool debug_sync) - { - using core::AgentLauncher; - using core::AgentPlan; - using core::get_agent_plan; - - typedef AgentLauncher< - PartitionAgent > - partition_agent; - - typedef typename partition_agent::ScanTileState ScanTileState; - - typedef AgentLauncher< - InitAgent > - init_agent; - - - using core::get_plan; - typename get_plan::type init_plan = init_agent::get_plan(); - typename get_plan::type partition_plan = partition_agent::get_plan(stream); - - int tile_size = partition_plan.items_per_tile; - size_t num_tiles = (num_items + tile_size - 1) / tile_size; - - size_t vshmem_storage = core::vshmem_size(partition_plan.shared_memory_size, - num_tiles); - - cudaError_t status = cudaSuccess; - if (num_items == 0) - return status; - - size_t allocation_sizes[2] = {0, vshmem_storage}; - status = ScanTileState::AllocationSize(static_cast(num_tiles), allocation_sizes[0]); - CUDA_CUB_RET_IF_FAIL(status); - - - void* allocations[2] = {NULL, NULL}; - status = cub::AliasTemporaries(d_temp_storage, - temp_storage_bytes, - allocations, - allocation_sizes); - CUDA_CUB_RET_IF_FAIL(status); - - if (d_temp_storage == NULL) - { - return status; - } - - ScanTileState tile_status; - status = tile_status.Init(static_cast(num_tiles), allocations[0], allocation_sizes[0]); - CUDA_CUB_RET_IF_FAIL(status); - - init_agent ia(init_plan, num_tiles, stream, "partition::init_agent", debug_sync); - - char *vshmem_ptr = vshmem_storage > 0 ? 
(char *)allocations[1] : NULL; - - partition_agent pa(partition_plan, num_items, stream, vshmem_ptr, "partition::partition_agent", debug_sync); - - ia.launch(tile_status, num_tiles, num_selected_out); - CUDA_CUB_RET_IF_FAIL(cudaPeekAtLastError()); - - pa.launch(items, - stencil, - selected_out, - rejected_out, - predicate, - num_items, - num_selected_out, - tile_status, - num_tiles); - CUDA_CUB_RET_IF_FAIL(cudaPeekAtLastError()); - return status; - - } - - template - THRUST_RUNTIME_FUNCTION - pair - partition(execution_policy& policy, - InputIt first, - InputIt last, - StencilIt stencil, - SelectedOutIt selected_result, - RejectedOutIt rejected_result, - Predicate predicate) - { - typedef typename iterator_traits::difference_type size_type; - - size_type num_items = static_cast(thrust::distance(first, last)); - size_t temp_storage_bytes = 0; - cudaStream_t stream = cuda_cub::stream(policy); - bool debug_sync = THRUST_DEBUG_SYNC_FLAG; - - cudaError_t status; - status = doit_step(NULL, - temp_storage_bytes, - first, - stencil, - selected_result, - rejected_result, - predicate, - reinterpret_cast(NULL), - num_items, - stream, - debug_sync); - cuda_cub::throw_on_error(status, "partition failed on 1st step"); - - size_t allocation_sizes[2] = {sizeof(size_type), temp_storage_bytes}; - void * allocations[2] = {NULL, NULL}; - - size_t storage_size = 0; - - status = core::alias_storage(NULL, - storage_size, - allocations, - allocation_sizes); - cuda_cub::throw_on_error(status, "partition failed on 1st alias_storage"); - - // Allocate temporary storage. - thrust::detail::temporary_array - tmp(policy, storage_size); - void *ptr = static_cast(tmp.data().get()); - - status = core::alias_storage(ptr, - storage_size, - allocations, - allocation_sizes); - cuda_cub::throw_on_error(status, "partition failed on 2nd alias_storage"); - - size_type* d_num_selected_out - = thrust::detail::aligned_reinterpret_cast(allocations[0]); - - status = doit_step(allocations[1], - temp_storage_bytes, - first, - stencil, - selected_result, - rejected_result, - predicate, - d_num_selected_out, - num_items, - stream, - debug_sync); - cuda_cub::throw_on_error(status, "partition failed on 2nd step"); - - status = cuda_cub::synchronize(policy); - cuda_cub::throw_on_error(status, "partition failed to synchronize"); - - size_type num_selected = 0; - if (num_items > 0) - { - num_selected = get_value(policy, d_num_selected_out); - } - - return thrust::make_pair(selected_result + num_selected, - rejected_result + num_items - num_selected); - } - - template - THRUST_RUNTIME_FUNCTION - Iterator partition_inplace(execution_policy& policy, - Iterator first, - Iterator last, - StencilIt stencil, - Predicate predicate) - { - typedef typename iterator_traits::difference_type size_type; - typedef typename iterator_traits::value_type value_type; - - size_type num_items = thrust::distance(first, last); - - // Allocate temporary storage. 
- thrust::detail::temporary_array tmp(policy, num_items); - - cuda_cub::uninitialized_copy(policy, first, last, tmp.begin()); - - pair result = - partition(policy, - tmp.data().get(), - tmp.data().get() + num_items, - stencil, - first, - single_output_tag(), - predicate); - - size_type num_selected = result.first - first; - - return first + num_selected; - } -} // namespace __partition - -///// copy - -//------------------------- -// Thrust API entry points -//------------------------- - -__thrust_exec_check_disable__ -template -pair __host__ __device__ -partition_copy(execution_policy &policy, - InputIt first, - InputIt last, - StencilIt stencil, - SelectedOutIt selected_result, - RejectedOutIt rejected_result, - Predicate predicate) -{ - pair ret = thrust::make_pair(selected_result, rejected_result); - if (__THRUST_HAS_CUDART__) - { - ret = __partition::partition(policy, - first, - last, - stencil, - selected_result, - rejected_result, - predicate); - } - else - { -#if !__THRUST_HAS_CUDART__ - ret = thrust::partition_copy(cvt_to_seq(derived_cast(policy)), - first, - last, - stencil, - selected_result, - rejected_result, - predicate); -#endif - } - return ret; -} - -__thrust_exec_check_disable__ -template -pair __host__ __device__ -partition_copy(execution_policy &policy, - InputIt first, - InputIt last, - SelectedOutIt selected_result, - RejectedOutIt rejected_result, - Predicate predicate) -{ - pair ret = thrust::make_pair(selected_result, rejected_result); - if (__THRUST_HAS_CUDART__) - { - ret = __partition::partition(policy, - first, - last, - __partition::no_stencil_tag(), - selected_result, - rejected_result, - predicate); - } - else - { -#if !__THRUST_HAS_CUDART__ - ret = thrust::partition_copy(cvt_to_seq(derived_cast(policy)), - first, - last, - selected_result, - rejected_result, - predicate); -#endif - } - return ret; -} - -__thrust_exec_check_disable__ -template -pair __host__ __device__ -stable_partition_copy(execution_policy &policy, - InputIt first, - InputIt last, - SelectedOutIt selected_result, - RejectedOutIt rejected_result, - Predicate predicate) -{ - pair ret = thrust::make_pair(selected_result, rejected_result); - if (__THRUST_HAS_CUDART__) - { - ret = __partition::partition(policy, - first, - last, - __partition::no_stencil_tag(), - selected_result, - rejected_result, - predicate); - } - else - { -#if !__THRUST_HAS_CUDART__ - ret = thrust::stable_partition_copy(cvt_to_seq(derived_cast(policy)), - first, - last, - selected_result, - rejected_result, - predicate); -#endif - } - return ret; -} - -__thrust_exec_check_disable__ -template -pair __host__ __device__ -stable_partition_copy(execution_policy &policy, - InputIt first, - InputIt last, - StencilIt stencil, - SelectedOutIt selected_result, - RejectedOutIt rejected_result, - Predicate predicate) -{ - pair ret = thrust::make_pair(selected_result, rejected_result); - if (__THRUST_HAS_CUDART__) - { - ret = __partition::partition(policy, - first, - last, - stencil, - selected_result, - rejected_result, - predicate); - } - else - { -#if !__THRUST_HAS_CUDART__ - ret = thrust::stable_partition_copy(cvt_to_seq(derived_cast(policy)), - first, - last, - stencil, - selected_result, - rejected_result, - predicate); -#endif - } - return ret; -} - -/// inplace - -__thrust_exec_check_disable__ -template -Iterator __host__ __device__ -partition(execution_policy &policy, - Iterator first, - Iterator last, - StencilIt stencil, - Predicate predicate) -{ - Iterator ret = first; - if (__THRUST_HAS_CUDART__) - { - ret = 
__partition::partition_inplace(policy, first, last, stencil, predicate); - } - else - { -#if !__THRUST_HAS_CUDART__ - ret = thrust::partition(cvt_to_seq(derived_cast(policy)), - first, - last, - stencil, - predicate); -#endif - } - return ret; -} - -__thrust_exec_check_disable__ -template -Iterator __host__ __device__ -partition(execution_policy &policy, - Iterator first, - Iterator last, - Predicate predicate) -{ - Iterator ret = first; - if (__THRUST_HAS_CUDART__) - { - ret = __partition::partition_inplace(policy, - first, - last, - __partition::no_stencil_tag(), - predicate); - } - else - { -#if !__THRUST_HAS_CUDART__ - ret = thrust::partition(cvt_to_seq(derived_cast(policy)), - first, - last, - predicate); -#endif - } - return ret; -} - -__thrust_exec_check_disable__ -template -Iterator __host__ __device__ -stable_partition(execution_policy &policy, - Iterator first, - Iterator last, - StencilIt stencil, - Predicate predicate) -{ - Iterator result = first; - if (__THRUST_HAS_CUDART__) - { - result = __partition::partition_inplace(policy, - first, - last, - stencil, - predicate); - - // partition returns rejected values in reverese order - // so reverse the rejected elements to make it stable - cuda_cub::reverse(policy, result, last); - } - else - { -#if !__THRUST_HAS_CUDART__ - result = thrust::stable_partition(cvt_to_seq(derived_cast(policy)), - first, - last, - stencil, - predicate); -#endif - } - return result; -} - -__thrust_exec_check_disable__ -template -Iterator __host__ __device__ -stable_partition(execution_policy &policy, - Iterator first, - Iterator last, - Predicate predicate) -{ - Iterator result = first; - if (__THRUST_HAS_CUDART__) - { - result = __partition::partition_inplace(policy, - first, - last, - __partition::no_stencil_tag(), - predicate); - - // partition returns rejected values in reverese order - // so reverse the rejected elements to make it stable - cuda_cub::reverse(policy, result, last); - } - else - { -#if !__THRUST_HAS_CUDART__ - result = thrust::stable_partition(cvt_to_seq(derived_cast(policy)), - first, - last, - predicate); -#endif - } - return result; -} - -template -bool __host__ __device__ -is_partitioned(execution_policy &policy, - ItemsIt first, - ItemsIt last, - Predicate predicate) -{ - ItemsIt boundary = cuda_cub::find_if_not(policy, first, last, predicate); - ItemsIt end = cuda_cub::find_if(policy,boundary,last,predicate); - return end == last; -} - - -} // namespace cuda_cub -} // end namespace thrust -#endif diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/reduce_by_key.h b/spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/reduce_by_key.h deleted file mode 100644 index d8e3b38c59093f309942cda0577580ee4c1df251..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/reduce_by_key.h +++ /dev/null @@ -1,57 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -#pragma once - -#include -#include -#include - -namespace thrust -{ -namespace system -{ -namespace tbb -{ -namespace detail -{ - - -template - thrust::pair - reduce_by_key(execution_policy &exec, - InputIterator1 keys_first, - InputIterator1 keys_last, - InputIterator2 values_first, - OutputIterator1 keys_output, - OutputIterator2 values_output, - BinaryPredicate binary_pred, - BinaryFunction binary_op); - - -} // end namespace detail -} // end namespace tbb -} // end namespace system -} // end namespace thrust - -#include - diff --git a/spaces/CVPR/WALT/mmdet/core/bbox/samplers/ohem_sampler.py b/spaces/CVPR/WALT/mmdet/core/bbox/samplers/ohem_sampler.py deleted file mode 100644 index 8b99f60ef0176f1b7a56665fb0f59272f65b84cd..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmdet/core/bbox/samplers/ohem_sampler.py +++ /dev/null @@ -1,107 +0,0 @@ -import torch - -from ..builder import BBOX_SAMPLERS -from ..transforms import bbox2roi -from .base_sampler import BaseSampler - - -@BBOX_SAMPLERS.register_module() -class OHEMSampler(BaseSampler): - r"""Online Hard Example Mining Sampler described in `Training Region-based - Object Detectors with Online Hard Example Mining - `_. - """ - - def __init__(self, - num, - pos_fraction, - context, - neg_pos_ub=-1, - add_gt_as_proposals=True, - **kwargs): - super(OHEMSampler, self).__init__(num, pos_fraction, neg_pos_ub, - add_gt_as_proposals) - self.context = context - if not hasattr(self.context, 'num_stages'): - self.bbox_head = self.context.bbox_head - else: - self.bbox_head = self.context.bbox_head[self.context.current_stage] - - def hard_mining(self, inds, num_expected, bboxes, labels, feats): - with torch.no_grad(): - rois = bbox2roi([bboxes]) - if not hasattr(self.context, 'num_stages'): - bbox_results = self.context._bbox_forward(feats, rois) - else: - bbox_results = self.context._bbox_forward( - self.context.current_stage, feats, rois) - cls_score = bbox_results['cls_score'] - loss = self.bbox_head.loss( - cls_score=cls_score, - bbox_pred=None, - rois=rois, - labels=labels, - label_weights=cls_score.new_ones(cls_score.size(0)), - bbox_targets=None, - bbox_weights=None, - reduction_override='none')['loss_cls'] - _, topk_loss_inds = loss.topk(num_expected) - return inds[topk_loss_inds] - - def _sample_pos(self, - assign_result, - num_expected, - bboxes=None, - feats=None, - **kwargs): - """Sample positive boxes. - - Args: - assign_result (:obj:`AssignResult`): Assigned results - num_expected (int): Number of expected positive samples - bboxes (torch.Tensor, optional): Boxes. Defaults to None. - feats (list[torch.Tensor], optional): Multi-level features. - Defaults to None. - - Returns: - torch.Tensor: Indices of positive samples - """ - # Sample some hard positive samples - pos_inds = torch.nonzero(assign_result.gt_inds > 0, as_tuple=False) - if pos_inds.numel() != 0: - pos_inds = pos_inds.squeeze(1) - if pos_inds.numel() <= num_expected: - return pos_inds - else: - return self.hard_mining(pos_inds, num_expected, bboxes[pos_inds], - assign_result.labels[pos_inds], feats) - - def _sample_neg(self, - assign_result, - num_expected, - bboxes=None, - feats=None, - **kwargs): - """Sample negative boxes. - - Args: - assign_result (:obj:`AssignResult`): Assigned results - num_expected (int): Number of expected negative samples - bboxes (torch.Tensor, optional): Boxes. Defaults to None. - feats (list[torch.Tensor], optional): Multi-level features. - Defaults to None. 
- - Returns: - torch.Tensor: Indices of negative samples - """ - # Sample some hard negative samples - neg_inds = torch.nonzero(assign_result.gt_inds == 0, as_tuple=False) - if neg_inds.numel() != 0: - neg_inds = neg_inds.squeeze(1) - if len(neg_inds) <= num_expected: - return neg_inds - else: - neg_labels = assign_result.labels.new_empty( - neg_inds.size(0)).fill_(self.bbox_head.num_classes) - return self.hard_mining(neg_inds, num_expected, bboxes[neg_inds], - neg_labels, feats) diff --git a/spaces/CVPR/WALT/mmdet/models/roi_heads/double_roi_head.py b/spaces/CVPR/WALT/mmdet/models/roi_heads/double_roi_head.py deleted file mode 100644 index a1aa6c8244a889fbbed312a89574c3e11be294f0..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmdet/models/roi_heads/double_roi_head.py +++ /dev/null @@ -1,33 +0,0 @@ -from ..builder import HEADS -from .standard_roi_head import StandardRoIHead - - -@HEADS.register_module() -class DoubleHeadRoIHead(StandardRoIHead): - """RoI head for Double Head RCNN. - - https://arxiv.org/abs/1904.06493 - """ - - def __init__(self, reg_roi_scale_factor, **kwargs): - super(DoubleHeadRoIHead, self).__init__(**kwargs) - self.reg_roi_scale_factor = reg_roi_scale_factor - - def _bbox_forward(self, x, rois): - """Box head forward function used in both training and testing time.""" - bbox_cls_feats = self.bbox_roi_extractor( - x[:self.bbox_roi_extractor.num_inputs], rois) - bbox_reg_feats = self.bbox_roi_extractor( - x[:self.bbox_roi_extractor.num_inputs], - rois, - roi_scale_factor=self.reg_roi_scale_factor) - if self.with_shared_head: - bbox_cls_feats = self.shared_head(bbox_cls_feats) - bbox_reg_feats = self.shared_head(bbox_reg_feats) - cls_score, bbox_pred = self.bbox_head(bbox_cls_feats, bbox_reg_feats) - - bbox_results = dict( - cls_score=cls_score, - bbox_pred=bbox_pred, - bbox_feats=bbox_cls_feats) - return bbox_results diff --git a/spaces/CVPR/lama-example/saicinpainting/training/losses/segmentation.py b/spaces/CVPR/lama-example/saicinpainting/training/losses/segmentation.py deleted file mode 100644 index 3d4a9f94eaae84722db584277dbbf9bc41ede357..0000000000000000000000000000000000000000 --- a/spaces/CVPR/lama-example/saicinpainting/training/losses/segmentation.py +++ /dev/null @@ -1,43 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - -from .constants import weights as constant_weights - - -class CrossEntropy2d(nn.Module): - def __init__(self, reduction="mean", ignore_label=255, weights=None, *args, **kwargs): - """ - weight (Tensor, optional): a manual rescaling weight given to each class. 
- If given, has to be a Tensor of size "nclasses" - """ - super(CrossEntropy2d, self).__init__() - self.reduction = reduction - self.ignore_label = ignore_label - self.weights = weights - if self.weights is not None: - device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') - self.weights = torch.FloatTensor(constant_weights[weights]).to(device) - - def forward(self, predict, target): - """ - Args: - predict:(n, c, h, w) - target:(n, 1, h, w) - """ - target = target.long() - assert not target.requires_grad - assert predict.dim() == 4, "{0}".format(predict.size()) - assert target.dim() == 4, "{0}".format(target.size()) - assert predict.size(0) == target.size(0), "{0} vs {1} ".format(predict.size(0), target.size(0)) - assert target.size(1) == 1, "{0}".format(target.size(1)) - assert predict.size(2) == target.size(2), "{0} vs {1} ".format(predict.size(2), target.size(2)) - assert predict.size(3) == target.size(3), "{0} vs {1} ".format(predict.size(3), target.size(3)) - target = target.squeeze(1) - n, c, h, w = predict.size() - target_mask = (target >= 0) * (target != self.ignore_label) - target = target[target_mask] - predict = predict.transpose(1, 2).transpose(2, 3).contiguous() - predict = predict[target_mask.view(n, h, w, 1).repeat(1, 1, 1, c)].view(-1, c) - loss = F.cross_entropy(predict, target, weight=self.weights, reduction=self.reduction) - return loss diff --git a/spaces/CVPR/regionclip-demo/app.py b/spaces/CVPR/regionclip-demo/app.py deleted file mode 100644 index 66db7e53054c9c66f2043a6a0053ae12958bc4cd..0000000000000000000000000000000000000000 --- a/spaces/CVPR/regionclip-demo/app.py +++ /dev/null @@ -1,125 +0,0 @@ -import argparse -import requests -import logging -import os -import gradio as gr -import numpy as np -import cv2 -import torch -import torch.nn as nn -from PIL import Image -from torchvision import transforms -from timm.data.constants import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD -from timm.data import create_transform -from config import get_config - -from collections import OrderedDict - -os.system("python -m pip install -e .") -os.system("pip install opencv-python timm diffdist h5py sklearn ftfy") -os.system("pip install git+https://github.com/lvis-dataset/lvis-api.git") - -import detectron2.utils.comm as comm -from detectron2.checkpoint import DetectionCheckpointer -from detectron2.config import get_cfg -from detectron2.data import MetadataCatalog -from detectron2.engine import DefaultTrainer as Trainer -from detectron2.engine import default_argument_parser, default_setup, hooks, launch -from detectron2.evaluation import ( - CityscapesInstanceEvaluator, - CityscapesSemSegEvaluator, - COCOEvaluator, - COCOPanopticEvaluator, - DatasetEvaluators, - LVISEvaluator, - PascalVOCDetectionEvaluator, - SemSegEvaluator, - verify_results, - FLICKR30KEvaluator, -) -from detectron2.modeling import GeneralizedRCNNWithTTA - -def parse_option(): - parser = argparse.ArgumentParser('RegionCLIP demo script', add_help=False) - parser.add_argument('--config-file', type=str, default="configs/CLIP_fast_rcnn_R_50_C4.yaml", metavar="FILE", help='path to config file', ) - args, unparsed = parser.parse_known_args() - - return args - -def build_transforms(img_size, center_crop=True): - t = [] - if center_crop: - size = int((256 / 224) * img_size) - t.append( - transforms.Resize(size) - ) - t.append( - transforms.CenterCrop(img_size) - ) - else: - t.append( - transforms.Resize(img_size) - ) - t.append(transforms.ToTensor()) - return transforms.Compose(t) - -def setup(args): - 
""" - Create configs and perform basic setups. - """ - cfg = get_cfg() - cfg.merge_from_file(args.config_file) - cfg.freeze() - default_setup(cfg, args) - return cfg - -''' -build model -''' -args = parse_option() -cfg = setup(args) - -model = Trainer.build_model(cfg) -DetectionCheckpointer(model, save_dir=cfg.OUTPUT_DIR).resume_or_load( - cfg.MODEL.WEIGHTS, resume=False -) -if cfg.MODEL.META_ARCHITECTURE in ['CLIPRCNN', 'CLIPFastRCNN', 'PretrainFastRCNN'] \ - and cfg.MODEL.CLIP.BB_RPN_WEIGHTS is not None\ - and cfg.MODEL.CLIP.CROP_REGION_TYPE == 'RPN': # load 2nd pretrained model - DetectionCheckpointer(model, save_dir=cfg.OUTPUT_DIR, bb_rpn_weights=True).resume_or_load( - cfg.MODEL.CLIP.BB_RPN_WEIGHTS, resume=False - ) - -''' -build data transform -''' -eval_transforms = build_transforms(800, center_crop=False) -# display_transforms = build_transforms4display(960, center_crop=False) - -def localize_object(image, texts): - img_t = eval_transforms(Image.fromarray(image).convert("RGB")) * 255 - model.eval() - with torch.no_grad(): - res = model(texts, [{"image": img_t}]) - - return res - - -image = gr.inputs.Image() - -gr.Interface( - description="Zero-Shot Object Detection with RegionCLIP (https://github.com/microsoft/RegionCLIP)", - fn=localize_object, - inputs=["image", "text"], - outputs=[ - gr.outputs.Image( - type="pil", - label="grounding results"), - ], - examples=[ - ["./birds.png", "a goldfinch"], - ["./apples_six.jpg", "a yellow apple"], - ["./wines.jpg", "milk shake"], - ["./logos.jpg", "a microsoft logo"], - ], -).launch() diff --git a/spaces/CVPR/transfiner/configs/new_baselines/mask_rcnn_R_50_FPN_200ep_LSJ.py b/spaces/CVPR/transfiner/configs/new_baselines/mask_rcnn_R_50_FPN_200ep_LSJ.py deleted file mode 100644 index 2a7c376da5f9269197c44079f3e0f3b09cdc63fa..0000000000000000000000000000000000000000 --- a/spaces/CVPR/transfiner/configs/new_baselines/mask_rcnn_R_50_FPN_200ep_LSJ.py +++ /dev/null @@ -1,14 +0,0 @@ -from .mask_rcnn_R_50_FPN_100ep_LSJ import ( - dataloader, - lr_multiplier, - model, - optimizer, - train, -) - -train.max_iter *= 2 # 100ep -> 200ep - -lr_multiplier.scheduler.milestones = [ - milestone * 2 for milestone in lr_multiplier.scheduler.milestones -] -lr_multiplier.scheduler.num_updates = train.max_iter diff --git a/spaces/Cecil8352/vits-models/mel_processing.py b/spaces/Cecil8352/vits-models/mel_processing.py deleted file mode 100644 index 3e252e76320522a8a4195a60665168f22769aec2..0000000000000000000000000000000000000000 --- a/spaces/Cecil8352/vits-models/mel_processing.py +++ /dev/null @@ -1,101 +0,0 @@ -import torch -import torch.utils.data -from librosa.filters import mel as librosa_mel_fn - -MAX_WAV_VALUE = 32768.0 - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - output = dynamic_range_compression_torch(magnitudes) - return output - - -def spectral_de_normalize_torch(magnitudes): - output = dynamic_range_decompression_torch(magnitudes) - return output - - -mel_basis = {} -hann_window = {} - - -def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global hann_window - dtype_device = 
str(y.dtype) + '_' + str(y.device) - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - return spec - - -def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax): - global mel_basis - dtype_device = str(spec.dtype) + '_' + str(spec.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device) - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - return spec - - -def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global mel_basis, hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device) - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - - return spec diff --git a/spaces/ChallengeHub/Chinese-LangChain/clc/__init__.py b/spaces/ChallengeHub/Chinese-LangChain/clc/__init__.py deleted file mode 100644 index 19f43cc3a0f55b4f02f26f8892b7a9e28a7e725c..0000000000000000000000000000000000000000 --- a/spaces/ChallengeHub/Chinese-LangChain/clc/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -#!/usr/bin/env python -# -*- coding:utf-8 _*- -""" -@author:quincy qiang -@license: Apache Licence -@file: __init__.py -@time: 2023/04/17 -@contact: yanqiangmiffy@gamil.com -@software: PyCharm -@description: coding.. 
-""" diff --git a/spaces/Comet/txt2im-models/app.py b/spaces/Comet/txt2im-models/app.py deleted file mode 100644 index e587fbc30722e387f1e14b313234a0ecb15aa6c0..0000000000000000000000000000000000000000 --- a/spaces/Comet/txt2im-models/app.py +++ /dev/null @@ -1,144 +0,0 @@ -import uuid - -import gradio as gr -import pandas as pd -from PIL import Image -from transformers import CLIPModel, CLIPProcessor - -from comet import get_experiment, get_experiment_status, start_experiment - -CLIP_MODEL_PATH = "openai/clip-vit-base-patch32" - -clip_model = CLIPModel.from_pretrained(CLIP_MODEL_PATH) -clip_processor = CLIPProcessor.from_pretrained(CLIP_MODEL_PATH) - -DESCRIPTION = """Glad to see you here 😄. -You can use this Space to log predictions to [Comet](https://www.comet.ml/site) from Spaces that use Text to Image Diffusion Models. - -Keep track of all your prompts and generated images so that you remember the good ones! - -Set your Comet credentials in the Comet Settings tab and create an Experiment for logging data. If you don't have credentials yet, -you can [sign up for Comet here](https://www.comet.ml/signup) - -If you want to continue logging to the same Experiment over multiple sessions, simply provide the experiment name. - -Set a path to a Space using that uses a Diffusion model and submit your prompt in the Diffusion Model tab - -** Note: ** This Space will still run even if you don't set credentials -""" - - -def predict( - model, - prompt, - experiment_state, -): - io = gr.Interface.load(model) - image = io(prompt) - pil_image = Image.open(image) - - inputs = clip_processor( - text=[prompt], - images=pil_image, - return_tensors="pt", - padding=True, - ) - outputs = clip_model(**inputs) - clip_score = outputs.logits_per_image.item() / 100.0 - - experiment = get_experiment(experiment_state) - if experiment is not None: - image_id = uuid.uuid4().hex - experiment.log_image(image, image_id) - - asset = pd.DataFrame.from_records( - [ - { - "prompt": prompt, - "model": model, - "clip_model": CLIP_MODEL_PATH, - "clip_score": round(clip_score, 3), - } - ] - ) - experiment.log_table(f"{image_id}.json", asset, orient="records") - - return image, experiment_state - - -def start_interface(): - demo = gr.Blocks() - with demo: - description = gr.Markdown(DESCRIPTION) - with gr.Tabs(): - with gr.TabItem(label="Comet Settings"): - # credentials - comet_api_key = gr.Textbox( - label="Comet API Key", - placeholder="This is required if you'd like to create an Experiment", - ) - comet_workspace = gr.Textbox(label="Comet Workspace") - comet_project_name = gr.Textbox(label="Comet Project Name") - comet_experiment_name = gr.Textbox( - label="Comet Experiment Name", - placeholder=( - "Set this if you'd like" - "to continue logging to an existing Experiment", - ), - ) - - with gr.Row(): - start = gr.Button("Start Experiment", variant="primary") - status = gr.Button("Experiment Status") - - status_output = gr.Textbox(label="Status") - experiment_state = gr.Variable(label="Experiment State") - - start.click( - start_experiment, - inputs=[ - comet_api_key, - comet_workspace, - comet_project_name, - comet_experiment_name, - experiment_state, - ], - outputs=[experiment_state, status_output], - ) - - status.click( - get_experiment_status, - inputs=[experiment_state], - outputs=[experiment_state, status_output], - ) - - with gr.TabItem(label="Diffusion Model"): - diff_description = gr.Markdown( - """The Model must be a path to any Space that accepts - only text as input and produces an image as an output - """ - ) - 
model = gr.Textbox( - label="Model", - value="spaces/valhalla/glide-text2im", - placeholder="Enter a path to a Space", - ) - prompt = gr.Textbox( - label="Prompt", - value="an oil painting of a corgi", - placeholder="Enter your text prompt here", - ) - - outputs = gr.Image(label="Image") - - submit = gr.Button("Submit", variant="primary") - submit.click( - predict, - inputs=[model, prompt, experiment_state], - outputs=[outputs, experiment_state], - ) - - demo.launch() - - -start_interface() diff --git a/spaces/DJQmUKV/rvc-inference/infer_pack/models_onnx.py b/spaces/DJQmUKV/rvc-inference/infer_pack/models_onnx.py deleted file mode 100644 index 3c5be53a572151820de7d82dfce84f2e2979ed56..0000000000000000000000000000000000000000 --- a/spaces/DJQmUKV/rvc-inference/infer_pack/models_onnx.py +++ /dev/null @@ -1,760 +0,0 @@ -import math, pdb, os -from time import time as ttime -import torch -from torch import nn -from torch.nn import functional as F -from infer_pack import modules -from infer_pack import attentions -from infer_pack import commons -from infer_pack.commons import init_weights, get_padding -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from infer_pack.commons import init_weights -import numpy as np -from infer_pack import commons - - -class TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class TextEncoder256Sim(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) 
- else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - x = self.proj(x) * x_mask - return x, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, 
resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class SineGen(torch.nn.Module): - """Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__( - self, - samp_rate, - harmonic_num=0, - sine_amp=0.1, - noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False, - ): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return uv - - def forward(self, f0, upp): - """sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0 = f0[:, None].transpose(1, 2) - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * ( - idx + 2 - ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - rad_values = (f0_buf / self.sampling_rate) % 1 ###%1意味着n_har的乘积无法后处理优化 - rand_ini = torch.rand( - f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device - ) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - tmp_over_one = torch.cumsum(rad_values, 1) # % 1 #####%1意味着后面的cumsum无法再优化 - tmp_over_one *= upp - tmp_over_one = F.interpolate( - tmp_over_one.transpose(2, 1), - scale_factor=upp, - mode="linear", - align_corners=True, - ).transpose(2, 1) - rad_values = F.interpolate( - rad_values.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose( - 2, 1 - ) ####### - tmp_over_one %= 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - sine_waves = torch.sin( - torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi - ) - sine_waves = sine_waves * 
self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate( - uv.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__( - self, - sampling_rate, - harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - # to produce sine waveforms - self.l_sin_gen = SineGen( - sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod - ) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp=None): - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - if self.is_half: - sine_wavs = sine_wavs.half() - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None # noise, uv - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - 
self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMs256NSFsidO(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, pitch, nsff0, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, 
use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/varLib/mvar.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/varLib/mvar.py deleted file mode 100644 index 653aeb45e0ff18e06c2dd04ad58085d77a73c1b5..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/varLib/mvar.py +++ /dev/null @@ -1,40 +0,0 @@ -MVAR_ENTRIES = { - "hasc": ("OS/2", 
"sTypoAscender"), # horizontal ascender - "hdsc": ("OS/2", "sTypoDescender"), # horizontal descender - "hlgp": ("OS/2", "sTypoLineGap"), # horizontal line gap - "hcla": ("OS/2", "usWinAscent"), # horizontal clipping ascent - "hcld": ("OS/2", "usWinDescent"), # horizontal clipping descent - "vasc": ("vhea", "ascent"), # vertical ascender - "vdsc": ("vhea", "descent"), # vertical descender - "vlgp": ("vhea", "lineGap"), # vertical line gap - "hcrs": ("hhea", "caretSlopeRise"), # horizontal caret rise - "hcrn": ("hhea", "caretSlopeRun"), # horizontal caret run - "hcof": ("hhea", "caretOffset"), # horizontal caret offset - "vcrs": ("vhea", "caretSlopeRise"), # vertical caret rise - "vcrn": ("vhea", "caretSlopeRun"), # vertical caret run - "vcof": ("vhea", "caretOffset"), # vertical caret offset - "xhgt": ("OS/2", "sxHeight"), # x height - "cpht": ("OS/2", "sCapHeight"), # cap height - "sbxs": ("OS/2", "ySubscriptXSize"), # subscript em x size - "sbys": ("OS/2", "ySubscriptYSize"), # subscript em y size - "sbxo": ("OS/2", "ySubscriptXOffset"), # subscript em x offset - "sbyo": ("OS/2", "ySubscriptYOffset"), # subscript em y offset - "spxs": ("OS/2", "ySuperscriptXSize"), # superscript em x size - "spys": ("OS/2", "ySuperscriptYSize"), # superscript em y size - "spxo": ("OS/2", "ySuperscriptXOffset"), # superscript em x offset - "spyo": ("OS/2", "ySuperscriptYOffset"), # superscript em y offset - "strs": ("OS/2", "yStrikeoutSize"), # strikeout size - "stro": ("OS/2", "yStrikeoutPosition"), # strikeout offset - "unds": ("post", "underlineThickness"), # underline size - "undo": ("post", "underlinePosition"), # underline offset - #'gsp0': ('gasp', 'gaspRange[0].rangeMaxPPEM'), # gaspRange[0] - #'gsp1': ('gasp', 'gaspRange[1].rangeMaxPPEM'), # gaspRange[1] - #'gsp2': ('gasp', 'gaspRange[2].rangeMaxPPEM'), # gaspRange[2] - #'gsp3': ('gasp', 'gaspRange[3].rangeMaxPPEM'), # gaspRange[3] - #'gsp4': ('gasp', 'gaspRange[4].rangeMaxPPEM'), # gaspRange[4] - #'gsp5': ('gasp', 'gaspRange[5].rangeMaxPPEM'), # gaspRange[5] - #'gsp6': ('gasp', 'gaspRange[6].rangeMaxPPEM'), # gaspRange[6] - #'gsp7': ('gasp', 'gaspRange[7].rangeMaxPPEM'), # gaspRange[7] - #'gsp8': ('gasp', 'gaspRange[8].rangeMaxPPEM'), # gaspRange[8] - #'gsp9': ('gasp', 'gaspRange[9].rangeMaxPPEM'), # gaspRange[9] -} diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-edf307d2.css b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-edf307d2.css deleted file mode 100644 index 690ed736f2c29c32ba8499343659e9fde81f2098..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-edf307d2.css +++ /dev/null @@ -1 +0,0 @@ -div.svelte-1yrv54 .math.inline{fill:var(--body-text-color);display:inline-block;vertical-align:middle;padding:var(--size-1-5) -var(--size-1);color:var(--body-text-color)}div.svelte-1yrv54 .math.inline svg{display:inline;margin-bottom:.22em}div.svelte-1yrv54{max-width:100%}.min.svelte-1yrv54{min-height:var(--size-24)}.hide.svelte-1yrv54{display:none}div.svelte-1ed2p3z{transition:.15s}.pending.svelte-1ed2p3z{opacity:.2} diff --git a/spaces/DataForGood/bechdelai-demo/README.md b/spaces/DataForGood/bechdelai-demo/README.md deleted file mode 100644 index f2dc13d8860df0d192fd1c1a78863e0a02e471cb..0000000000000000000000000000000000000000 --- a/spaces/DataForGood/bechdelai-demo/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: BechdelAI Demo -emoji: 🎥 
-colorFrom: blue -colorTo: green -sdk: gradio -sdk_version: 3.10.1 -app_file: app.py -pinned: false ---- - -# bechdelai-tool-demo diff --git a/spaces/Datasculptor/MusicGen/tests/common_utils/__init__.py b/spaces/Datasculptor/MusicGen/tests/common_utils/__init__.py deleted file mode 100644 index 74ffcfef96fec35c99b2a1a053a61f44f7a8bbe9..0000000000000000000000000000000000000000 --- a/spaces/Datasculptor/MusicGen/tests/common_utils/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -# flake8: noqa -from .temp_utils import TempDirMixin -from .wav_utils import get_batch_white_noise, get_white_noise, save_wav diff --git a/spaces/Detomo/ai-avatar-frontend/src/App.test.js b/spaces/Detomo/ai-avatar-frontend/src/App.test.js deleted file mode 100644 index 1f03afeece5ac28064fa3c73a29215037465f789..0000000000000000000000000000000000000000 --- a/spaces/Detomo/ai-avatar-frontend/src/App.test.js +++ /dev/null @@ -1,8 +0,0 @@ -import { render, screen } from '@testing-library/react'; -import App from './App'; - -test('renders learn react link', () => { - render(); - const linkElement = screen.getByText(/learn react/i); - expect(linkElement).toBeInTheDocument(); -}); diff --git a/spaces/DpNaze/Dreamlikeart/style.css b/spaces/DpNaze/Dreamlikeart/style.css deleted file mode 100644 index fdbef9e64cc6b9f8003698ffa38997ee22a640ac..0000000000000000000000000000000000000000 --- a/spaces/DpNaze/Dreamlikeart/style.css +++ /dev/null @@ -1,84 +0,0 @@ -#col-container { - max-width: 800px; - margin-left: auto; - margin-right: auto; -} -a { - color: inherit; - text-decoration: underline; -} -.gradio-container { - font-family: 'IBM Plex Sans', sans-serif; -} -.gr-button { - color: white; - border-color: #9d66e5; - background: #9d66e5; -} -input[type='range'] { - accent-color: #9d66e5; -} -.dark input[type='range'] { - accent-color: #dfdfdf; -} -.container { - max-width: 800px; - margin: auto; - padding-top: 1.5rem; -} -#gallery { - min-height: 22rem; - margin-bottom: 15px; - margin-left: auto; - margin-right: auto; - border-bottom-right-radius: .5rem !important; - border-bottom-left-radius: .5rem !important; -} -#gallery>div>.h-full { - min-height: 20rem; -} -.details:hover { - text-decoration: underline; -} -.gr-button { - white-space: nowrap; -} -.gr-button:focus { - border-color: rgb(147 197 253 / var(--tw-border-opacity)); - outline: none; - box-shadow: var(--tw-ring-offset-shadow), var(--tw-ring-shadow), var(--tw-shadow, 0 0 #0000); - --tw-border-opacity: 1; - --tw-ring-offset-shadow: var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color); - --tw-ring-shadow: var(--tw-ring-inset) 0 0 0 calc(3px var(--tw-ring-offset-width)) var(--tw-ring-color); - --tw-ring-color: rgb(191 219 254 / var(--tw-ring-opacity)); - --tw-ring-opacity: .5; -} -#advanced-options { - margin-bottom: 20px; -} -.footer { - margin-bottom: 45px; - margin-top: 35px; - text-align: center; - border-bottom: 1px solid #e5e5e5; -} -.footer>p { - font-size: .8rem; - display: inline-block; - padding: 0 10px; - transform: translateY(10px); - background: white; -} -.dark .logo{ filter: invert(1); } -.dark .footer { - border-color: #303030; -} -.dark .footer>p { - background: #0b0f19; -} -.acknowledgments h4{ - margin: 1.25em 0 .25em 0; - font-weight: bold; - font-size: 115%; -} - diff --git a/spaces/DrGabrielLopez/fractal-generator/README.md 
b/spaces/DrGabrielLopez/fractal-generator/README.md deleted file mode 100644 index 5965e3d443e1b0421836ecb345c27d8bf8131a67..0000000000000000000000000000000000000000 --- a/spaces/DrGabrielLopez/fractal-generator/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Fractal Generator -emoji: 😀 -colorFrom: indigo -colorTo: pink -sdk: gradio -sdk_version: 3.9.1 -app_file: app.py -pinned: false -license: cc-by-nc-sa-4.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/DragGan/DragGan-Inversion/stylegan_human/__init__.py b/spaces/DragGan/DragGan-Inversion/stylegan_human/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/DragGan/DragGan/stylegan_human/pti/training/coaches/single_id_coach.py b/spaces/DragGan/DragGan/stylegan_human/pti/training/coaches/single_id_coach.py deleted file mode 100644 index 7521a6eed000de76c14504f293efd2b6789eb5f1..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan/stylegan_human/pti/training/coaches/single_id_coach.py +++ /dev/null @@ -1,89 +0,0 @@ -# Copyright (c) SenseTime Research. All rights reserved. - -import os -import torch -from tqdm import tqdm -from pti.pti_configs import paths_config, hyperparameters, global_config -from pti.training.coaches.base_coach import BaseCoach -from utils.log_utils import log_images_from_w -from torchvision.utils import save_image - -class SingleIDCoach(BaseCoach): - - def __init__(self, data_loader, use_wandb): - super().__init__(data_loader, use_wandb) - - def train(self): - - w_path_dir = f'{paths_config.embedding_base_dir}/{paths_config.input_data_id}' - os.makedirs(w_path_dir, exist_ok=True) - os.makedirs(f'{w_path_dir}/{paths_config.pti_results_keyword}', exist_ok=True) - - use_ball_holder = True - - for fname, image in tqdm(self.data_loader): - image_name = fname[0] - - self.restart_training() - - if self.image_counter >= hyperparameters.max_images_to_invert: - break - - embedding_dir = f'{w_path_dir}/{paths_config.pti_results_keyword}/{image_name}' - os.makedirs(embedding_dir, exist_ok=True) - - w_pivot = None - - if hyperparameters.use_last_w_pivots: - w_pivot = self.load_inversions(w_path_dir, image_name) -# Copyright (c) SenseTime Research. All rights reserved. 
- - elif not hyperparameters.use_last_w_pivots or w_pivot is None: - w_pivot = self.calc_inversions(image, image_name) - - # w_pivot = w_pivot.detach().clone().to(global_config.device) - w_pivot = w_pivot.to(global_config.device) - - torch.save(w_pivot, f'{embedding_dir}/0.pt') - log_images_counter = 0 - real_images_batch = image.to(global_config.device) - - for i in range(hyperparameters.max_pti_steps): - - generated_images = self.forward(w_pivot) - loss, l2_loss_val, loss_lpips = self.calc_loss(generated_images, real_images_batch, image_name, - self.G, use_ball_holder, w_pivot) - if i == 0: - tmp1 = torch.clone(generated_images) - if i % 10 == 0: - print("pti loss: ", i, loss.data, loss_lpips.data) - self.optimizer.zero_grad() - - if loss_lpips <= hyperparameters.LPIPS_value_threshold: - break - - loss.backward() - self.optimizer.step() - - use_ball_holder = global_config.training_step % hyperparameters.locality_regularization_interval == 0 - - if self.use_wandb and log_images_counter % global_config.image_rec_result_log_snapshot == 0: - log_images_from_w([w_pivot], self.G, [image_name]) - - global_config.training_step += 1 - log_images_counter += 1 - - # save output image - tmp = torch.cat([real_images_batch, tmp1, generated_images], axis= 3) - save_image(tmp, f"{paths_config.experiments_output_dir}/{image_name}.png", normalize=True) - - - self.image_counter += 1 - - # torch.save(self.G, - # f'{paths_config.checkpoints_dir}/model_{image_name}.pt') #'.pt' - snapshot_data = dict() - snapshot_data['G_ema'] = self.G - import pickle - with open(f'{paths_config.checkpoints_dir}/model_{image_name}.pkl', 'wb') as f: - pickle.dump(snapshot_data, f) diff --git a/spaces/Enterprisium/Easy_GUI/lib/infer_pack/modules/F0Predictor/__init__.py b/spaces/Enterprisium/Easy_GUI/lib/infer_pack/modules/F0Predictor/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/EronSamez/RVC_HFmeu/lib/uvr5_pack/lib_v5/layers_33966KB.py b/spaces/EronSamez/RVC_HFmeu/lib/uvr5_pack/lib_v5/layers_33966KB.py deleted file mode 100644 index a38b7bb3ae3136b07eadfc2db445fef4c2de186b..0000000000000000000000000000000000000000 --- a/spaces/EronSamez/RVC_HFmeu/lib/uvr5_pack/lib_v5/layers_33966KB.py +++ /dev/null @@ -1,126 +0,0 @@ -import torch -from torch import nn -import torch.nn.functional as F - -from . 
import spec_utils - - -class Conv2DBNActiv(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU): - super(Conv2DBNActiv, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - nin, - nout, - kernel_size=ksize, - stride=stride, - padding=pad, - dilation=dilation, - bias=False, - ), - nn.BatchNorm2d(nout), - activ(), - ) - - def __call__(self, x): - return self.conv(x) - - -class SeperableConv2DBNActiv(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU): - super(SeperableConv2DBNActiv, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - nin, - nin, - kernel_size=ksize, - stride=stride, - padding=pad, - dilation=dilation, - groups=nin, - bias=False, - ), - nn.Conv2d(nin, nout, kernel_size=1, bias=False), - nn.BatchNorm2d(nout), - activ(), - ) - - def __call__(self, x): - return self.conv(x) - - -class Encoder(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.LeakyReLU): - super(Encoder, self).__init__() - self.conv1 = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ) - self.conv2 = Conv2DBNActiv(nout, nout, ksize, stride, pad, activ=activ) - - def __call__(self, x): - skip = self.conv1(x) - h = self.conv2(skip) - - return h, skip - - -class Decoder(nn.Module): - def __init__( - self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.ReLU, dropout=False - ): - super(Decoder, self).__init__() - self.conv = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ) - self.dropout = nn.Dropout2d(0.1) if dropout else None - - def __call__(self, x, skip=None): - x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True) - if skip is not None: - skip = spec_utils.crop_center(skip, x) - x = torch.cat([x, skip], dim=1) - h = self.conv(x) - - if self.dropout is not None: - h = self.dropout(h) - - return h - - -class ASPPModule(nn.Module): - def __init__(self, nin, nout, dilations=(4, 8, 16, 32, 64), activ=nn.ReLU): - super(ASPPModule, self).__init__() - self.conv1 = nn.Sequential( - nn.AdaptiveAvgPool2d((1, None)), - Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ), - ) - self.conv2 = Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ) - self.conv3 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[0], dilations[0], activ=activ - ) - self.conv4 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[1], dilations[1], activ=activ - ) - self.conv5 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[2], dilations[2], activ=activ - ) - self.conv6 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[2], dilations[2], activ=activ - ) - self.conv7 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[2], dilations[2], activ=activ - ) - self.bottleneck = nn.Sequential( - Conv2DBNActiv(nin * 7, nout, 1, 1, 0, activ=activ), nn.Dropout2d(0.1) - ) - - def forward(self, x): - _, _, h, w = x.size() - feat1 = F.interpolate( - self.conv1(x), size=(h, w), mode="bilinear", align_corners=True - ) - feat2 = self.conv2(x) - feat3 = self.conv3(x) - feat4 = self.conv4(x) - feat5 = self.conv5(x) - feat6 = self.conv6(x) - feat7 = self.conv7(x) - out = torch.cat((feat1, feat2, feat3, feat4, feat5, feat6, feat7), dim=1) - bottle = self.bottleneck(out) - return bottle diff --git a/spaces/EsoCode/text-generation-webui/extensions/google_translate/script.py b/spaces/EsoCode/text-generation-webui/extensions/google_translate/script.py deleted file mode 100644 index 63226107b2c2afe086fc343c7b7f7df78bef3f8a..0000000000000000000000000000000000000000 --- 
a/spaces/EsoCode/text-generation-webui/extensions/google_translate/script.py +++ /dev/null @@ -1,46 +0,0 @@ -import gradio as gr -from deep_translator import GoogleTranslator - -params = { - "language string": "ja", -} - -language_codes = {'Afrikaans': 'af', 'Albanian': 'sq', 'Amharic': 'am', 'Arabic': 'ar', 'Armenian': 'hy', 'Azerbaijani': 'az', 'Basque': 'eu', 'Belarusian': 'be', 'Bengali': 'bn', 'Bosnian': 'bs', 'Bulgarian': 'bg', 'Catalan': 'ca', 'Cebuano': 'ceb', 'Chinese (Simplified)': 'zh-CN', 'Chinese (Traditional)': 'zh-TW', 'Corsican': 'co', 'Croatian': 'hr', 'Czech': 'cs', 'Danish': 'da', 'Dutch': 'nl', 'English': 'en', 'Esperanto': 'eo', 'Estonian': 'et', 'Finnish': 'fi', 'French': 'fr', 'Frisian': 'fy', 'Galician': 'gl', 'Georgian': 'ka', 'German': 'de', 'Greek': 'el', 'Gujarati': 'gu', 'Haitian Creole': 'ht', 'Hausa': 'ha', 'Hawaiian': 'haw', 'Hebrew': 'iw', 'Hindi': 'hi', 'Hmong': 'hmn', 'Hungarian': 'hu', 'Icelandic': 'is', 'Igbo': 'ig', 'Indonesian': 'id', 'Irish': 'ga', 'Italian': 'it', 'Japanese': 'ja', 'Javanese': 'jw', 'Kannada': 'kn', 'Kazakh': 'kk', 'Khmer': 'km', 'Korean': 'ko', 'Kurdish': 'ku', 'Kyrgyz': 'ky', 'Lao': 'lo', 'Latin': 'la', 'Latvian': 'lv', 'Lithuanian': 'lt', 'Luxembourgish': 'lb', 'Macedonian': 'mk', 'Malagasy': 'mg', 'Malay': 'ms', 'Malayalam': 'ml', 'Maltese': 'mt', 'Maori': 'mi', 'Marathi': 'mr', 'Mongolian': 'mn', 'Myanmar (Burmese)': 'my', 'Nepali': 'ne', 'Norwegian': 'no', 'Nyanja (Chichewa)': 'ny', 'Pashto': 'ps', 'Persian': 'fa', 'Polish': 'pl', 'Portuguese (Portugal, Brazil)': 'pt', 'Punjabi': 'pa', 'Romanian': 'ro', 'Russian': 'ru', 'Samoan': 'sm', 'Scots Gaelic': 'gd', 'Serbian': 'sr', 'Sesotho': 'st', 'Shona': 'sn', 'Sindhi': 'sd', 'Sinhala (Sinhalese)': 'si', 'Slovak': 'sk', 'Slovenian': 'sl', 'Somali': 'so', 'Spanish': 'es', 'Sundanese': 'su', 'Swahili': 'sw', 'Swedish': 'sv', 'Tagalog (Filipino)': 'tl', 'Tajik': 'tg', 'Tamil': 'ta', 'Telugu': 'te', 'Thai': 'th', 'Turkish': 'tr', 'Ukrainian': 'uk', 'Urdu': 'ur', 'Uzbek': 'uz', 'Vietnamese': 'vi', 'Welsh': 'cy', 'Xhosa': 'xh', 'Yiddish': 'yi', 'Yoruba': 'yo', 'Zulu': 'zu'} - - -def input_modifier(string): - """ - This function is applied to your text inputs before - they are fed into the model. - """ - - return GoogleTranslator(source=params['language string'], target='en').translate(string) - - -def output_modifier(string): - """ - This function is applied to the model outputs. - """ - - return GoogleTranslator(source='en', target=params['language string']).translate(string) - - -def bot_prefix_modifier(string): - """ - This function is only applied in chat mode. It modifies - the prefix text for the Bot and can be used to bias its - behavior. 
- """ - - return string - - -def ui(): - # Finding the language name from the language code to use as the default value - language_name = list(language_codes.keys())[list(language_codes.values()).index(params['language string'])] - - # Gradio elements - language = gr.Dropdown(value=language_name, choices=[k for k in language_codes], label='Language') - - # Event functions to update the parameters in the backend - language.change(lambda x: params.update({"language string": language_codes[x]}), language, None) diff --git a/spaces/EuroPython2022/clickbaitonator/fudge/poetry_util.py b/spaces/EuroPython2022/clickbaitonator/fudge/poetry_util.py deleted file mode 100644 index b8e27da3880a305df542794c8f14a3819e27d52e..0000000000000000000000000000000000000000 --- a/spaces/EuroPython2022/clickbaitonator/fudge/poetry_util.py +++ /dev/null @@ -1,83 +0,0 @@ -import string - -import pronouncing -from Phyme import Phyme -phyme = Phyme() - -from fudge.constants import * - -def is_iambic(phrase): - """ - check that we satisfy iambic meter. - return 1 if so, otherwise 0. - definitely an imperfect check... - if we end up needing to check a word that's not in the CMU dictionary, just return 0. - """ - meter = '' - for word in phrase.split(): - word = word.strip().strip(string.punctuation).lower() - try: - phones_list = pronouncing.phones_for_word(word) - stresses = pronouncing.stresses(phones_list[0]) - if len(stresses) == 1: - if stresses == '1': - stresses = '2' # allow ambiguity for 1-syllable words with stress 1 - meter += stresses # just default to the first pronunciation if > 1 given - except: - return 0 # word not found - meter = [int(x) for x in meter] - even_stresses_full = [meter[i] for i in range(0, len(meter), 2)] - odd_stresses_full = [meter[i] for i in range(1, len(meter), 2)] - even_stresses = set(even_stresses_full) - odd_stresses = set(odd_stresses_full) - if 0 in odd_stresses: - return 0 - if 1 in even_stresses: - return 0 - return 1 - - -def count_syllables(words): - syllables = 0 - for word in words.split(): - word = word.strip().strip(string.punctuation) - try: - phones_list = pronouncing.phones_for_word(word) - stresses = pronouncing.stresses(phones_list[0]) - syllables += min(MAX_SYLLABLES_PER_WORD, len(stresses)) - except: - # if we don't know, just do a quick approximation here; it shouldn't come up too often - syllables += min(MAX_SYLLABLES_PER_WORD, round(len(word) / 3)) - return syllables - - -def get_rhymes(word): - # throws exception if word not in the rhyme dict (rare) - rhymes = [] - rhyme_dict = phyme.get_perfect_rhymes(word) - for length_dict in rhyme_dict.values(): - for word in length_dict: - if '(' in word: # sometimes you have stuff like preferred(1) where they indicate a particular pronunciation - rhymes.append(word.split('(')[0]) - else: - rhymes.append(word) - return sorted(list(set(rhymes))) - - -def get_rhyme_group(word): - sorted_rhyme_list = get_rhymes(word) - return ' '.join(sorted_rhyme_list) - - -def perfect_rhyme_end(s1, s2): - ending_word1 = s1.split()[-1].strip(string.punctuation) - ending_word2 = s2.split()[-1].strip(string.punctuation) - try: - return get_rhyme_group(ending_word1) == get_rhyme_group(ending_word2) - except: - return False # unknown words - -if __name__=='__main__': - result = is_iambic('Shall I compare thee to a summer day') - result2 = count_syllables('Shall I compare thee to a summer day') - import pdb; pdb.set_trace() \ No newline at end of file diff --git 
"a/spaces/Fengbinbin/gpt-academic/crazy_functions/\347\220\206\350\247\243PDF\346\226\207\346\241\243\345\206\205\345\256\271.py" "b/spaces/Fengbinbin/gpt-academic/crazy_functions/\347\220\206\350\247\243PDF\346\226\207\346\241\243\345\206\205\345\256\271.py" deleted file mode 100644 index 505086455af8d2676055ab084cf97058b954c7d5..0000000000000000000000000000000000000000 --- "a/spaces/Fengbinbin/gpt-academic/crazy_functions/\347\220\206\350\247\243PDF\346\226\207\346\241\243\345\206\205\345\256\271.py" +++ /dev/null @@ -1,112 +0,0 @@ -from toolbox import update_ui -from toolbox import CatchException, report_execption -from .crazy_utils import read_and_clean_pdf_text -from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive -fast_debug = False - - -def 解析PDF(file_name, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt): - import tiktoken - print('begin analysis on:', file_name) - - ############################## <第 0 步,切割PDF> ################################## - # 递归地切割PDF文件,每一块(尽量是完整的一个section,比如introduction,experiment等,必要时再进行切割) - # 的长度必须小于 2500 个 Token - file_content, page_one = read_and_clean_pdf_text(file_name) # (尝试)按照章节切割PDF - - TOKEN_LIMIT_PER_FRAGMENT = 2500 - - from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf - from request_llm.bridge_all import model_info - enc = model_info["gpt-3.5-turbo"]['tokenizer'] - def get_token_num(txt): return len(enc.encode(txt, disallowed_special=())) - paper_fragments = breakdown_txt_to_satisfy_token_limit_for_pdf( - txt=file_content, get_token_fn=get_token_num, limit=TOKEN_LIMIT_PER_FRAGMENT) - page_one_fragments = breakdown_txt_to_satisfy_token_limit_for_pdf( - txt=str(page_one), get_token_fn=get_token_num, limit=TOKEN_LIMIT_PER_FRAGMENT//4) - # 为了更好的效果,我们剥离Introduction之后的部分(如果有) - paper_meta = page_one_fragments[0].split('introduction')[0].split('Introduction')[0].split('INTRODUCTION')[0] - - ############################## <第 1 步,从摘要中提取高价值信息,放到history中> ################################## - final_results = [] - final_results.append(paper_meta) - - ############################## <第 2 步,迭代地历遍整个文章,提取精炼信息> ################################## - i_say_show_user = f'首先你在英文语境下通读整篇论文。'; gpt_say = "[Local Message] 收到。" # 用户提示 - chatbot.append([i_say_show_user, gpt_say]); yield from update_ui(chatbot=chatbot, history=[]) # 更新UI - - iteration_results = [] - last_iteration_result = paper_meta # 初始值是摘要 - MAX_WORD_TOTAL = 4096 - n_fragment = len(paper_fragments) - if n_fragment >= 20: print('文章极长,不能达到预期效果') - for i in range(n_fragment): - NUM_OF_WORD = MAX_WORD_TOTAL // n_fragment - i_say = f"Read this section, recapitulate the content of this section with less than {NUM_OF_WORD} words: {paper_fragments[i]}" - i_say_show_user = f"[{i+1}/{n_fragment}] Read this section, recapitulate the content of this section with less than {NUM_OF_WORD} words: {paper_fragments[i][:200]}" - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(i_say, i_say_show_user, # i_say=真正给chatgpt的提问, i_say_show_user=给用户看的提问 - llm_kwargs, chatbot, - history=["The main idea of the previous section is?", last_iteration_result], # 迭代上一次的结果 - sys_prompt="Extract the main idea of this section." 
# 提示 - ) - iteration_results.append(gpt_say) - last_iteration_result = gpt_say - - ############################## <第 3 步,整理history> ################################## - final_results.extend(iteration_results) - final_results.append(f'接下来,你是一名专业的学术教授,利用以上信息,使用中文回答我的问题。') - # 接下来两句话只显示在界面上,不起实际作用 - i_say_show_user = f'接下来,你是一名专业的学术教授,利用以上信息,使用中文回答我的问题。'; gpt_say = "[Local Message] 收到。" - chatbot.append([i_say_show_user, gpt_say]) - - ############################## <第 4 步,设置一个token上限,防止回答时Token溢出> ################################## - from .crazy_utils import input_clipping - _, final_results = input_clipping("", final_results, max_token_limit=3200) - yield from update_ui(chatbot=chatbot, history=final_results) # 注意这里的历史记录被替代了 - - -@CatchException -def 理解PDF文档内容标准文件输入(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - import glob, os - - # 基本信息:功能、贡献者 - chatbot.append([ - "函数插件功能?", - "理解PDF论文内容,并且将结合上下文内容,进行学术解答。函数插件贡献者: Hanzoe, binary-husky"]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - # 尝试导入依赖,如果缺少依赖,则给出安装建议 - try: - import fitz - except: - report_execption(chatbot, history, - a = f"解析项目: {txt}", - b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pymupdf```。") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - # 清空历史,以免输入溢出 - history = [] - - # 检测输入参数,如没有给定输入参数,直接退出 - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": - txt = '空空如也的输入栏' - report_execption(chatbot, history, - a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - # 搜索需要处理的文件清单 - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.pdf', recursive=True)] - # 如果没找到任何文件 - if len(file_manifest) == 0: - report_execption(chatbot, history, - a=f"解析项目: {txt}", b=f"找不到任何.tex或.pdf文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - txt = file_manifest[0] - # 开始正式执行任务 - yield from 解析PDF(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt) diff --git a/spaces/FlippFuzz/whisper-webui/tests/vad_test.py b/spaces/FlippFuzz/whisper-webui/tests/vad_test.py deleted file mode 100644 index b465d8a380f9316a6830d9aac320c85f22aba0a0..0000000000000000000000000000000000000000 --- a/spaces/FlippFuzz/whisper-webui/tests/vad_test.py +++ /dev/null @@ -1,66 +0,0 @@ -import pprint -import unittest -import numpy as np -import sys - -sys.path.append('../whisper-webui') - -from src.vad import AbstractTranscription, TranscriptionConfig, VadSileroTranscription - -class TestVad(unittest.TestCase): - def __init__(self, *args, **kwargs): - super(TestVad, self).__init__(*args, **kwargs) - self.transcribe_calls = [] - - def test_transcript(self): - mock = MockVadTranscription() - - self.transcribe_calls.clear() - result = mock.transcribe("mock", lambda segment : self.transcribe_segments(segment)) - - self.assertListEqual(self.transcribe_calls, [ - [30, 30], - [100, 100] - ]) - - self.assertListEqual(result['segments'], - [{'end': 50.0, 'start': 40.0, 'text': 'Hello world '}, - {'end': 120.0, 'start': 110.0, 'text': 'Hello world '}] - ) - - def transcribe_segments(self, segment): - self.transcribe_calls.append(segment.tolist()) - - # Dummy text - return { - 'text': "Hello world ", - 'segments': [ - { - "start": 10.0, - "end": 20.0, - "text": "Hello world " - } - ], - 'language': "" - } - -class MockVadTranscription(AbstractTranscription): - def __init__(self): - super().__init__() - - def get_audio_segment(self, str, start_time: str = None, 
duration: str = None): - start_time_seconds = float(start_time.removesuffix("s")) - duration_seconds = float(duration.removesuffix("s")) - - # For mocking, this just returns a simple numpy array - return np.array([start_time_seconds, duration_seconds], dtype=np.float64) - - def get_transcribe_timestamps(self, audio: str, config: TranscriptionConfig, start_time: float, duration: float): - result = [] - - result.append( { 'start': 30, 'end': 60 } ) - result.append( { 'start': 100, 'end': 200 } ) - return result - -if __name__ == '__main__': - unittest.main() \ No newline at end of file diff --git a/spaces/Flux9665/Blizzard2023IMS/app.py b/spaces/Flux9665/Blizzard2023IMS/app.py deleted file mode 100644 index cbb0909ed1bec06923248af634a48d4494c59ce6..0000000000000000000000000000000000000000 --- a/spaces/Flux9665/Blizzard2023IMS/app.py +++ /dev/null @@ -1,21 +0,0 @@ -import os -import torch -from flair.models import SequenceTagger -import flair -from pathlib import Path - - -flair.device = torch.device('cpu') -os.makedirs("Corpora/.flair/models/pos-french-camembert-flair", exist_ok=True) -flair.cache_root = Path(f"./Corpora/.flair") -pos_tagger = SequenceTagger.load("qanastek/pos-french-camembert-flair") -del pos_tagger - -os.system("git clone --branch v2.b https://github.com/DigitalPhonetics/IMS-Toucan.git toucan_codebase") -os.system("mv toucan_codebase/* .") - -from run_model_downloader import download_models -from demo import Demo - -download_models() -Demo(gpu_id="cpu") diff --git a/spaces/FrankZxShen/so-vits-svc-models-pcr/modules/F0Predictor/PMF0Predictor.py b/spaces/FrankZxShen/so-vits-svc-models-pcr/modules/F0Predictor/PMF0Predictor.py deleted file mode 100644 index ccf4128436c5b7e5a3e720d4597bad0c622d0920..0000000000000000000000000000000000000000 --- a/spaces/FrankZxShen/so-vits-svc-models-pcr/modules/F0Predictor/PMF0Predictor.py +++ /dev/null @@ -1,83 +0,0 @@ -from modules.F0Predictor.F0Predictor import F0Predictor -import parselmouth -import numpy as np - -class PMF0Predictor(F0Predictor): - def __init__(self,hop_length=512,f0_min=50,f0_max=1100,sampling_rate=44100): - self.hop_length = hop_length - self.f0_min = f0_min - self.f0_max = f0_max - self.sampling_rate = sampling_rate - - - def interpolate_f0(self,f0): - ''' - Interpolate the F0 contour over unvoiced frames - ''' - - data = np.reshape(f0, (f0.size, 1)) - - vuv_vector = np.zeros((data.size, 1), dtype=np.float32) - vuv_vector[data > 0.0] = 1.0 - vuv_vector[data <= 0.0] = 0.0 - - ip_data = data - - frame_number = data.size - last_value = 0.0 - for i in range(frame_number): - if data[i] <= 0.0: - j = i + 1 - for j in range(i + 1, frame_number): - if data[j] > 0.0: - break - if j < frame_number - 1: - if last_value > 0.0: - step = (data[j] - data[i - 1]) / float(j - i) - for k in range(i, j): - ip_data[k] = data[i - 1] + step * (k - i + 1) - else: - for k in range(i, j): - ip_data[k] = data[j] - else: - for k in range(i, frame_number): - ip_data[k] = last_value - else: - ip_data[i] = data[i] # there may be an unnecessary copy here - last_value = data[i] - - return ip_data[:,0], vuv_vector[:,0] - - def compute_f0(self,wav,p_len=None): - x = wav - if p_len is None: - p_len = x.shape[0]//self.hop_length - else: - assert abs(p_len-x.shape[0]//self.hop_length) < 4, "pad length error" - time_step = self.hop_length / self.sampling_rate * 1000 - f0 = parselmouth.Sound(x, self.sampling_rate).to_pitch_ac( - time_step=time_step / 1000, voicing_threshold=0.6, - pitch_floor=self.f0_min, pitch_ceiling=self.f0_max).selected_array['frequency'] - - pad_size=(p_len - len(f0) + 1) // 2 - if(pad_size>0 or
p_len - len(f0) - pad_size>0): - f0 = np.pad(f0,[[pad_size,p_len - len(f0) - pad_size]], mode='constant') - f0,uv = self.interpolate_f0(f0) - return f0 - - def compute_f0_uv(self,wav,p_len=None): - x = wav - if p_len is None: - p_len = x.shape[0]//self.hop_length - else: - assert abs(p_len-x.shape[0]//self.hop_length) < 4, "pad length error" - time_step = self.hop_length / self.sampling_rate * 1000 - f0 = parselmouth.Sound(x, self.sampling_rate).to_pitch_ac( - time_step=time_step / 1000, voicing_threshold=0.6, - pitch_floor=self.f0_min, pitch_ceiling=self.f0_max).selected_array['frequency'] - - pad_size=(p_len - len(f0) + 1) // 2 - if(pad_size>0 or p_len - len(f0) - pad_size>0): - f0 = np.pad(f0,[[pad_size,p_len - len(f0) - pad_size]], mode='constant') - f0,uv = self.interpolate_f0(f0) - return f0,uv diff --git a/spaces/FrankZxShen/vits-fast-fineturning-models-ba/text/korean.py b/spaces/FrankZxShen/vits-fast-fineturning-models-ba/text/korean.py deleted file mode 100644 index edee07429a450c55e3d8e246997faaa1e0b89cc9..0000000000000000000000000000000000000000 --- a/spaces/FrankZxShen/vits-fast-fineturning-models-ba/text/korean.py +++ /dev/null @@ -1,210 +0,0 @@ -import re -from jamo import h2j, j2hcj -import ko_pron - - -# This is a list of Korean classifiers preceded by pure Korean numerals. -_korean_classifiers = '군데 권 개 그루 닢 대 두 마리 모 모금 뭇 발 발짝 방 번 벌 보루 살 수 술 시 쌈 움큼 정 짝 채 척 첩 축 켤레 톨 통' - -# List of (hangul, hangul divided) pairs: -_hangul_divided = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ㄳ', 'ㄱㅅ'), - ('ㄵ', 'ㄴㅈ'), - ('ㄶ', 'ㄴㅎ'), - ('ㄺ', 'ㄹㄱ'), - ('ㄻ', 'ㄹㅁ'), - ('ㄼ', 'ㄹㅂ'), - ('ㄽ', 'ㄹㅅ'), - ('ㄾ', 'ㄹㅌ'), - ('ㄿ', 'ㄹㅍ'), - ('ㅀ', 'ㄹㅎ'), - ('ㅄ', 'ㅂㅅ'), - ('ㅘ', 'ㅗㅏ'), - ('ㅙ', 'ㅗㅐ'), - ('ㅚ', 'ㅗㅣ'), - ('ㅝ', 'ㅜㅓ'), - ('ㅞ', 'ㅜㅔ'), - ('ㅟ', 'ㅜㅣ'), - ('ㅢ', 'ㅡㅣ'), - ('ㅑ', 'ㅣㅏ'), - ('ㅒ', 'ㅣㅐ'), - ('ㅕ', 'ㅣㅓ'), - ('ㅖ', 'ㅣㅔ'), - ('ㅛ', 'ㅣㅗ'), - ('ㅠ', 'ㅣㅜ') -]] - -# List of (Latin alphabet, hangul) pairs: -_latin_to_hangul = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('a', '에이'), - ('b', '비'), - ('c', '시'), - ('d', '디'), - ('e', '이'), - ('f', '에프'), - ('g', '지'), - ('h', '에이치'), - ('i', '아이'), - ('j', '제이'), - ('k', '케이'), - ('l', '엘'), - ('m', '엠'), - ('n', '엔'), - ('o', '오'), - ('p', '피'), - ('q', '큐'), - ('r', '아르'), - ('s', '에스'), - ('t', '티'), - ('u', '유'), - ('v', '브이'), - ('w', '더블유'), - ('x', '엑스'), - ('y', '와이'), - ('z', '제트') -]] - -# List of (ipa, lazy ipa) pairs: -_ipa_to_lazy_ipa = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('t͡ɕ','ʧ'), - ('d͡ʑ','ʥ'), - ('ɲ','n^'), - ('ɕ','ʃ'), - ('ʷ','w'), - ('ɭ','l`'), - ('ʎ','ɾ'), - ('ɣ','ŋ'), - ('ɰ','ɯ'), - ('ʝ','j'), - ('ʌ','ə'), - ('ɡ','g'), - ('\u031a','#'), - ('\u0348','='), - ('\u031e',''), - ('\u0320',''), - ('\u0339','') -]] - - -def latin_to_hangul(text): - for regex, replacement in _latin_to_hangul: - text = re.sub(regex, replacement, text) - return text - - -def divide_hangul(text): - text = j2hcj(h2j(text)) - for regex, replacement in _hangul_divided: - text = re.sub(regex, replacement, text) - return text - - -def hangul_number(num, sino=True): - '''Reference https://github.com/Kyubyong/g2pK''' - num = re.sub(',', '', num) - - if num == '0': - return '영' - if not sino and num == '20': - return '스무' - - digits = '123456789' - names = '일이삼사오육칠팔구' - digit2name = {d: n for d, n in zip(digits, names)} - - modifiers = '한 두 세 네 다섯 여섯 일곱 여덟 아홉' - decimals = '열 스물 서른 마흔 쉰 예순 일흔 여든 아흔' - digit2mod = {d: mod for d, mod in zip(digits, modifiers.split())} - digit2dec = {d: dec for d, dec in zip(digits, 
decimals.split())} - - spelledout = [] - for i, digit in enumerate(num): - i = len(num) - i - 1 - if sino: - if i == 0: - name = digit2name.get(digit, '') - elif i == 1: - name = digit2name.get(digit, '') + '십' - name = name.replace('일십', '십') - else: - if i == 0: - name = digit2mod.get(digit, '') - elif i == 1: - name = digit2dec.get(digit, '') - if digit == '0': - if i % 4 == 0: - last_three = spelledout[-min(3, len(spelledout)):] - if ''.join(last_three) == '': - spelledout.append('') - continue - else: - spelledout.append('') - continue - if i == 2: - name = digit2name.get(digit, '') + '백' - name = name.replace('일백', '백') - elif i == 3: - name = digit2name.get(digit, '') + '천' - name = name.replace('일천', '천') - elif i == 4: - name = digit2name.get(digit, '') + '만' - name = name.replace('일만', '만') - elif i == 5: - name = digit2name.get(digit, '') + '십' - name = name.replace('일십', '십') - elif i == 6: - name = digit2name.get(digit, '') + '백' - name = name.replace('일백', '백') - elif i == 7: - name = digit2name.get(digit, '') + '천' - name = name.replace('일천', '천') - elif i == 8: - name = digit2name.get(digit, '') + '억' - elif i == 9: - name = digit2name.get(digit, '') + '십' - elif i == 10: - name = digit2name.get(digit, '') + '백' - elif i == 11: - name = digit2name.get(digit, '') + '천' - elif i == 12: - name = digit2name.get(digit, '') + '조' - elif i == 13: - name = digit2name.get(digit, '') + '십' - elif i == 14: - name = digit2name.get(digit, '') + '백' - elif i == 15: - name = digit2name.get(digit, '') + '천' - spelledout.append(name) - return ''.join(elem for elem in spelledout) - - -def number_to_hangul(text): - '''Reference https://github.com/Kyubyong/g2pK''' - tokens = set(re.findall(r'(\d[\d,]*)([\uac00-\ud71f]+)', text)) - for token in tokens: - num, classifier = token - if classifier[:2] in _korean_classifiers or classifier[0] in _korean_classifiers: - spelledout = hangul_number(num, sino=False) - else: - spelledout = hangul_number(num, sino=True) - text = text.replace(f'{num}{classifier}', f'{spelledout}{classifier}') - # digit by digit for remaining digits - digits = '0123456789' - names = '영일이삼사오육칠팔구' - for d, n in zip(digits, names): - text = text.replace(d, n) - return text - - -def korean_to_lazy_ipa(text): - text = latin_to_hangul(text) - text = number_to_hangul(text) - text=re.sub('[\uac00-\ud7af]+',lambda x:ko_pron.romanise(x.group(0),'ipa').split('] ~ [')[0],text) - for regex, replacement in _ipa_to_lazy_ipa: - text = re.sub(regex, replacement, text) - return text - - -def korean_to_ipa(text): - text = korean_to_lazy_ipa(text) - return text.replace('ʧ','tʃ').replace('ʥ','dʑ') diff --git a/spaces/FridaZuley/RVC_HFKawaii/train/losses.py b/spaces/FridaZuley/RVC_HFKawaii/train/losses.py deleted file mode 100644 index b89038f14d06d7fae43628183e9ffb465e4edafd..0000000000000000000000000000000000000000 --- a/spaces/FridaZuley/RVC_HFKawaii/train/losses.py +++ /dev/null @@ -1,59 +0,0 @@ -import torch -from torch.nn import functional as F - - -def feature_loss(fmap_r, fmap_g): - loss = 0 - for dr, dg in zip(fmap_r, fmap_g): - for rl, gl in zip(dr, dg): - rl = rl.float().detach() - gl = gl.float() - loss += torch.mean(torch.abs(rl - gl)) - - return loss * 2 - - -def discriminator_loss(disc_real_outputs, disc_generated_outputs): - loss = 0 - r_losses = [] - g_losses = [] - for dr, dg in zip(disc_real_outputs, disc_generated_outputs): - dr = dr.float() - dg = dg.float() - r_loss = torch.mean((1 - dr) ** 2) - g_loss = torch.mean(dg**2) - loss += r_loss + g_loss - 
r_losses.append(r_loss.item()) - g_losses.append(g_loss.item()) - - return loss, r_losses, g_losses - - -def generator_loss(disc_outputs): - loss = 0 - gen_losses = [] - for dg in disc_outputs: - dg = dg.float() - l = torch.mean((1 - dg) ** 2) - gen_losses.append(l) - loss += l - - return loss, gen_losses - - -def kl_loss(z_p, logs_q, m_p, logs_p, z_mask): - """ - z_p, logs_q: [b, h, t_t] - m_p, logs_p: [b, h, t_t] - """ - z_p = z_p.float() - logs_q = logs_q.float() - m_p = m_p.float() - logs_p = logs_p.float() - z_mask = z_mask.float() - - kl = logs_p - logs_q - 0.5 - kl += 0.5 * ((z_p - m_p) ** 2) * torch.exp(-2.0 * logs_p) - kl = torch.sum(kl * z_mask) - l = kl / torch.sum(z_mask) - return l diff --git a/spaces/FridaZuley/RVC_HFKawaii/utils/README.md b/spaces/FridaZuley/RVC_HFKawaii/utils/README.md deleted file mode 100644 index fb45a36b5909585aa964f2033762ee59b55526b0..0000000000000000000000000000000000000000 --- a/spaces/FridaZuley/RVC_HFKawaii/utils/README.md +++ /dev/null @@ -1,6 +0,0 @@ -# External Colab Code -Code used to make Google Colab work correctly -- Repo link: https://github.com/IAHispano/Applio-RVC-Fork/ - -Thanks to https://github.com/kalomaze/externalcolabcode - diff --git a/spaces/GEM/submission-form/app.py b/spaces/GEM/submission-form/app.py deleted file mode 100644 index bcf2f67b6c71fb2c77d954f9e905afe292873608..0000000000000000000000000000000000000000 --- a/spaces/GEM/submission-form/app.py +++ /dev/null @@ -1,302 +0,0 @@ -import json -import os -import shutil -import uuid -from datetime import datetime -from pathlib import Path - -import jsonlines -import streamlit as st -from dotenv import load_dotenv -from huggingface_hub import Repository, cached_download, hf_hub_url - -from utils import http_get, http_post, validate_json - -if Path(".env").is_file(): - load_dotenv(".env") - -HF_TOKEN = os.getenv("HF_TOKEN") -AUTOTRAIN_USERNAME = os.getenv("AUTOTRAIN_USERNAME") -AUTOTRAIN_BACKEND_API = os.getenv("AUTOTRAIN_BACKEND_API") -LOCAL_REPO = "submission_repo" -LOGS_REPO = "submission-logs" - -# TODO -# 1. Add check that fields are nested under `tasks` field correctly -# 2. Add check that names of tasks and datasets are valid - -MARKDOWN = """--- -benchmark: gem -type: prediction -submission_name: {submission_name} -tags: -- evaluation -- benchmark ---- -# GEM Submission - -Submission name: {submission_name} - -""" - - -def generate_dataset_card(submission_name): - """ - Generate dataset card for the submission - """ - markdown = MARKDOWN.format( - submission_name=submission_name, - ) - with open(os.path.join(LOCAL_REPO, "README.md"), "w") as f: - f.write(markdown) - - -def load_json(path): - with open(path, "r") as f: - return json.load(f) - - -def get_submission_names(): - """Download all submission names. - - The GEM frontend requires the submission names to be unique, so here we - download all submission names and use them as a check against the user - submissions. - """ - scores_url = hf_hub_url("GEM-submissions/submission-scores", "scores.json", repo_type="dataset") - scores_filepath = cached_download(scores_url, force_download=True) - scores_data = load_json(scores_filepath) - return [score["submission_name"] for score in scores_data] - - -####### -# APP # -####### -st.title("GEM Submissions") -st.markdown( - """ - Welcome to the [GEM benchmark](https://gem-benchmark.com/)! GEM is a benchmark - environment for Natural Language Generation with a focus on its Evaluation, both - through human annotations and automated Metrics. 
- - GEM aims to: - - - measure NLG progress across many NLG tasks across languages. - - audit data and models and present results via data cards and model robustness - reports. - - develop standards for evaluation of generated text using both automated and - human metrics. - - Use this page to submit your system's predictions to the benchmark. - """ -) - -with st.form(key="form"): - # Flush local repo - shutil.rmtree(LOCAL_REPO, ignore_errors=True) - submission_errors = 0 - uploaded_file = st.file_uploader("Upload submission file", type=["json"]) - - if uploaded_file: - data = str(uploaded_file.read(), "utf-8") - json_data = json.loads(data) - submission_names = get_submission_names() - submission_name = json_data["submission_name"] - if submission_name in submission_names: - st.error(f"🙈 Submission name `{submission_name}` is already taken. Please rename your submission.") - submission_errors += 1 - else: - is_valid, message = validate_json(json_data) - if is_valid: - st.success(message) - else: - st.error(message) - submission_errors += 1 - - with st.expander("Submission format"): - st.markdown( - """ - Please follow this JSON format for your `submission.json` file: - - ```json - { - "submission_name": "An identifying name of your system", - "param_count": 123, # The number of parameters your system has. - "description": "An optional brief description of the system that will be shown on the results page", - "tasks": - { - "dataset_identifier": { - "values": ["output-0", "output-1", "..."], # A list of system outputs. - "keys": ["gem_id-0", "gem_id-1", ...] # A list of GEM IDs. - } - } - } - ``` - Here, `dataset_identifier` is the identifier of the dataset followed by - an identifier of the set the outputs were created from, for example - `_validation` or `_test`. For example, the `mlsum_de` test set has the - identifier `mlsum_de_test`. The `keys` field is needed to avoid - accidental shuffling that will impact your metrics. Simply add a list of - IDs from the `gem_id` column of each evaluation dataset in the same - order as your values. 
Please see the sample submission below: - """ - ) - with open("sample-submission.json", "r") as f: - example_submission = json.load(f) - st.json(example_submission) - - user_name = st.text_input("Enter your 🤗 Hub username", help="This field is required to track your submission and cannot be empty") - submit_button = st.form_submit_button("Make Submission") - -if submit_button and submission_errors == 0: - with st.spinner("⏳ Preparing submission for evaluation ..."): - submission_name = json_data["submission_name"] - submission_name_formatted = submission_name.lower().replace(" ", "-").replace("/", "-") - submission_time = str(int(datetime.now().timestamp())) - - # Create submission dataset under benchmarks ORG - submission_repo_id = f"GEM-submissions/{user_name}__{submission_name_formatted}__{submission_time}" - dataset_repo_url = f"https://huggingface.co/datasets/{submission_repo_id}" - repo = Repository( - local_dir=LOCAL_REPO, - clone_from=dataset_repo_url, - repo_type="dataset", - private=False, - use_auth_token=HF_TOKEN, - ) - generate_dataset_card(submission_name) - - with open(f"{LOCAL_REPO}/submission.json", "w", encoding="utf-8") as f: - json.dump(json_data, f) - - # TODO: add informative commit msg - commit_url = repo.push_to_hub() - if commit_url is not None: - commit_sha = commit_url.split("/")[-1] - else: - commit_sha = repo.git_head_commit_url().split("/")[-1] - - submission_id = submission_name + "__" + str(uuid.uuid4())[:6] + "__" + submission_time - - # Define AutoTrain payload - project_config = {} - # Need a dummy dataset to use the dataset loader in AutoTrain - project_config["dataset_name"] = "lewtun/imdb-dummy" - project_config["dataset_config"] = "lewtun--imdb-dummy" - project_config["dataset_split"] = "train" - project_config["col_mapping"] = {"text": "text", "label": "target"} - # Specify benchmark parameters - project_config["model"] = "gem" - project_config["dataset"] = "GEM/references" - project_config["submission_dataset"] = submission_repo_id - project_id = str(uuid.uuid4()).split("-")[0] - project_payload = { - "username": AUTOTRAIN_USERNAME, - "proj_name": f"benchmark-gem-{project_id}", - "task": 1, - "config": { - "language": "en", - "max_models": 5, - "instance": { - "provider": "aws", - "instance_type": "ml.g4dn.4xlarge", - "max_runtime_seconds": 172800, - "num_instances": 1, - "disk_size_gb": 150, - }, - "benchmark": { - "dataset": project_config["dataset"], - "model": project_config["model"], - "submission_dataset": project_config["submission_dataset"], - }, - }, - } - project_json_resp = http_post( - path="/projects/create", payload=project_payload, token=HF_TOKEN, domain=AUTOTRAIN_BACKEND_API - ).json() - print(f"Project creation: {project_json_resp}") - - # Upload data - payload = { - "split": 4, - "col_mapping": project_config["col_mapping"], - "load_config": {"max_size_bytes": 0, "shuffle": False}, - } - data_json_resp = http_post( - path=f"/projects/{project_json_resp['id']}/data/{project_config['dataset_name']}", - payload=payload, - token=HF_TOKEN, - domain=AUTOTRAIN_BACKEND_API, - params={ - "type": "dataset", - "config_name": project_config["dataset_config"], - "split_name": project_config["dataset_split"], - }, - ).json() - print(f"Dataset creation: {data_json_resp}") - - # Run training - train_json_resp = http_get( - path=f"/projects/{project_json_resp['id']}/data/start_process", - token=HF_TOKEN, - domain=AUTOTRAIN_BACKEND_API, - ).json() - print(f"Training job response: {train_json_resp}") - - logs_repo_url = 
f"https://huggingface.co/datasets/GEM-submissions/{LOGS_REPO}" - logs_repo = Repository( - local_dir=LOGS_REPO, - clone_from=logs_repo_url, - repo_type="dataset", - private=True, - use_auth_token=HF_TOKEN, - ) - evaluation_log = {} - evaluation_log["payload"] = project_payload - evaluation_log["project_creation_response"] = project_json_resp - evaluation_log["dataset_creation_response"] = data_json_resp - evaluation_log["autotrain_job_response"] = train_json_resp - with jsonlines.open(f"{LOGS_REPO}/logs.jsonl") as r: - lines = [] - for obj in r: - lines.append(obj) - - lines.append(evaluation_log) - with jsonlines.open(f"{LOGS_REPO}/logs.jsonl", mode="w") as writer: - for job in lines: - writer.write(job) - logs_repo.push_to_hub(commit_message=f"Submission with job ID {project_json_resp['id']}") - - if train_json_resp["success"] == 1: - st.success( - f"✅ Submission {submission_name} was successfully submitted for evaluation!" - ) - st.markdown( - f""" - Evaluation can take up to 1 hour to complete, so grab a ☕ or 🍵 while you wait: - - * 📊 Click [here](https://huggingface.co/spaces/GEM/results) to view the results from your submission - * 💾 Click [here]({dataset_repo_url}) to view your submission file on the Hugging Face Hub - - Please [contact the organisers](mailto:gehrmann@google.com) if you would like your submission and/or evaluation scores deleted. - """ - ) - else: - st.error( - "🙈 Oh noes, there was an error submitting your submission! Please [contact the organisers](mailto:gehrmann@google.com)" - ) - - # # Flush local repos - shutil.rmtree(LOCAL_REPO, ignore_errors=True) - shutil.rmtree(LOGS_REPO, ignore_errors=True) - - -with st.expander("Download all submissions and scores"): - st.markdown("Click the button below if you'd like to download all the submissions and evaluations from GEM:") - outputs_url = hf_hub_url( - "GEM-submissions/v2-outputs-and-scores", "gem-v2-outputs-and-scores.zip", repo_type="dataset" - ) - outputs_filepath = cached_download(outputs_url) - - with open(outputs_filepath, "rb") as f: - btn = st.download_button(label="Download submissions and scores", data=f, file_name="outputs-and-scores.zip") diff --git a/spaces/GT4SD/keyword_bert/model_cards/description.md b/spaces/GT4SD/keyword_bert/model_cards/description.md deleted file mode 100644 index be423d5c5038e7bbbc12267a29ac94916f96a385..0000000000000000000000000000000000000000 --- a/spaces/GT4SD/keyword_bert/model_cards/description.md +++ /dev/null @@ -1,6 +0,0 @@ -logo - -[KeywordBERT](https://github.com/MaartenGr/KeyBERT) is a minimal and easy-to-use keyword extraction technique that leverages BERT embeddings to create keywords and keyphrases that are most similar to a document. - -For **examples** and **documentation** of the model parameters, please see below. -Moreover, we provide a **model card** ([Mitchell et al. (2019)](https://dl.acm.org/doi/abs/10.1145/3287560.3287596?casa_token=XD4eHiE2cRUAAAAA:NL11gMa1hGPOUKTAbtXnbVQBDBbjxwcjGECF_i-WC_3g1aBgU1Hbz_f2b4kI_m1in-w__1ztGeHnwHs)) at the bottom of this page. 
diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/color_coordinated_block_bridge.py b/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/color_coordinated_block_bridge.py deleted file mode 100644 index 67344d749436fc0bb504dc12015ae6ece14a68f9..0000000000000000000000000000000000000000 --- a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/color_coordinated_block_bridge.py +++ /dev/null @@ -1,63 +0,0 @@ -import numpy as np -import os -import pybullet as p -import random -from cliport.tasks import primitives -from cliport.tasks.grippers import Spatula -from cliport.tasks.task import Task -from cliport.utils import utils -import numpy as np -from cliport.tasks.task import Task -from cliport.utils import utils - -class ColorCoordinatedBlockBridge(Task): - """Construct a bridge by interleaving three differently colored blocks (red, blue, and green) on a pallet in a specific sequence - red block at the edges, blue block in the middle, and a green block on top of the red and blue blocks. Repeat this sequence until a bridge is formed across the length of the pallet.""" - - def __init__(self): - super().__init__() - self.max_steps = 20 - self.lang_template = "construct a bridge by interleaving three differently colored blocks (red, blue, and green) on a pallet in a specific sequence" - self.task_completed_desc = "done constructing the bridge." - self.additional_reset() - - def reset(self, env): - super().reset(env) - - # Add pallet. - pallet_size = (0.30, 0.15, 0.02) - pallet_pose = self.get_random_pose(env, pallet_size) - env.add_object('pallet/pallet.urdf', pallet_pose, 'fixed') - - # Block colors. - colors = [utils.COLORS['red'], utils.COLORS['blue'], utils.COLORS['green']] - - # Add blocks. - block_size = (0.04, 0.04, 0.04) - block_urdf = 'block/block.urdf' - - objs = [] - for i in range(9): # 3 sets of 3 colored blocks - block_pose = self.get_random_pose(env, block_size) - block_id = env.add_object(block_urdf, block_pose, color=colors[i % 3]) - objs.append(block_id) - - # Associate placement locations for goals. - place_pos = [(0, -0.05, 0.02), (0, 0, 0.02), (0, 0.05, 0.02), # bottom layer - (0, -0.05, 0.06), (0, 0, 0.06), (0, 0.05, 0.06), # middle layer - (0, -0.05, 0.10), (0, 0, 0.10), (0, 0.05, 0.10)] # top layer - targs = [(utils.apply(pallet_pose, i), pallet_pose[1]) for i in place_pos] - - # Goal: blocks are stacked in a bridge (bottom layer: red, blue, red). - self.add_goal(objs=objs[:3], matches=np.ones((3, 3)), targ_poses=targs[:3], replace=False, - rotations=True, metric='pose', params=None, step_max_reward=1 / 3, symmetries=[np.pi/2]*3, - language_goal=self.lang_template) - - # Goal: blocks are stacked in a bridge (middle layer: green, green, green). - self.add_goal(objs=objs[3:6], matches=np.ones((3, 3)), targ_poses=targs[3:6], replace=False, - rotations=True, metric='pose', params=None, step_max_reward=1 / 3, symmetries=[np.pi/2]*3, - language_goal=self.lang_template) - - # Goal: blocks are stacked in a bridge (top layer: red, blue, red). 
- self.add_goal(objs=objs[6:], matches=np.ones((3, 3)), targ_poses=targs[6:], replace=False, - rotations=True, metric='pose', params=None, step_max_reward=1 / 3, symmetries=[np.pi/2]*3, - language_goal=self.lang_template) diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/stack_color_coordinated_blocks.py b/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/stack_color_coordinated_blocks.py deleted file mode 100644 index 27cc15a0e8e1145cea0fa3ca03d2b6965816d8c1..0000000000000000000000000000000000000000 --- a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/stack_color_coordinated_blocks.py +++ /dev/null @@ -1,68 +0,0 @@ -import numpy as np -import os -import pybullet as p -import random -from cliport.tasks import primitives -from cliport.tasks.grippers import Spatula -from cliport.tasks.task import Task -from cliport.utils import utils -import numpy as np -from cliport.tasks.task import Task -from cliport.utils import utils - -class StackColorCoordinatedBlocks(Task): - """Pick up six blocks of different colors (red, blue, green, yellow, orange, and purple) - and stack them on a pallet in two separate stacks. The first stack should be red at the bottom, - blue in the middle, and green at top. The second stack should be yellow at the bottom, - orange in the middle, and purple at the top.""" - - def __init__(self): - super().__init__() - self.max_steps = 20 - self.lang_template = "stack the blocks on the pallet in two separate stacks. " \ - "The first stack should be red at the bottom, blue in the middle, " \ - "and green at top. The second stack should be yellow at the bottom, " \ - "orange in the middle, and purple at the top." - self.task_completed_desc = "done stacking color-coordinated blocks." - - def reset(self, env): - super().reset(env) - - # Add pallet. - # x, y, z dimensions for the asset size - pallet_size = (0.15, 0.15, 0.01) - pallet_urdf = 'pallet/pallet.urdf' - pallet_pose = self.get_random_pose(env, pallet_size) - env.add_object(pallet_urdf, pallet_pose, 'fixed') - - # Block colors. - colors = [ - utils.COLORS['red'], utils.COLORS['blue'], utils.COLORS['green'], - utils.COLORS['yellow'], utils.COLORS['orange'], utils.COLORS['purple'] - ] - - # Add blocks. - # x, y, z dimensions for the asset size - block_size = (0.04, 0.04, 0.04) - block_urdf = 'box/box-template.urdf' - blocks = [] - for i in range(6): - block_pose = self.get_random_pose(env, block_size) - block_id = env.add_object(block_urdf, block_pose, color=colors[i]) - blocks.append(block_id) - - # Associate placement locations for goals. - place_pos = [(0, -0.05, 0.02), (0, 0, 0.02), (0, 0.05, 0.02), - (0, -0.05, 0.06), (0, 0, 0.06), (0, 0.05, 0.06)] - targs = [(utils.apply(pallet_pose, i), pallet_pose[1]) for i in place_pos] - - # Goal: blocks are stacked on the pallet in two separate stacks. - # First stack: red at the bottom, blue in the middle, and green at top. - self.add_goal(objs=blocks[:3], matches=np.ones((3, 3)), targ_poses=targs[:3], replace=False, - rotations=True, metric='pose', params=None, step_max_reward=1 / 2, symmetries=[np.pi/2]*3, - language_goal=self.lang_template) - - # Second stack: yellow at the bottom, orange in the middle, and purple at the top. 
- self.add_goal(objs=blocks[3:], matches=np.ones((3, 3)), targ_poses=targs[3:], replace=False, - rotations=True, metric='pose', params=None, step_max_reward=1 / 2, symmetries=[np.pi/2]*3, - language_goal=self.lang_template) \ No newline at end of file diff --git a/spaces/Gen-Sim/Gen-Sim/misc/tsne_visualize_chatgpt_embeddings_for_task.py b/spaces/Gen-Sim/Gen-Sim/misc/tsne_visualize_chatgpt_embeddings_for_task.py deleted file mode 100644 index fc619b92f37457e2070cd7ce78a9ca6c11707df0..0000000000000000000000000000000000000000 --- a/spaces/Gen-Sim/Gen-Sim/misc/tsne_visualize_chatgpt_embeddings_for_task.py +++ /dev/null @@ -1,197 +0,0 @@ -import torch -import torch.nn -import torchvision.models as models -from copy import deepcopy -import cv2 - -import cv2 -import numpy as np -import sys -import itertools -import os -import IPython -import matplotlib -matplotlib.use("Agg") - -import matplotlib.pyplot as plt -import pandas as pd - -import openai -from sklearn.manifold import TSNE -from sklearn.decomposition import PCA, KernelPCA -import seaborn as sns - -import time -from matplotlib.offsetbox import OffsetImage, AnnotationBbox -import colorsys -from torchvision import datasets -import argparse -import matplotlib.patheffects as PathEffects -from sklearn.cluster import KMeans - - -sns.set_style("white") -sns.set_palette("muted") - -font = { - "size": 22, -} - -matplotlib.rc("font", **font) -sns.set_context("paper", font_scale=3.0) - - -plt_param = {'legend.fontsize': 60, - 'axes.labelsize': 80, - 'axes.titlesize':80, - 'font.size' : 80 , - 'xtick.labelsize':80, - 'ytick.labelsize':80, - 'lines.linewidth': 10, - 'lines.color': (0,0,0)} - -plt.rcParams.update(plt_param) - -openai.api_key ="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" -GPT_MODEL = "gpt4" -EMBEDDING_MODEL = "text-embedding-ada-002" - - -def normalize_numpy_array(arr): - return arr / (arr.max(axis=-1, keepdims=True) - arr.min(axis=-1, keepdims=True)) - -def fashion_scatter( - x, class_labels, fig_name, class_names, add_text=True -): - # choose a color palette with seaborn. - x = np.array(x) - class_labels = np.array(class_labels) - num_classes = np.max(class_labels) + 1 - - # create a scatter plot. 
- fig_size1, fig_size2 = 140 * 0.8, 80 * 0.6 - plt.clf() - plt.cla() - f = plt.figure(figsize=(fig_size1, fig_size2)) - ax = plt.subplot() - - # divide by a scale - # x = normalize_numpy_array(x) - for x_i in range(num_classes): - mask = class_labels == x_i - if mask.sum() > 0: - sc = ax.scatter( - x[mask, 0], - x[mask, 1], - lw=0, - s=1500, - label=class_names[x_i] - # c=rgb_color[mask], - ) # 40 - if add_text: - txts = [] - for i in range(len(class_names)): - xtext, ytext = x[i, :] # np.median(x[i, :], axis=0) - txt = ax.text(xtext, ytext, str(class_names[i]), fontsize=40) # 24 - txt.set_path_effects( - [PathEffects.Stroke(linewidth=5, foreground="w"), PathEffects.Normal()] - ) - txts.append(txt) - - # ax.legend(loc='upper left', bbox_to_anchor=(1, 1)) - ax.axis("on") - # ax.axis("tight") - plt.savefig(fig_name +".pdf") - plt.clf() - print("save figure to ", fig_name) - -def compute_embedding(response): - while True: - try: - print('ping openai api') - response_embedding = openai.Embedding.create( - model=EMBEDDING_MODEL, - input=response, - ) - - response_embedding = np.array(response_embedding["data"][0]['embedding']) - return response_embedding - except Exception as e: - print(e) - -def draw_latent_plot( - max_num=80, - method="pca+tsne", - fig_name="", -): - # query: (response, embeddings) - latents = [] - class_labels = [] - label_sets = [] - - # chatgpt embedding - total_tasks = [os.path.join("cliport/tasks", x) for x in os.listdir("cliport/tasks")] + [os.path.join("cliport/generated_tasks", x) for x in os.listdir("cliport/generated_tasks")] - total_tasks = [t for t in total_tasks if 'pycache' not in t and 'init' not in t \ - and 'README' not in t and 'extended' not in t and 'gripper' not in t and 'primitive' not in t\ - and 'task.py' not in t and 'camera' not in t and 'seq' not in t] - cache_embedding_path = "output/output_embedding/task_cache_embedding.npz" - cache_embedding = {} - - if os.path.exists(cache_embedding_path): - cache_embedding = dict(np.load(cache_embedding_path)) - - print(total_tasks) - - for idx, task_name in enumerate(total_tasks): - if task_name in cache_embedding: - code_embedding = cache_embedding[task_name] - else: - code = open(task_name).read() - code_embedding = compute_embedding(code) - - latents.append(code_embedding) - label_sets.append(task_name.split("/")[-1][:-3]) - cache_embedding[task_name] = code_embedding - class_labels.append(idx) - - latents = np.array(latents) - print("latents shape:", latents.shape) - np.savez(cache_embedding_path, **cache_embedding) - - n_clusters = 6 - kmeans = KMeans(n_clusters=n_clusters, init="k-means++", random_state=42) - kmeans.fit(latents) - cluster_labels = kmeans.labels_ - - if method == "pca+tsne": - # reduce dimension to the number of datapoints - pca = PCA(random_state=123, n_components=min(50, max_num)) # kernel PCA - - X_embedded = pca.fit_transform(latents) - print( - "Variance explained per principal component: {}".format( - pca.explained_variance_ratio_[:5] - ) - ) - print("PCA data shape:", X_embedded.shape) - X_embedded = TSNE(random_state=123, perplexity=20).fit_transform(X_embedded) - - if method == "pca": - pca = KernelPCA(random_state=123, n_components=2) # kernel PCA - X_embedded = pca.fit_transform(latents[:, :5]) - - if method == "tsne": - X_embedded = TSNE(random_state=123).fit_transform(latents) # perplexity - - fashion_scatter(X_embedded, class_labels, fig_name, label_sets) - fashion_scatter(X_embedded, cluster_labels, fig_name + "_cluster", label_sets) - - -if __name__ == "__main__": - parser 
= argparse.ArgumentParser(description="Generate chat-gpt embeddings") - """ - load task descriptions from the tasks folder and embed - """ - parser.add_argument("--file", type=str, default="task_embedding") - args = parser.parse_args() - draw_latent_plot(fig_name=f'output/output_embedding/{args.file}') \ No newline at end of file diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/ccnet/ccnet_r50-d8_512x1024_40k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/ccnet/ccnet_r50-d8_512x1024_40k_cityscapes.py deleted file mode 100644 index 6a4316dde57206fe369e72fa0d32a529fe1a1932..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/ccnet/ccnet_r50-d8_512x1024_40k_cityscapes.py +++ /dev/null @@ -1,4 +0,0 @@ -_base_ = [ - '../_base_/models/ccnet_r50-d8.py', '../_base_/datasets/cityscapes.py', - '../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py' -] diff --git a/spaces/GrandaddyShmax/AudioCraft_Plus/tests/models/test_musicgen.py b/spaces/GrandaddyShmax/AudioCraft_Plus/tests/models/test_musicgen.py deleted file mode 100644 index 65618a9e2ef5bb382694b50b23dd50958d590d4e..0000000000000000000000000000000000000000 --- a/spaces/GrandaddyShmax/AudioCraft_Plus/tests/models/test_musicgen.py +++ /dev/null @@ -1,58 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import pytest -import torch - -from audiocraft.models import MusicGen - - -class TestMusicGenModel: - def get_musicgen(self): - mg = MusicGen.get_pretrained(name='debug', device='cpu') - mg.set_generation_params(duration=2.0, extend_stride=2.) - return mg - - def test_base(self): - mg = self.get_musicgen() - assert mg.frame_rate == 25 - assert mg.sample_rate == 32000 - assert mg.audio_channels == 1 - - def test_generate_unconditional(self): - mg = self.get_musicgen() - wav = mg.generate_unconditional(3) - assert list(wav.shape) == [3, 1, 64000] - - def test_generate_continuation(self): - mg = self.get_musicgen() - prompt = torch.randn(3, 1, 32000) - wav = mg.generate_continuation(prompt, 32000) - assert list(wav.shape) == [3, 1, 64000] - - prompt = torch.randn(2, 1, 32000) - wav = mg.generate_continuation( - prompt, 32000, ['youpi', 'lapin dort']) - assert list(wav.shape) == [2, 1, 64000] - - prompt = torch.randn(2, 1, 32000) - with pytest.raises(AssertionError): - wav = mg.generate_continuation( - prompt, 32000, ['youpi', 'lapin dort', 'one too many']) - - def test_generate(self): - mg = self.get_musicgen() - wav = mg.generate( - ['youpi', 'lapin dort']) - assert list(wav.shape) == [2, 1, 64000] - - def test_generate_long(self): - mg = self.get_musicgen() - mg.max_duration = 3. - mg.set_generation_params(duration=4., extend_stride=2.) 
- wav = mg.generate( - ['youpi', 'lapin dort']) - assert list(wav.shape) == [2, 1, 32000 * 4] diff --git a/spaces/Grazon/ChitChat/README.md b/spaces/Grazon/ChitChat/README.md deleted file mode 100644 index cebcb6707599a03538e1da46060b83b11fa10859..0000000000000000000000000000000000000000 --- a/spaces/Grazon/ChitChat/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: ChitChat -emoji: 🏢 -colorFrom: yellow -colorTo: indigo -sdk: gradio -sdk_version: 3.40.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/HaloMaster/chinesesummary/fengshen/examples/pretrain_t5/finetune_unimc_randeng_t5_char_57M.sh b/spaces/HaloMaster/chinesesummary/fengshen/examples/pretrain_t5/finetune_unimc_randeng_t5_char_57M.sh deleted file mode 100644 index fccf833bdc954707bdc94d6bef3821239006a2c6..0000000000000000000000000000000000000000 --- a/spaces/HaloMaster/chinesesummary/fengshen/examples/pretrain_t5/finetune_unimc_randeng_t5_char_57M.sh +++ /dev/null @@ -1,129 +0,0 @@ -#!/bin/bash -#SBATCH --job-name=finetune_unimc_randeng_t5_char_57M -#SBATCH --nodes=1 -#SBATCH --ntasks-per-node=8 -#SBATCH --gres=gpu:8 # number of gpus -#SBATCH --cpus-per-task=32 # cpu-cores per task (>1 if multi-threaded tasks) -#SBATCH -o /cognitive_comp/ganruyi/experiments/randeng_t5_char_57M/%x-%j.log -#SBATCH -e /cognitive_comp/ganruyi/experiments/randeng_t5_char_57M/%x-%j.err - -set -x -e - -echo "START TIME: $(date)" -MICRO_BATCH_SIZE=64 -ROOT_DIR=/cognitive_comp/ganruyi/experiments/finetune_unimc_randeng_t5_char_57M/ -if [ ! -d ${ROOT_DIR} ];then - mkdir ${ROOT_DIR} - echo ${ROOT_DIR} created!!!!!!!!!!!!!! -else - echo ${ROOT_DIR} exist!!!!!!!!!!!!!!! -fi - -ZERO_STAGE=1 - -config_json="$ROOT_DIR/ds_config.finetune_unimc_randeng_t5_char_57M.$SLURM_JOBID.json" -export MASTER_PORT=$[RANDOM%10000+30000] -export CUDA_VISIBLE_DEVICES='6' - -cat < $config_json -{ - "train_micro_batch_size_per_gpu": ${MICRO_BATCH_SIZE}, - "steps_per_print": 100, - "gradient_clipping": 1.0, - "zero_optimization": { - "stage": $ZERO_STAGE, - "contiguous_gradients": false, - "overlap_comm": true, - "reduce_scatter": true, - "reduce_bucket_size": 50000000, - "allgather_bucket_size": 500000000 - }, - "optimizer": { - "type": "Adam", - "params": { - "lr": 1e-4, - "weight_decay": 1e-2 - } - }, - "scheduler": { - "params": { - "warmup_max_lr": 1e-04, - "warmup_min_lr": 1e-05, - "total_num_steps": 240000, - "warmup_num_steps" : 10000 - }, - "type": "WarmupDecayLR" - }, - "zero_allow_untested_optimizer": false, - "fp16": { - "enabled": true, - "loss_scale": 0, - "loss_scale_window": 1000, - "hysteresis": 2, - "min_loss_scale": 1 - }, - "activation_checkpointing": { - "partition_activations": false, - "contiguous_memory_optimization": false - }, - "wall_clock_breakdown": false -} -EOT - -export PL_DEEPSPEED_CONFIG_PATH=$config_json -export TORCH_EXTENSIONS_DIR=/cognitive_comp/ganruyi/tmp/torch_extendsions -# strategy=ddp -strategy=deepspeed_stage_1 - -TRAINER_ARGS=" - --max_epochs 1 \ - --gpus 1 \ - --num_nodes 1 \ - --strategy ${strategy} \ - --default_root_dir $ROOT_DIR \ - --dirpath $ROOT_DIR/ckpt \ - --save_top_k 3 \ - --every_n_train_steps 100000 \ - --monitor train_loss \ - --mode min \ - --save_last \ - --val_check_interval 0.1 \ - --dataset_num_workers 4 \ - --dataloader_num_workers 4 \ - --replace_sampler_ddp False \ -" -# --accumulate_grad_batches 8 \ 
-TRAIN_DATA_DIR=/cognitive_comp/yangping/data/unidata/multiplechoice/pretraining_alldata/alldata/train.json -VALID_DATA_DIR=/cognitive_comp/yangping/data/unidata/multiplechoice/pretraining_alldata/alldata/dev.json - -DATA_ARGS=" - --train_batchsize $MICRO_BATCH_SIZE \ - --valid_batchsize $MICRO_BATCH_SIZE \ - --train_data_path ${TRAIN_DATA_DIR} \ - --valid_data_path ${TRAIN_DATA_DIR} \ - --max_seq_length 512 \ -" - -MODEL_ARGS=" - --pretrained_model_path /cognitive_comp/ganruyi/experiments/randeng_t5_char_57M/randeng_t5_char_57M \ - --tokenizer_type bert_tokenizer \ -" - -SCRIPTS_PATH=/cognitive_comp/ganruyi/Fengshenbang-LM/fengshen/examples/pretrain_t5/finetune_t5.py - -export CMD=" \ - $SCRIPTS_PATH \ - $TRAINER_ARGS \ - $MODEL_ARGS \ - $DATA_ARGS \ - " - -echo $CMD -/home/ganruyi/anaconda3/bin/python $CMD -# SINGULARITY_PATH=/cognitive_comp/ganruyi/pytorch21_06_py3_docker_image_v2.sif -# srun singularity exec --nv -B /cognitive_comp/:/cognitive_comp/ $SINGULARITY_PATH bash -c '/home/ganruyi/anaconda3/bin/python $CMD' - -# source activate base -# python $CMD -# srun --nodes=1 --gres=gpu:8 --ntasks-per-node=8 --cpus-per-task=30 --jobid=171866 -e %x-%j.err -o %x-%j.log python $CMD - diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/character_token_embedder.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/character_token_embedder.py deleted file mode 100644 index 181221b61b9f76453b67e3b848b198620dce912c..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/character_token_embedder.py +++ /dev/null @@ -1,214 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import logging -from typing import List, Tuple - -import torch -import torch.nn.functional as F -from fairseq.data import Dictionary -from torch import nn - - -CHAR_PAD_IDX = 0 -CHAR_EOS_IDX = 257 - - -logger = logging.getLogger(__name__) - - -class CharacterTokenEmbedder(torch.nn.Module): - def __init__( - self, - vocab: Dictionary, - filters: List[Tuple[int, int]], - char_embed_dim: int, - word_embed_dim: int, - highway_layers: int, - max_char_len: int = 50, - char_inputs: bool = False, - ): - super(CharacterTokenEmbedder, self).__init__() - - self.onnx_trace = False - self.embedding_dim = word_embed_dim - self.max_char_len = max_char_len - self.char_embeddings = nn.Embedding(257, char_embed_dim, padding_idx=0) - self.symbol_embeddings = nn.Parameter(torch.FloatTensor(2, word_embed_dim)) - self.eos_idx, self.unk_idx = 0, 1 - self.char_inputs = char_inputs - - self.convolutions = nn.ModuleList() - for width, out_c in filters: - self.convolutions.append( - nn.Conv1d(char_embed_dim, out_c, kernel_size=width) - ) - - last_dim = sum(f[1] for f in filters) - - self.highway = Highway(last_dim, highway_layers) if highway_layers > 0 else None - - self.projection = nn.Linear(last_dim, word_embed_dim) - - assert ( - vocab is not None or char_inputs - ), "vocab must be set if not using char inputs" - self.vocab = None - if vocab is not None: - self.set_vocab(vocab, max_char_len) - - self.reset_parameters() - - def prepare_for_onnx_export_(self): - self.onnx_trace = True - - def set_vocab(self, vocab, max_char_len): - word_to_char = torch.LongTensor(len(vocab), max_char_len) - - truncated = 0 - for i in range(len(vocab)): - if i < vocab.nspecial: - char_idxs = [0] * max_char_len - else: - chars = vocab[i].encode() - # +1 for padding - char_idxs = [c + 1 for c in chars] + [0] * (max_char_len - len(chars)) - if len(char_idxs) > max_char_len: - truncated += 1 - char_idxs = char_idxs[:max_char_len] - word_to_char[i] = torch.LongTensor(char_idxs) - - if truncated > 0: - logger.info( - "truncated {} words longer than {} characters".format( - truncated, max_char_len - ) - ) - - self.vocab = vocab - self.word_to_char = word_to_char - - @property - def padding_idx(self): - return Dictionary().pad() if self.vocab is None else self.vocab.pad() - - def reset_parameters(self): - nn.init.xavier_normal_(self.char_embeddings.weight) - nn.init.xavier_normal_(self.symbol_embeddings) - nn.init.xavier_uniform_(self.projection.weight) - - nn.init.constant_( - self.char_embeddings.weight[self.char_embeddings.padding_idx], 0.0 - ) - nn.init.constant_(self.projection.bias, 0.0) - - def forward( - self, - input: torch.Tensor, - ): - if self.char_inputs: - chars = input.view(-1, self.max_char_len) - pads = chars[:, 0].eq(CHAR_PAD_IDX) - eos = chars[:, 0].eq(CHAR_EOS_IDX) - if eos.any(): - if self.onnx_trace: - chars = torch.where(eos.unsqueeze(1), chars.new_zeros(1), chars) - else: - chars[eos] = 0 - - unk = None - else: - flat_words = input.view(-1) - chars = self.word_to_char[flat_words.type_as(self.word_to_char)].type_as( - input - ) - pads = flat_words.eq(self.vocab.pad()) - eos = flat_words.eq(self.vocab.eos()) - unk = flat_words.eq(self.vocab.unk()) - - word_embs = self._convolve(chars) - if self.onnx_trace: - if pads.any(): - word_embs = torch.where( - pads.unsqueeze(1), word_embs.new_zeros(1), word_embs - ) - if eos.any(): - word_embs = torch.where( - eos.unsqueeze(1), self.symbol_embeddings[self.eos_idx], word_embs - ) - if unk is not None and unk.any(): - word_embs = torch.where( - unk.unsqueeze(1), 
self.symbol_embeddings[self.unk_idx], word_embs - ) - else: - if pads.any(): - word_embs[pads] = 0 - if eos.any(): - word_embs[eos] = self.symbol_embeddings[self.eos_idx] - if unk is not None and unk.any(): - word_embs[unk] = self.symbol_embeddings[self.unk_idx] - - return word_embs.view(input.size()[:2] + (-1,)) - - def _convolve( - self, - char_idxs: torch.Tensor, - ): - char_embs = self.char_embeddings(char_idxs) - char_embs = char_embs.transpose(1, 2) # BTC -> BCT - - conv_result = [] - - for conv in self.convolutions: - x = conv(char_embs) - x, _ = torch.max(x, -1) - x = F.relu(x) - conv_result.append(x) - - x = torch.cat(conv_result, dim=-1) - - if self.highway is not None: - x = self.highway(x) - x = self.projection(x) - - return x - - -class Highway(torch.nn.Module): - """ - A `Highway layer `_. - Adopted from the AllenNLP implementation. - """ - - def __init__(self, input_dim: int, num_layers: int = 1): - super(Highway, self).__init__() - self.input_dim = input_dim - self.layers = nn.ModuleList( - [nn.Linear(input_dim, input_dim * 2) for _ in range(num_layers)] - ) - self.activation = nn.ReLU() - - self.reset_parameters() - - def reset_parameters(self): - for layer in self.layers: - # As per comment in AllenNLP: - # We should bias the highway layer to just carry its input forward. We do that by - # setting the bias on `B(x)` to be positive, because that means `g` will be biased to - # be high, so we will carry the input forward. The bias on `B(x)` is the second half - # of the bias vector in each Linear layer. - nn.init.constant_(layer.bias[self.input_dim :], 1) - - nn.init.constant_(layer.bias[: self.input_dim], 0) - nn.init.xavier_normal_(layer.weight) - - def forward(self, x: torch.Tensor): - for layer in self.layers: - projection = layer(x) - proj_x, gate = projection.chunk(2, dim=-1) - proj_x = self.activation(proj_x) - gate = torch.sigmoid(gate) - x = gate * x + (gate.new_tensor([1]) - gate) * proj_x - return x diff --git a/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/install.sh b/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/install.sh deleted file mode 100644 index 51e038d5a0098f21d4efd8051a15b7f0cdeb4b73..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/install.sh +++ /dev/null @@ -1,6 +0,0 @@ -cd src/glow_tts/monotonic_align/ -pip install . 
-cd ../../../ - -# torch -pip install torch==1.7.1+cu110 torchvision==0.8.2+cu110 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html diff --git a/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/src/hifi_gan/train.py b/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/src/hifi_gan/train.py deleted file mode 100644 index 709e085d019eb98006b26555f7fe2582d759efa6..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/src/hifi_gan/train.py +++ /dev/null @@ -1,400 +0,0 @@ -import warnings - -warnings.simplefilter(action="ignore", category=FutureWarning) -import itertools -import os -import time -import argparse -import json -import torch -import torch.nn.functional as F -from torch.utils.tensorboard import SummaryWriter -from torch.utils.data import DistributedSampler, DataLoader -import torch.multiprocessing as mp -from torch.distributed import init_process_group -from torch.nn.parallel import DistributedDataParallel -from env import AttrDict, build_env -from meldataset import MelDataset, mel_spectrogram, get_dataset_filelist -from models import ( - Generator, - MultiPeriodDiscriminator, - MultiScaleDiscriminator, - feature_loss, - generator_loss, - discriminator_loss, -) -from utils import plot_spectrogram, scan_checkpoint, load_checkpoint, save_checkpoint - -torch.backends.cudnn.benchmark = True - - -def train(rank, a, h): - if h.num_gpus > 1: - init_process_group( - backend=h.dist_config["dist_backend"], - init_method=h.dist_config["dist_url"], - world_size=h.dist_config["world_size"] * h.num_gpus, - rank=rank, - ) - - torch.cuda.manual_seed(h.seed) - device = torch.device("cuda:{:d}".format(rank)) - - generator = Generator(h).to(device) - mpd = MultiPeriodDiscriminator().to(device) - msd = MultiScaleDiscriminator().to(device) - - if rank == 0: - print(generator) - os.makedirs(a.checkpoint_path, exist_ok=True) - print("checkpoints directory : ", a.checkpoint_path) - - if os.path.isdir(a.checkpoint_path): - cp_g = scan_checkpoint(a.checkpoint_path, "g_") - cp_do = scan_checkpoint(a.checkpoint_path, "do_") - - steps = 0 - if cp_g is None or cp_do is None: - state_dict_do = None - last_epoch = -1 - else: - state_dict_g = load_checkpoint(cp_g, device) - state_dict_do = load_checkpoint(cp_do, device) - generator.load_state_dict(state_dict_g["generator"]) - mpd.load_state_dict(state_dict_do["mpd"]) - msd.load_state_dict(state_dict_do["msd"]) - steps = state_dict_do["steps"] + 1 - last_epoch = state_dict_do["epoch"] - - if h.num_gpus > 1: - generator = DistributedDataParallel(generator, device_ids=[rank]).to(device) - mpd = DistributedDataParallel(mpd, device_ids=[rank]).to(device) - msd = DistributedDataParallel(msd, device_ids=[rank]).to(device) - - optim_g = torch.optim.AdamW( - generator.parameters(), h.learning_rate, betas=[h.adam_b1, h.adam_b2] - ) - optim_d = torch.optim.AdamW( - itertools.chain(msd.parameters(), mpd.parameters()), - h.learning_rate, - betas=[h.adam_b1, h.adam_b2], - ) - - if state_dict_do is not None: - optim_g.load_state_dict(state_dict_do["optim_g"]) - optim_d.load_state_dict(state_dict_do["optim_d"]) - - scheduler_g = torch.optim.lr_scheduler.ExponentialLR( - optim_g, gamma=h.lr_decay, last_epoch=last_epoch - ) - scheduler_d = torch.optim.lr_scheduler.ExponentialLR( - optim_d, gamma=h.lr_decay, last_epoch=last_epoch - ) - - training_filelist, validation_filelist = get_dataset_filelist(a) - - trainset = MelDataset( - training_filelist, - h.segment_size, - h.n_fft, - h.num_mels, - h.hop_size, - h.win_size, - 
h.sampling_rate, - h.fmin, - h.fmax, - n_cache_reuse=0, - shuffle=False if h.num_gpus > 1 else True, - fmax_loss=h.fmax_for_loss, - device=device, - fine_tuning=a.fine_tuning, - base_mels_path=a.input_mels_dir, - ) - - train_sampler = DistributedSampler(trainset) if h.num_gpus > 1 else None - - train_loader = DataLoader( - trainset, - num_workers=h.num_workers, - shuffle=False, - sampler=train_sampler, - batch_size=h.batch_size, - pin_memory=True, - drop_last=True, - ) - - if rank == 0: - validset = MelDataset( - validation_filelist, - h.segment_size, - h.n_fft, - h.num_mels, - h.hop_size, - h.win_size, - h.sampling_rate, - h.fmin, - h.fmax, - False, - False, - n_cache_reuse=0, - fmax_loss=h.fmax_for_loss, - device=device, - fine_tuning=a.fine_tuning, - base_mels_path=a.input_mels_dir, - ) - validation_loader = DataLoader( - validset, - num_workers=1, - shuffle=False, - sampler=None, - batch_size=1, - pin_memory=True, - drop_last=True, - ) - - sw = SummaryWriter(os.path.join(a.logs_path)) - - generator.train() - mpd.train() - msd.train() - for epoch in range(max(0, last_epoch), a.training_epochs): - if rank == 0: - start = time.time() - print("Epoch: {}".format(epoch + 1)) - - if h.num_gpus > 1: - train_sampler.set_epoch(epoch) - - for i, batch in enumerate(train_loader): - if rank == 0: - start_b = time.time() - x, y, _, y_mel = batch - x = torch.autograd.Variable(x.to(device, non_blocking=True)) - y = torch.autograd.Variable(y.to(device, non_blocking=True)) - y_mel = torch.autograd.Variable(y_mel.to(device, non_blocking=True)) - y = y.unsqueeze(1) - - y_g_hat = generator(x) - y_g_hat_mel = mel_spectrogram( - y_g_hat.squeeze(1), - h.n_fft, - h.num_mels, - h.sampling_rate, - h.hop_size, - h.win_size, - h.fmin, - h.fmax_for_loss, - ) - - optim_d.zero_grad() - - # MPD - y_df_hat_r, y_df_hat_g, _, _ = mpd(y, y_g_hat.detach()) - loss_disc_f, losses_disc_f_r, losses_disc_f_g = discriminator_loss( - y_df_hat_r, y_df_hat_g - ) - - # MSD - y_ds_hat_r, y_ds_hat_g, _, _ = msd(y, y_g_hat.detach()) - loss_disc_s, losses_disc_s_r, losses_disc_s_g = discriminator_loss( - y_ds_hat_r, y_ds_hat_g - ) - - loss_disc_all = loss_disc_s + loss_disc_f - - loss_disc_all.backward() - optim_d.step() - - # Generator - optim_g.zero_grad() - - # L1 Mel-Spectrogram Loss - loss_mel = F.l1_loss(y_mel, y_g_hat_mel) * 45 - - y_df_hat_r, y_df_hat_g, fmap_f_r, fmap_f_g = mpd(y, y_g_hat) - y_ds_hat_r, y_ds_hat_g, fmap_s_r, fmap_s_g = msd(y, y_g_hat) - loss_fm_f = feature_loss(fmap_f_r, fmap_f_g) - loss_fm_s = feature_loss(fmap_s_r, fmap_s_g) - loss_gen_f, losses_gen_f = generator_loss(y_df_hat_g) - loss_gen_s, losses_gen_s = generator_loss(y_ds_hat_g) - loss_gen_all = loss_gen_s + loss_gen_f + loss_fm_s + loss_fm_f + loss_mel - - loss_gen_all.backward() - optim_g.step() - - if rank == 0: - # STDOUT logging - if steps % a.stdout_interval == 0: - with torch.no_grad(): - mel_error = F.l1_loss(y_mel, y_g_hat_mel).item() - - print( - "Steps : {:d}, Gen Loss Total : {:4.3f}, Mel-Spec. 
Error : {:4.3f}, s/b : {:4.3f}".format( - steps, loss_gen_all, mel_error, time.time() - start_b - ) - ) - - # checkpointing - if steps % a.checkpoint_interval == 0 and steps != 0: - checkpoint_path = "{}/g_{:08d}".format(a.checkpoint_path, steps) - save_checkpoint( - checkpoint_path, - { - "generator": ( - generator.module if h.num_gpus > 1 else generator - ).state_dict() - }, - ) - checkpoint_path = "{}/do_{:08d}".format(a.checkpoint_path, steps) - save_checkpoint( - checkpoint_path, - { - "mpd": (mpd.module if h.num_gpus > 1 else mpd).state_dict(), - "msd": (msd.module if h.num_gpus > 1 else msd).state_dict(), - "optim_g": optim_g.state_dict(), - "optim_d": optim_d.state_dict(), - "steps": steps, - "epoch": epoch, - }, - ) - - # Tensorboard summary logging - if steps % a.summary_interval == 0: - sw.add_scalar("training/gen_loss_total", loss_gen_all, steps) - sw.add_scalar("training/mel_spec_error", mel_error, steps) - - # Validation - if steps % a.validation_interval == 0: # and steps != 0: - generator.eval() - torch.cuda.empty_cache() - val_err_tot = 0 - with torch.no_grad(): - for j, batch in enumerate(validation_loader): - x, y, _, y_mel = batch - y_g_hat = generator(x.to(device)) - y_mel = torch.autograd.Variable( - y_mel.to(device, non_blocking=True) - ) - y_g_hat_mel = mel_spectrogram( - y_g_hat.squeeze(1), - h.n_fft, - h.num_mels, - h.sampling_rate, - h.hop_size, - h.win_size, - h.fmin, - h.fmax_for_loss, - ) - val_err_tot += F.l1_loss(y_mel, y_g_hat_mel).item() - - if j <= 4: - if steps == 0: - sw.add_audio( - "gt/y_{}".format(j), - y[0], - steps, - h.sampling_rate, - ) - sw.add_figure( - "gt/y_spec_{}".format(j), - plot_spectrogram(x[0]), - steps, - ) - - sw.add_audio( - "generated/y_hat_{}".format(j), - y_g_hat[0], - steps, - h.sampling_rate, - ) - y_hat_spec = mel_spectrogram( - y_g_hat.squeeze(1), - h.n_fft, - h.num_mels, - h.sampling_rate, - h.hop_size, - h.win_size, - h.fmin, - h.fmax, - ) - sw.add_figure( - "generated/y_hat_spec_{}".format(j), - plot_spectrogram( - y_hat_spec.squeeze(0).cpu().numpy() - ), - steps, - ) - - val_err = val_err_tot / (j + 1) - sw.add_scalar("validation/mel_spec_error", val_err, steps) - - generator.train() - - steps += 1 - - scheduler_g.step() - scheduler_d.step() - - if rank == 0: - print( - "Time taken for epoch {} is {} sec\n".format( - epoch + 1, int(time.time() - start) - ) - ) - - -def main(): - print("Initializing Training Process..") - - parser = argparse.ArgumentParser() - - parser.add_argument("--group_name", default=None) - parser.add_argument("--input_wavs_dir", default="LJSpeech-1.1/wavs") - parser.add_argument("--input_mels_dir", default="ft_dataset") - parser.add_argument("--input_training_file", default="LJSpeech-1.1/training.txt") - parser.add_argument( - "--input_validation_file", default="LJSpeech-1.1/validation.txt" - ) - parser.add_argument("--checkpoint_path", default="cp_hifigan") - parser.add_argument("--logs_path", default="") - parser.add_argument("--config", default="") - parser.add_argument("--training_epochs", default=3100, type=int) - parser.add_argument("--stdout_interval", default=5, type=int) - parser.add_argument("--checkpoint_interval", default=5000, type=int) - parser.add_argument("--summary_interval", default=100, type=int) - parser.add_argument("--validation_interval", default=1000, type=int) - parser.add_argument("--fine_tuning", default=False, type=bool) - - a = parser.parse_args() - - with open(a.config) as f: - data = f.read() - - json_config = json.loads(data) - h = AttrDict(json_config) - 
build_env(a.config, "config.json", a.checkpoint_path) - - torch.manual_seed(h.seed) - if torch.cuda.is_available(): - torch.cuda.manual_seed(h.seed) - h.num_gpus = torch.cuda.device_count() - h.batch_size = int(h.batch_size / h.num_gpus) - print("Batch size per GPU :", h.batch_size) - else: - pass - - if h.num_gpus > 1: - mp.spawn( - train, - nprocs=h.num_gpus, - args=( - a, - h, - ), - ) - else: - train(0, a, h) - - -if __name__ == "__main__": - main() diff --git a/spaces/Heshwa/html-code-generation-from-images-with-deep-neural-networks/README.md b/spaces/Heshwa/html-code-generation-from-images-with-deep-neural-networks/README.md deleted file mode 100644 index 759995579c16ebbe1887add04be3ceba493e6ba7..0000000000000000000000000000000000000000 --- a/spaces/Heshwa/html-code-generation-from-images-with-deep-neural-networks/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Image to HTML Code Demo -emoji: 🧑‍💻 -colorFrom: pink -colorTo: purple -sdk: gradio -sdk_version: 3.1.4 -app_file: app.py -pinned: false -license: afl-3.0 -duplicated_from: taneemishere/html-code-generation-from-images-with-deep-neural-networks ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Hila/RobustViT/SegmentationTest/data/Imagenet.py b/spaces/Hila/RobustViT/SegmentationTest/data/Imagenet.py deleted file mode 100644 index 82348dd35c559087db90ae241f84659ad4e6af25..0000000000000000000000000000000000000000 --- a/spaces/Hila/RobustViT/SegmentationTest/data/Imagenet.py +++ /dev/null @@ -1,74 +0,0 @@ -import os -import torch -import torch.utils.data as data -import numpy as np - -from PIL import Image -import h5py - -__all__ = ['ImagenetResults'] - - -class Imagenet_Segmentation(data.Dataset): - CLASSES = 2 - - def __init__(self, - path, - transform=None, - target_transform=None): - self.path = path - self.transform = transform - self.target_transform = target_transform - self.h5py = None - tmp = h5py.File(path, 'r') - self.data_length = len(tmp['/value/img']) - tmp.close() - del tmp - - def __getitem__(self, index): - - if self.h5py is None: - self.h5py = h5py.File(self.path, 'r') - - img = np.array(self.h5py[self.h5py['/value/img'][index, 0]]).transpose((2, 1, 0)) - target = np.array(self.h5py[self.h5py[self.h5py['/value/gt'][index, 0]][0, 0]]).transpose((1, 0)) - - img = Image.fromarray(img).convert('RGB') - target = Image.fromarray(target) - - if self.transform is not None: - img = self.transform(img) - - if self.target_transform is not None: - target = np.array(self.target_transform(target)).astype('int32') - target = torch.from_numpy(target).long() - - return img, target - - def __len__(self): - return self.data_length - - -class ImagenetResults(data.Dataset): - def __init__(self, path): - super(ImagenetResults, self).__init__() - - self.path = os.path.join(path, 'results.hdf5') - self.data = None - - print('Reading dataset length...') - with h5py.File(self.path, 'r') as f: - self.data_length = len(f['/image']) - - def __len__(self): - return self.data_length - - def __getitem__(self, item): - if self.data is None: - self.data = h5py.File(self.path, 'r') - - image = torch.tensor(self.data['image'][item]) - vis = torch.tensor(self.data['vis'][item]) - target = torch.tensor(self.data['target'][item]).long() - - return image, vis, target diff --git a/spaces/Hoodady/3DFuse/voxnerf/render.py b/spaces/Hoodady/3DFuse/voxnerf/render.py deleted file mode 100644 index fbd3a226214186f75fe70749ecb5182963f1f75c..0000000000000000000000000000000000000000 
--- a/spaces/Hoodady/3DFuse/voxnerf/render.py +++ /dev/null @@ -1,228 +0,0 @@ -import numpy as np -import torch -from my3d import unproject - - -def subpixel_rays_from_img(H, W, K, c2w_pose, normalize_dir=True, f=8): - assert c2w_pose[3, 3] == 1. - H, W = H * f, W * f - n = H * W - ys, xs = np.meshgrid(range(H), range(W), indexing="ij") - xy_coords = np.stack([xs, ys], axis=-1).reshape(n, 2) - - top_left = np.array([-0.5, -0.5]) + 1 / (2 * f) - xy_coords = top_left + xy_coords / f - - ro = c2w_pose[:, -1] - pts = unproject(K, xy_coords, depth=1) - pts = pts @ c2w_pose.T - rd = pts - ro - rd = rd[:, :3] - if normalize_dir: - rd = rd / np.linalg.norm(rd, axis=-1, keepdims=True) - ro = np.tile(ro[:3], (n, 1)) - return ro, rd - - -def rays_from_img(H, W, K, c2w_pose, normalize_dir=True): - assert c2w_pose[3, 3] == 1. - n = H * W - ys, xs = np.meshgrid(range(H), range(W), indexing="ij") - xy_coords = np.stack([xs, ys], axis=-1).reshape(n, 2) - - ro = c2w_pose[:, -1] - pts = unproject(K, xy_coords, depth=1) - pts = pts @ c2w_pose.T - rd = pts - ro # equivalently can subtract [0,0,0,1] before pose transform - rd = rd[:, :3] - if normalize_dir: - rd = rd / np.linalg.norm(rd, axis=-1, keepdims=True) - ro = np.tile(ro[:3], (n, 1)) - return ro, rd - - -def ray_box_intersect(ro, rd, aabb): - """ - Intersection of ray with axis-aligned bounding box - This routine works for arbitrary dimensions; commonly d = 2 or 3 - only works for numpy, not torch (which has slightly diff api for min, max, and clone) - - Args: - ro: [n, d] ray origin - rd: [n, d] ray direction (assumed to be already normalized; - if not still fine, meaning of t as time of flight holds true) - aabb: [d, 2] bbox bound on each dim - Return: - is_intersect: [n,] of bool, whether the particular ray intersects the bbox - t_min: [n,] ray entrance time - t_max: [n,] ray exit time - """ - n = ro.shape[0] - d = aabb.shape[0] - assert aabb.shape == (d, 2) - assert ro.shape == (n, d) and rd.shape == (n, d) - - rd = rd.copy() - rd[rd == 0] = 1e-6 # avoid div overflow; logically safe to give it big t - - ro = ro.reshape(n, d, 1) - rd = rd.reshape(n, d, 1) - ts = (aabb - ro) / rd # [n, d, 2] - t_min = ts.min(-1).max(-1) # [n,] last of entrance - t_max = ts.max(-1).min(-1) # [n,] first of exit - is_intersect = t_min < t_max - - return is_intersect, t_min, t_max - - -def as_torch_tsrs(device, *args): - ret = [] - for elem in args: - target_dtype = torch.float32 if np.issubdtype(elem.dtype, np.floating) else None - ret.append( - torch.as_tensor(elem, dtype=target_dtype, device=device) - ) - return ret - - -def group_mask_filter(mask, *items): - return [elem[mask] for elem in items] - - -def mask_back_fill(tsr, N, inds, base_value=1.0): - shape = [N, *tsr.shape[1:]] - canvas = base_value * np.ones_like(tsr, shape=shape) - canvas[inds] = tsr - return canvas - - -def render_one_view(model, aabb, H, W, K, pose): - N = H * W - bs = max(W * 5, 4096) # render 5 rows; original batch size 4096, now 4000; - - ro, rd = rays_from_img(H, W, K, pose) - ro, rd, t_min, t_max, intsct_inds = scene_box_filter(ro, rd, aabb) - n = len(ro) - # print(f"{n} vs {N}") # n can be smaller than N since some rays do not intsct aabb - - # n = n // 1 # actual number of rays to render; only needed for fast debugging - - dev = model.device - ro, rd, t_min, t_max = as_torch_tsrs(dev, ro, rd, t_min, t_max) - rgbs = torch.zeros(n, 3, device=dev) - depth = torch.zeros(n, 1, device=dev) - - with torch.no_grad(): - for i in range(int(np.ceil(n / bs))): - s = i * bs - e = min(n, s + bs) - 
_rgbs, _depth, _ = render_ray_bundle( - model, ro[s:e], rd[s:e], t_min[s:e], t_max[s:e] - ) - rgbs[s:e] = _rgbs - depth[s:e] = _depth - - rgbs, depth = rgbs.cpu().numpy(), depth.cpu().numpy() - - base_color = 1.0 # empty region needs to be white - rgbs = mask_back_fill(rgbs, N, intsct_inds, base_color).reshape(H, W, 3) - depth = mask_back_fill(depth, N, intsct_inds, base_color).reshape(H, W) - return rgbs, depth - - -def scene_box_filter(ro, rd, aabb): - N = len(ro) - - _, t_min, t_max = ray_box_intersect(ro, rd, aabb) - # do not render what's behind the ray origin - t_min, t_max = np.maximum(t_min, 0), np.maximum(t_max, 0) - # can test intersect logic by reducing the focal length - is_intsct = t_min < t_max - ro, rd, t_min, t_max = group_mask_filter(is_intsct, ro, rd, t_min, t_max) - intsct_inds = np.arange(N)[is_intsct] - return ro, rd, t_min, t_max, intsct_inds - - -def render_ray_bundle(model, ro, rd, t_min, t_max): - """ - The working shape is (k, n, 3) where k is num of samples per ray, n the ray batch size - During integration the reduction is applied on k - - chain of filtering - starting with ro, rd (from cameras), and a scene bbox - - rays that do not intersect scene bbox; sample pts that fall outside the bbox - - samples that do not fall within alpha mask - - samples whose densities are very low; no need to compute colors on them - """ - num_samples, step_size = model.get_num_samples((t_max - t_min).max()) - # print(num_samples) - n, k = len(ro), num_samples - # print(n,k) - ticks = step_size * torch.arange(k, device=ro.device) - ticks = ticks.view(k, 1, 1) - t_min = t_min.view(n, 1) - # t_min = t_min + step_size * torch.rand_like(t_min) # NOTE seems useless - t_max = t_max.view(n, 1) - dists = t_min + ticks # [n, 1], [k, 1, 1] -> [k, n, 1] - pts = ro + rd * dists # [n, 3], [n, 3], [k, n, 1] -> [k, n, 3] - mask = (ticks < (t_max - t_min)).squeeze(-1) # [k, 1, 1], [n, 1] -> [k, n, 1] -> [k, n] - smp_pts = pts[mask] - - if model.alphaMask is not None: - alphas = model.alphaMask.sample_alpha(smp_pts) - alpha_mask = alphas > 0 - mask[mask.clone()] = alpha_mask - smp_pts = pts[mask] - - σ = torch.zeros(k, n, device=ro.device) - σ[mask] = model.compute_density_feats(smp_pts) - weights = volume_rend_weights(σ, step_size) - mask = weights > model.ray_march_weight_thres - smp_pts = pts[mask] - - app_feats = model.compute_app_feats(smp_pts) - # viewdirs = rd.view(1, n, 3).expand(k, n, 3)[mask] # ray dirs for each point - # additional wild factors here as in nerf-w; wild factors are optimizable - c_dim = app_feats.shape[-1] - colors = torch.zeros(k, n, c_dim, device=ro.device) - colors[mask] = model.feats2color(app_feats) - - weights = weights.view(k, n, 1) # can be used to compute other expected vals e.g. depth - bg_weight = 1. - weights.sum(dim=0) # [n, 1] - - rgbs = (weights * colors).sum(dim=0) # [n, 3] - - if model.blend_bg_texture: - uv = spherical_xyz_to_uv(rd) - bg_feats = model.compute_bg(uv) - bg_color = model.feats2color(bg_feats) - rgbs = rgbs + bg_weight * bg_color - else: - rgbs = rgbs + bg_weight * 1. # blend white bg color - # print(rgbs.shape) - # rgbs = rgbs.clamp(0, 1) # don't clamp since this is can be SD latent features - - E_dists = (weights * dists).sum(dim=0) - bg_dist = 10. 
# blend bg distance; just don't make it too large - E_dists = E_dists + bg_weight * bg_dist - return rgbs, E_dists, weights.squeeze(-1) - - -def spherical_xyz_to_uv(xyz): - # xyz is Tensor of shape [N, 3], uv in [-1, 1] - x, y, z = xyz.t() # [N] - xy = (x ** 2 + y ** 2) ** 0.5 - u = torch.atan2(xy, z) / torch.pi # [N] - v = torch.atan2(y, x) / (torch.pi * 2) + 0.5 # [N] - uv = torch.stack([u, v], -1) # [N, 2] - uv = uv * 2 - 1 # [0, 1] -> [-1, 1] - return uv - - -def volume_rend_weights(σ, dist): - α = 1 - torch.exp(-σ * dist) - T = torch.ones_like(α) - T[1:] = (1 - α).cumprod(dim=0)[:-1] - assert (T >= 0).all() - weights = α * T - return weights diff --git a/spaces/HuggingFaceH4/open_llm_leaderboard/src/manage_collections.py b/spaces/HuggingFaceH4/open_llm_leaderboard/src/manage_collections.py deleted file mode 100644 index 294fb06af8d1eb522f48f4ecd83ec0e892fd5bb5..0000000000000000000000000000000000000000 --- a/spaces/HuggingFaceH4/open_llm_leaderboard/src/manage_collections.py +++ /dev/null @@ -1,75 +0,0 @@ -import os -import pandas as pd -from pandas import DataFrame -from huggingface_hub import get_collection, add_collection_item, update_collection_item, delete_collection_item -from huggingface_hub.utils._errors import HfHubHTTPError - -from src.get_model_info.hardocded_metadata.types import ModelType -from src.get_model_info.utils import AutoEvalColumn - -H4_TOKEN = os.environ.get("H4_TOKEN", None) - -path_to_collection = "open-llm-leaderboard/llm-leaderboard-best-models-652d6c7965a4619fb5c27a03" -intervals = { - "1B": pd.Interval(0, 1.5, closed="right"), - "3B": pd.Interval(2.5, 3.5, closed="neither"), - "7B": pd.Interval(6, 8, closed="neither"), - "13B": pd.Interval(10, 14, closed="neither"), - "30B":pd.Interval(25, 35, closed="neither"), - "65B": pd.Interval(60, 70, closed="neither"), -} - -def update_collections(df: DataFrame): - """This function updates the Open LLM Leaderboard model collection with the latest best models for - each size category and type. 
- """ - collection = get_collection(collection_slug=path_to_collection, token=H4_TOKEN) - params_column = pd.to_numeric(df[AutoEvalColumn.params.name], errors="coerce") - - cur_best_models = [] - - ix = 0 - for type in ModelType: - if type.value.name == "": continue - for size in intervals: - # We filter the df to gather the relevant models - type_emoji = [t[0] for t in type.value.symbol] - filtered_df = df[df[AutoEvalColumn.model_type_symbol.name].isin(type_emoji)] - - numeric_interval = pd.IntervalIndex([intervals[size]]) - mask = params_column.apply(lambda x: any(numeric_interval.contains(x))) - filtered_df = filtered_df.loc[mask] - - best_models = list(filtered_df.sort_values(AutoEvalColumn.average.name, ascending=False)[AutoEvalColumn.dummy.name]) - print(type.value.symbol, size, best_models[:10]) - - # We add them one by one to the leaderboard - for model in best_models: - ix += 1 - cur_len_collection = len(collection.items) - try: - collection = add_collection_item( - path_to_collection, - item_id=model, - item_type="model", - exists_ok=True, - note=f"Best {type.to_str(' ')} model of around {size} on the leaderboard today!", - token=H4_TOKEN - ) - if len(collection.items) > cur_len_collection: # we added an item - we make sure its position is correct - item_object_id = collection.items[-1].item_object_id - update_collection_item(collection_slug=path_to_collection, item_object_id=item_object_id, position=ix) - cur_len_collection = len(collection.items) - cur_best_models.append(model) - break - except HfHubHTTPError: - continue - - collection = get_collection(path_to_collection, token=H4_TOKEN) - for item in collection.items: - if item.item_id not in cur_best_models: - try: - delete_collection_item(collection_slug=path_to_collection, item_object_id=item.item_object_id, token=H4_TOKEN) - except HfHubHTTPError: - continue - diff --git a/spaces/HugoDzz/spaceship_drift/static/game/index.js b/spaces/HugoDzz/spaceship_drift/static/game/index.js deleted file mode 100644 index cfdeff503c5457680d39230d3341b71c27457752..0000000000000000000000000000000000000000 --- a/spaces/HugoDzz/spaceship_drift/static/game/index.js +++ /dev/null @@ -1,796 +0,0 @@ - -var Godot = (() => { - var _scriptDir = typeof document !== 'undefined' && document.currentScript ? 
document.currentScript.src : undefined; - - return ( -function(Godot) { - Godot = Godot || {}; - -var Module=typeof Godot!="undefined"?Godot:{};var readyPromiseResolve,readyPromiseReject;Module["ready"]=new Promise(function(resolve,reject){readyPromiseResolve=resolve;readyPromiseReject=reject});var moduleOverrides=Object.assign({},Module);var arguments_=[];var thisProgram="./this.program";var quit_=(status,toThrow)=>{throw toThrow};var ENVIRONMENT_IS_WEB=typeof window=="object";var ENVIRONMENT_IS_WORKER=typeof importScripts=="function";var ENVIRONMENT_IS_NODE=typeof process=="object"&&typeof process.versions=="object"&&typeof process.versions.node=="string";var scriptDirectory="";function locateFile(path){if(Module["locateFile"]){return Module["locateFile"](path,scriptDirectory)}return scriptDirectory+path}var read_,readAsync,readBinary,setWindowTitle;if(ENVIRONMENT_IS_WEB||ENVIRONMENT_IS_WORKER){if(ENVIRONMENT_IS_WORKER){scriptDirectory=self.location.href}else if(typeof document!="undefined"&&document.currentScript){scriptDirectory=document.currentScript.src}if(_scriptDir){scriptDirectory=_scriptDir}if(scriptDirectory.indexOf("blob:")!==0){scriptDirectory=scriptDirectory.substr(0,scriptDirectory.replace(/[?#].*/,"").lastIndexOf("/")+1)}else{scriptDirectory=""}{read_=url=>{var xhr=new XMLHttpRequest;xhr.open("GET",url,false);xhr.send(null);return xhr.responseText};if(ENVIRONMENT_IS_WORKER){readBinary=url=>{var xhr=new XMLHttpRequest;xhr.open("GET",url,false);xhr.responseType="arraybuffer";xhr.send(null);return new Uint8Array(xhr.response)}}readAsync=(url,onload,onerror)=>{var xhr=new XMLHttpRequest;xhr.open("GET",url,true);xhr.responseType="arraybuffer";xhr.onload=()=>{if(xhr.status==200||xhr.status==0&&xhr.response){onload(xhr.response);return}onerror()};xhr.onerror=onerror;xhr.send(null)}}setWindowTitle=title=>document.title=title}else{}var out=Module["print"]||console.log.bind(console);var err=Module["printErr"]||console.warn.bind(console);Object.assign(Module,moduleOverrides);moduleOverrides=null;if(Module["arguments"])arguments_=Module["arguments"];if(Module["thisProgram"])thisProgram=Module["thisProgram"];if(Module["quit"])quit_=Module["quit"];function warnOnce(text){if(!warnOnce.shown)warnOnce.shown={};if(!warnOnce.shown[text]){warnOnce.shown[text]=1;err(text)}}var tempRet0=0;var setTempRet0=value=>{tempRet0=value};var getTempRet0=()=>tempRet0;var wasmBinary;if(Module["wasmBinary"])wasmBinary=Module["wasmBinary"];var noExitRuntime=Module["noExitRuntime"]||false;if(typeof WebAssembly!="object"){abort("no native wasm support detected")}var wasmMemory;var ABORT=false;var EXITSTATUS;function assert(condition,text){if(!condition){abort(text)}}function getCFunc(ident){var func=Module["_"+ident];return func}function ccall(ident,returnType,argTypes,args,opts){var toC={"string":function(str){var ret=0;if(str!==null&&str!==undefined&&str!==0){var len=(str.length<<2)+1;ret=stackAlloc(len);stringToUTF8(str,ret,len)}return ret},"array":function(arr){var ret=stackAlloc(arr.length);writeArrayToMemory(arr,ret);return ret}};function convertReturnValue(ret){if(returnType==="string"){return UTF8ToString(ret)}if(returnType==="boolean")return Boolean(ret);return ret}var func=getCFunc(ident);var cArgs=[];var stack=0;if(args){for(var i=0;i=endIdx))++endPtr;if(endPtr-idx>16&&heapOrArray.buffer&&UTF8Decoder){return UTF8Decoder.decode(heapOrArray.subarray(idx,endPtr))}else{var str="";while(idx>10,56320|ch&1023)}}}return str}function UTF8ToString(ptr,maxBytesToRead){return 
ptr?UTF8ArrayToString(HEAPU8,ptr,maxBytesToRead):""}function stringToUTF8Array(str,heap,outIdx,maxBytesToWrite){if(!(maxBytesToWrite>0))return 0;var startIdx=outIdx;var endIdx=outIdx+maxBytesToWrite-1;for(var i=0;i=55296&&u<=57343){var u1=str.charCodeAt(++i);u=65536+((u&1023)<<10)|u1&1023}if(u<=127){if(outIdx>=endIdx)break;heap[outIdx++]=u}else if(u<=2047){if(outIdx+1>=endIdx)break;heap[outIdx++]=192|u>>6;heap[outIdx++]=128|u&63}else if(u<=65535){if(outIdx+2>=endIdx)break;heap[outIdx++]=224|u>>12;heap[outIdx++]=128|u>>6&63;heap[outIdx++]=128|u&63}else{if(outIdx+3>=endIdx)break;heap[outIdx++]=240|u>>18;heap[outIdx++]=128|u>>12&63;heap[outIdx++]=128|u>>6&63;heap[outIdx++]=128|u&63}}heap[outIdx]=0;return outIdx-startIdx}function stringToUTF8(str,outPtr,maxBytesToWrite){return stringToUTF8Array(str,HEAPU8,outPtr,maxBytesToWrite)}function lengthBytesUTF8(str){var len=0;for(var i=0;i=55296&&u<=57343)u=65536+((u&1023)<<10)|str.charCodeAt(++i)&1023;if(u<=127)++len;else if(u<=2047)len+=2;else if(u<=65535)len+=3;else len+=4}return len}function allocateUTF8(str){var size=lengthBytesUTF8(str)+1;var ret=_malloc(size);if(ret)stringToUTF8Array(str,HEAP8,ret,size);return ret}function allocateUTF8OnStack(str){var size=lengthBytesUTF8(str)+1;var ret=stackAlloc(size);stringToUTF8Array(str,HEAP8,ret,size);return ret}function writeArrayToMemory(array,buffer){HEAP8.set(array,buffer)}function writeAsciiToMemory(str,buffer,dontAddNull){for(var i=0;i>0]=str.charCodeAt(i)}if(!dontAddNull)HEAP8[buffer>>0]=0}var buffer,HEAP8,HEAPU8,HEAP16,HEAPU16,HEAP32,HEAPU32,HEAPF32,HEAPF64;function updateGlobalBufferAndViews(buf){buffer=buf;Module["HEAP8"]=HEAP8=new Int8Array(buf);Module["HEAP16"]=HEAP16=new Int16Array(buf);Module["HEAP32"]=HEAP32=new Int32Array(buf);Module["HEAPU8"]=HEAPU8=new Uint8Array(buf);Module["HEAPU16"]=HEAPU16=new Uint16Array(buf);Module["HEAPU32"]=HEAPU32=new Uint32Array(buf);Module["HEAPF32"]=HEAPF32=new Float32Array(buf);Module["HEAPF64"]=HEAPF64=new Float64Array(buf)}var INITIAL_MEMORY=Module["INITIAL_MEMORY"]||33554432;var wasmTable;var __ATPRERUN__=[];var __ATINIT__=[];var __ATMAIN__=[];var __ATEXIT__=[];var __ATPOSTRUN__=[];var runtimeInitialized=false;var runtimeExited=false;var runtimeKeepaliveCounter=0;function keepRuntimeAlive(){return noExitRuntime||runtimeKeepaliveCounter>0}function preRun(){if(Module["preRun"]){if(typeof Module["preRun"]=="function")Module["preRun"]=[Module["preRun"]];while(Module["preRun"].length){addOnPreRun(Module["preRun"].shift())}}callRuntimeCallbacks(__ATPRERUN__)}function initRuntime(){runtimeInitialized=true;if(!Module["noFSInit"]&&!FS.init.initialized)FS.init();FS.ignorePermissions=false;TTY.init();SOCKFS.root=FS.mount(SOCKFS,{},null);callRuntimeCallbacks(__ATINIT__)}function preMain(){callRuntimeCallbacks(__ATMAIN__)}function exitRuntime(){___funcs_on_exit();callRuntimeCallbacks(__ATEXIT__);FS.quit();TTY.shutdown();IDBFS.quit();runtimeExited=true}function postRun(){if(Module["postRun"]){if(typeof Module["postRun"]=="function")Module["postRun"]=[Module["postRun"]];while(Module["postRun"].length){addOnPostRun(Module["postRun"].shift())}}callRuntimeCallbacks(__ATPOSTRUN__)}function addOnPreRun(cb){__ATPRERUN__.unshift(cb)}function addOnInit(cb){__ATINIT__.unshift(cb)}function addOnPostRun(cb){__ATPOSTRUN__.unshift(cb)}var runDependencies=0;var runDependencyWatcher=null;var dependenciesFulfilled=null;function getUniqueRunDependency(id){return id}function 
addRunDependency(id){runDependencies++;if(Module["monitorRunDependencies"]){Module["monitorRunDependencies"](runDependencies)}}function removeRunDependency(id){runDependencies--;if(Module["monitorRunDependencies"]){Module["monitorRunDependencies"](runDependencies)}if(runDependencies==0){if(runDependencyWatcher!==null){clearInterval(runDependencyWatcher);runDependencyWatcher=null}if(dependenciesFulfilled){var callback=dependenciesFulfilled;dependenciesFulfilled=null;callback()}}}function abort(what){{if(Module["onAbort"]){Module["onAbort"](what)}}what="Aborted("+what+")";err(what);ABORT=true;EXITSTATUS=1;what+=". Build with -sASSERTIONS for more info.";var e=new WebAssembly.RuntimeError(what);readyPromiseReject(e);throw e}var dataURIPrefix="data:application/octet-stream;base64,";function isDataURI(filename){return filename.startsWith(dataURIPrefix)}var wasmBinaryFile;wasmBinaryFile="godot.javascript.opt.debug.wasm";if(!isDataURI(wasmBinaryFile)){wasmBinaryFile=locateFile(wasmBinaryFile)}function getBinary(file){try{if(file==wasmBinaryFile&&wasmBinary){return new Uint8Array(wasmBinary)}if(readBinary){return readBinary(file)}else{throw"both async and sync fetching of the wasm failed"}}catch(err){abort(err)}}function getBinaryPromise(){if(!wasmBinary&&(ENVIRONMENT_IS_WEB||ENVIRONMENT_IS_WORKER)){if(typeof fetch=="function"){return fetch(wasmBinaryFile,{credentials:"same-origin"}).then(function(response){if(!response["ok"]){throw"failed to load wasm binary file at '"+wasmBinaryFile+"'"}return response["arrayBuffer"]()}).catch(function(){return getBinary(wasmBinaryFile)})}}return Promise.resolve().then(function(){return getBinary(wasmBinaryFile)})}function createWasm(){var info={"a":asmLibraryArg};function receiveInstance(instance,module){var exports=instance.exports;Module["asm"]=exports;wasmMemory=Module["asm"]["ek"];updateGlobalBufferAndViews(wasmMemory.buffer);wasmTable=Module["asm"]["sk"];addOnInit(Module["asm"]["fk"]);removeRunDependency("wasm-instantiate")}addRunDependency("wasm-instantiate");function receiveInstantiationResult(result){receiveInstance(result["instance"])}function instantiateArrayBuffer(receiver){return getBinaryPromise().then(function(binary){return WebAssembly.instantiate(binary,info)}).then(function(instance){return instance}).then(receiver,function(reason){err("failed to asynchronously prepare wasm: "+reason);abort(reason)})}function instantiateAsync(){if(!wasmBinary&&typeof WebAssembly.instantiateStreaming=="function"&&!isDataURI(wasmBinaryFile)&&typeof fetch=="function"){return fetch(wasmBinaryFile,{credentials:"same-origin"}).then(function(response){var result=WebAssembly.instantiateStreaming(response,info);return result.then(receiveInstantiationResult,function(reason){err("wasm streaming compile failed: "+reason);err("falling back to ArrayBuffer instantiation");return instantiateArrayBuffer(receiveInstantiationResult)})})}else{return instantiateArrayBuffer(receiveInstantiationResult)}}if(Module["instantiateWasm"]){try{var exports=Module["instantiateWasm"](info,receiveInstance);return exports}catch(e){err("Module.instantiateWasm callback failed with error: "+e);return false}}instantiateAsync().catch(readyPromiseReject);return{}}var tempDouble;var tempI64;function callRuntimeCallbacks(callbacks){while(callbacks.length>0){var callback=callbacks.shift();if(typeof callback=="function"){callback(Module);continue}var func=callback.func;if(typeof 
func=="number"){if(callback.arg===undefined){getWasmTableEntry(func)()}else{getWasmTableEntry(func)(callback.arg)}}else{func(callback.arg===undefined?null:callback.arg)}}}function getValue(ptr,type="i8"){if(type.endsWith("*"))type="i32";switch(type){case"i1":return HEAP8[ptr>>0];case"i8":return HEAP8[ptr>>0];case"i16":return HEAP16[ptr>>1];case"i32":return HEAP32[ptr>>2];case"i64":return HEAP32[ptr>>2];case"float":return HEAPF32[ptr>>2];case"double":return Number(HEAPF64[ptr>>3]);default:abort("invalid type for getValue: "+type)}return null}function getWasmTableEntry(funcPtr){return wasmTable.get(funcPtr)}function handleException(e){if(e instanceof ExitStatus||e=="unwind"){return EXITSTATUS}quit_(1,e)}function setValue(ptr,value,type="i8"){if(type.endsWith("*"))type="i32";switch(type){case"i1":HEAP8[ptr>>0]=value;break;case"i8":HEAP8[ptr>>0]=value;break;case"i16":HEAP16[ptr>>1]=value;break;case"i32":HEAP32[ptr>>2]=value;break;case"i64":tempI64=[value>>>0,(tempDouble=value,+Math.abs(tempDouble)>=1?tempDouble>0?(Math.min(+Math.floor(tempDouble/4294967296),4294967295)|0)>>>0:~~+Math.ceil((tempDouble-+(~~tempDouble>>>0))/4294967296)>>>0:0)],HEAP32[ptr>>2]=tempI64[0],HEAP32[ptr+4>>2]=tempI64[1];break;case"float":HEAPF32[ptr>>2]=value;break;case"double":HEAPF64[ptr>>3]=value;break;default:abort("invalid type for setValue: "+type)}}function ___assert_fail(condition,filename,line,func){abort("Assertion failed: "+UTF8ToString(condition)+", at: "+[filename?UTF8ToString(filename):"unknown filename",line,func?UTF8ToString(func):"unknown function"])}function ___call_sighandler(fp,sig){getWasmTableEntry(fp)(sig)}var PATH={isAbs:path=>path.charAt(0)==="/",splitPath:filename=>{var splitPathRe=/^(\/?|)([\s\S]*?)((?:\.{1,2}|[^\/]+?|)(\.[^.\/]*|))(?:[\/]*)$/;return splitPathRe.exec(filename).slice(1)},normalizeArray:(parts,allowAboveRoot)=>{var up=0;for(var i=parts.length-1;i>=0;i--){var last=parts[i];if(last==="."){parts.splice(i,1)}else if(last===".."){parts.splice(i,1);up++}else if(up){parts.splice(i,1);up--}}if(allowAboveRoot){for(;up;up--){parts.unshift("..")}}return parts},normalize:path=>{var isAbsolute=PATH.isAbs(path),trailingSlash=path.substr(-1)==="/";path=PATH.normalizeArray(path.split("/").filter(p=>!!p),!isAbsolute).join("/");if(!path&&!isAbsolute){path="."}if(path&&trailingSlash){path+="/"}return(isAbsolute?"/":"")+path},dirname:path=>{var result=PATH.splitPath(path),root=result[0],dir=result[1];if(!root&&!dir){return"."}if(dir){dir=dir.substr(0,dir.length-1)}return root+dir},basename:path=>{if(path==="/")return"/";path=PATH.normalize(path);path=path.replace(/\/$/,"");var lastSlash=path.lastIndexOf("/");if(lastSlash===-1)return path;return path.substr(lastSlash+1)},join:function(){var paths=Array.prototype.slice.call(arguments,0);return PATH.normalize(paths.join("/"))},join2:(l,r)=>{return PATH.normalize(l+"/"+r)}};function getRandomDevice(){if(typeof crypto=="object"&&typeof crypto["getRandomValues"]=="function"){var randomBuffer=new Uint8Array(1);return function(){crypto.getRandomValues(randomBuffer);return randomBuffer[0]}}else return function(){abort("randomDevice")}}var PATH_FS={resolve:function(){var resolvedPath="",resolvedAbsolute=false;for(var i=arguments.length-1;i>=-1&&!resolvedAbsolute;i--){var path=i>=0?arguments[i]:FS.cwd();if(typeof path!="string"){throw new TypeError("Arguments to path.resolve must be strings")}else 
if(!path){return""}resolvedPath=path+"/"+resolvedPath;resolvedAbsolute=PATH.isAbs(path)}resolvedPath=PATH.normalizeArray(resolvedPath.split("/").filter(p=>!!p),!resolvedAbsolute).join("/");return(resolvedAbsolute?"/":"")+resolvedPath||"."},relative:(from,to)=>{from=PATH_FS.resolve(from).substr(1);to=PATH_FS.resolve(to).substr(1);function trim(arr){var start=0;for(;start=0;end--){if(arr[end]!=="")break}if(start>end)return[];return arr.slice(start,end-start+1)}var fromParts=trim(from.split("/"));var toParts=trim(to.split("/"));var length=Math.min(fromParts.length,toParts.length);var samePartsLength=length;for(var i=0;i0){out(UTF8ArrayToString(tty.output,0));tty.output=[]}}},default_tty1_ops:{put_char:function(tty,val){if(val===null||val===10){err(UTF8ArrayToString(tty.output,0));tty.output=[]}else{if(val!=0)tty.output.push(val)}},flush:function(tty){if(tty.output&&tty.output.length>0){err(UTF8ArrayToString(tty.output,0));tty.output=[]}}}};function zeroMemory(address,size){HEAPU8.fill(0,address,address+size)}function mmapAlloc(size){abort()}var MEMFS={ops_table:null,mount:function(mount){return MEMFS.createNode(null,"/",16384|511,0)},createNode:function(parent,name,mode,dev){if(FS.isBlkdev(mode)||FS.isFIFO(mode)){throw new FS.ErrnoError(63)}if(!MEMFS.ops_table){MEMFS.ops_table={dir:{node:{getattr:MEMFS.node_ops.getattr,setattr:MEMFS.node_ops.setattr,lookup:MEMFS.node_ops.lookup,mknod:MEMFS.node_ops.mknod,rename:MEMFS.node_ops.rename,unlink:MEMFS.node_ops.unlink,rmdir:MEMFS.node_ops.rmdir,readdir:MEMFS.node_ops.readdir,symlink:MEMFS.node_ops.symlink},stream:{llseek:MEMFS.stream_ops.llseek}},file:{node:{getattr:MEMFS.node_ops.getattr,setattr:MEMFS.node_ops.setattr},stream:{llseek:MEMFS.stream_ops.llseek,read:MEMFS.stream_ops.read,write:MEMFS.stream_ops.write,allocate:MEMFS.stream_ops.allocate,mmap:MEMFS.stream_ops.mmap,msync:MEMFS.stream_ops.msync}},link:{node:{getattr:MEMFS.node_ops.getattr,setattr:MEMFS.node_ops.setattr,readlink:MEMFS.node_ops.readlink},stream:{}},chrdev:{node:{getattr:MEMFS.node_ops.getattr,setattr:MEMFS.node_ops.setattr},stream:FS.chrdev_stream_ops}}}var node=FS.createNode(parent,name,mode,dev);if(FS.isDir(node.mode)){node.node_ops=MEMFS.ops_table.dir.node;node.stream_ops=MEMFS.ops_table.dir.stream;node.contents={}}else if(FS.isFile(node.mode)){node.node_ops=MEMFS.ops_table.file.node;node.stream_ops=MEMFS.ops_table.file.stream;node.usedBytes=0;node.contents=null}else if(FS.isLink(node.mode)){node.node_ops=MEMFS.ops_table.link.node;node.stream_ops=MEMFS.ops_table.link.stream}else if(FS.isChrdev(node.mode)){node.node_ops=MEMFS.ops_table.chrdev.node;node.stream_ops=MEMFS.ops_table.chrdev.stream}node.timestamp=Date.now();if(parent){parent.contents[name]=node;parent.timestamp=node.timestamp}return node},getFileDataAsTypedArray:function(node){if(!node.contents)return new Uint8Array(0);if(node.contents.subarray)return node.contents.subarray(0,node.usedBytes);return new Uint8Array(node.contents)},expandFileStorage:function(node,newCapacity){var prevCapacity=node.contents?node.contents.length:0;if(prevCapacity>=newCapacity)return;var CAPACITY_DOUBLING_MAX=1024*1024;newCapacity=Math.max(newCapacity,prevCapacity*(prevCapacity>>0);if(prevCapacity!=0)newCapacity=Math.max(newCapacity,256);var oldContents=node.contents;node.contents=new Uint8Array(newCapacity);if(node.usedBytes>0)node.contents.set(oldContents.subarray(0,node.usedBytes),0)},resizeFileStorage:function(node,newSize){if(node.usedBytes==newSize)return;if(newSize==0){node.contents=null;node.usedBytes=0}else{var 
oldContents=node.contents;node.contents=new Uint8Array(newSize);if(oldContents){node.contents.set(oldContents.subarray(0,Math.min(newSize,node.usedBytes)))}node.usedBytes=newSize}},node_ops:{getattr:function(node){var attr={};attr.dev=FS.isChrdev(node.mode)?node.id:1;attr.ino=node.id;attr.mode=node.mode;attr.nlink=1;attr.uid=0;attr.gid=0;attr.rdev=node.rdev;if(FS.isDir(node.mode)){attr.size=4096}else if(FS.isFile(node.mode)){attr.size=node.usedBytes}else if(FS.isLink(node.mode)){attr.size=node.link.length}else{attr.size=0}attr.atime=new Date(node.timestamp);attr.mtime=new Date(node.timestamp);attr.ctime=new Date(node.timestamp);attr.blksize=4096;attr.blocks=Math.ceil(attr.size/attr.blksize);return attr},setattr:function(node,attr){if(attr.mode!==undefined){node.mode=attr.mode}if(attr.timestamp!==undefined){node.timestamp=attr.timestamp}if(attr.size!==undefined){MEMFS.resizeFileStorage(node,attr.size)}},lookup:function(parent,name){throw FS.genericErrors[44]},mknod:function(parent,name,mode,dev){return MEMFS.createNode(parent,name,mode,dev)},rename:function(old_node,new_dir,new_name){if(FS.isDir(old_node.mode)){var new_node;try{new_node=FS.lookupNode(new_dir,new_name)}catch(e){}if(new_node){for(var i in new_node.contents){throw new FS.ErrnoError(55)}}}delete old_node.parent.contents[old_node.name];old_node.parent.timestamp=Date.now();old_node.name=new_name;new_dir.contents[new_name]=old_node;new_dir.timestamp=old_node.parent.timestamp;old_node.parent=new_dir},unlink:function(parent,name){delete parent.contents[name];parent.timestamp=Date.now()},rmdir:function(parent,name){var node=FS.lookupNode(parent,name);for(var i in node.contents){throw new FS.ErrnoError(55)}delete parent.contents[name];parent.timestamp=Date.now()},readdir:function(node){var entries=[".",".."];for(var key in node.contents){if(!node.contents.hasOwnProperty(key)){continue}entries.push(key)}return entries},symlink:function(parent,newname,oldpath){var node=MEMFS.createNode(parent,newname,511|40960,0);node.link=oldpath;return node},readlink:function(node){if(!FS.isLink(node.mode)){throw new FS.ErrnoError(28)}return node.link}},stream_ops:{read:function(stream,buffer,offset,length,position){var contents=stream.node.contents;if(position>=stream.node.usedBytes)return 0;var size=Math.min(stream.node.usedBytes-position,length);if(size>8&&contents.subarray){buffer.set(contents.subarray(position,position+size),offset)}else{for(var i=0;i0||position+length{if(typeof indexedDB!="undefined")return indexedDB;var ret=null;if(typeof window=="object")ret=window.indexedDB||window.mozIndexedDB||window.webkitIndexedDB||window.msIndexedDB;assert(ret,"IDBFS used, but indexedDB not supported");return ret},DB_VERSION:21,DB_STORE_NAME:"FILE_DATA",mount:function(mount){return MEMFS.mount.apply(null,arguments)},syncfs:(mount,populate,callback)=>{IDBFS.getLocalSet(mount,(err,local)=>{if(err)return callback(err);IDBFS.getRemoteSet(mount,(err,remote)=>{if(err)return callback(err);var src=populate?remote:local;var dst=populate?local:remote;IDBFS.reconcile(src,dst,callback)})})},quit:()=>{Object.values(IDBFS.dbs).forEach(value=>value.close());IDBFS.dbs={}},getDB:(name,callback)=>{var db=IDBFS.dbs[name];if(db){return callback(null,db)}var req;try{req=IDBFS.indexedDB().open(name,IDBFS.DB_VERSION)}catch(e){return callback(e)}if(!req){return callback("Unable to connect to IndexedDB")}req.onupgradeneeded=e=>{var db=e.target.result;var transaction=e.target.transaction;var 
fileStore;if(db.objectStoreNames.contains(IDBFS.DB_STORE_NAME)){fileStore=transaction.objectStore(IDBFS.DB_STORE_NAME)}else{fileStore=db.createObjectStore(IDBFS.DB_STORE_NAME)}if(!fileStore.indexNames.contains("timestamp")){fileStore.createIndex("timestamp","timestamp",{unique:false})}};req.onsuccess=()=>{db=req.result;IDBFS.dbs[name]=db;callback(null,db)};req.onerror=e=>{callback(this.error);e.preventDefault()}},getLocalSet:(mount,callback)=>{var entries={};function isRealDir(p){return p!=="."&&p!==".."}function toAbsolute(root){return p=>{return PATH.join2(root,p)}}var check=FS.readdir(mount.mountpoint).filter(isRealDir).map(toAbsolute(mount.mountpoint));while(check.length){var path=check.pop();var stat;try{stat=FS.stat(path)}catch(e){return callback(e)}if(FS.isDir(stat.mode)){check.push.apply(check,FS.readdir(path).filter(isRealDir).map(toAbsolute(path)))}entries[path]={"timestamp":stat.mtime}}return callback(null,{type:"local",entries:entries})},getRemoteSet:(mount,callback)=>{var entries={};IDBFS.getDB(mount.mountpoint,(err,db)=>{if(err)return callback(err);try{var transaction=db.transaction([IDBFS.DB_STORE_NAME],"readonly");transaction.onerror=e=>{callback(this.error);e.preventDefault()};var store=transaction.objectStore(IDBFS.DB_STORE_NAME);var index=store.index("timestamp");index.openKeyCursor().onsuccess=event=>{var cursor=event.target.result;if(!cursor){return callback(null,{type:"remote",db:db,entries:entries})}entries[cursor.primaryKey]={"timestamp":cursor.key};cursor.continue()}}catch(e){return callback(e)}})},loadLocalEntry:(path,callback)=>{var stat,node;try{var lookup=FS.lookupPath(path);node=lookup.node;stat=FS.stat(path)}catch(e){return callback(e)}if(FS.isDir(stat.mode)){return callback(null,{"timestamp":stat.mtime,"mode":stat.mode})}else if(FS.isFile(stat.mode)){node.contents=MEMFS.getFileDataAsTypedArray(node);return callback(null,{"timestamp":stat.mtime,"mode":stat.mode,"contents":node.contents})}else{return callback(new Error("node type not supported"))}},storeLocalEntry:(path,entry,callback)=>{try{if(FS.isDir(entry["mode"])){FS.mkdirTree(path,entry["mode"])}else if(FS.isFile(entry["mode"])){FS.writeFile(path,entry["contents"],{canOwn:true})}else{return callback(new Error("node type not supported"))}FS.chmod(path,entry["mode"]);FS.utime(path,entry["timestamp"],entry["timestamp"])}catch(e){return callback(e)}callback(null)},removeLocalEntry:(path,callback)=>{try{var stat=FS.stat(path);if(FS.isDir(stat.mode)){FS.rmdir(path)}else if(FS.isFile(stat.mode)){FS.unlink(path)}}catch(e){return callback(e)}callback(null)},loadRemoteEntry:(store,path,callback)=>{var req=store.get(path);req.onsuccess=event=>{callback(null,event.target.result)};req.onerror=e=>{callback(this.error);e.preventDefault()}},storeRemoteEntry:(store,path,entry,callback)=>{try{var req=store.put(entry,path)}catch(e){callback(e);return}req.onsuccess=()=>{callback(null)};req.onerror=e=>{callback(this.error);e.preventDefault()}},removeRemoteEntry:(store,path,callback)=>{var req=store.delete(path);req.onsuccess=()=>{callback(null)};req.onerror=e=>{callback(this.error);e.preventDefault()}},reconcile:(src,dst,callback)=>{var total=0;var create=[];Object.keys(src.entries).forEach(function(key){var e=src.entries[key];var e2=dst.entries[key];if(!e2||e["timestamp"].getTime()!=e2["timestamp"].getTime()){create.push(key);total++}});var remove=[];Object.keys(dst.entries).forEach(function(key){if(!src.entries[key]){remove.push(key);total++}});if(!total){return callback(null)}var errored=false;var 
db=src.type==="remote"?src.db:dst.db;var transaction=db.transaction([IDBFS.DB_STORE_NAME],"readwrite");var store=transaction.objectStore(IDBFS.DB_STORE_NAME);function done(err){if(err&&!errored){errored=true;return callback(err)}}transaction.onerror=e=>{done(this.error);e.preventDefault()};transaction.oncomplete=e=>{if(!errored){callback(null)}};create.sort().forEach(path=>{if(dst.type==="local"){IDBFS.loadRemoteEntry(store,path,(err,entry)=>{if(err)return done(err);IDBFS.storeLocalEntry(path,entry,done)})}else{IDBFS.loadLocalEntry(path,(err,entry)=>{if(err)return done(err);IDBFS.storeRemoteEntry(store,path,entry,done)})}});remove.sort().reverse().forEach(path=>{if(dst.type==="local"){IDBFS.removeLocalEntry(path,done)}else{IDBFS.removeRemoteEntry(store,path,done)}})}};var FS={root:null,mounts:[],devices:{},streams:[],nextInode:1,nameTable:null,currentPath:"/",initialized:false,ignorePermissions:true,ErrnoError:null,genericErrors:{},filesystems:null,syncFSRequests:0,lookupPath:(path,opts={})=>{path=PATH_FS.resolve(FS.cwd(),path);if(!path)return{path:"",node:null};var defaults={follow_mount:true,recurse_count:0};opts=Object.assign(defaults,opts);if(opts.recurse_count>8){throw new FS.ErrnoError(32)}var parts=PATH.normalizeArray(path.split("/").filter(p=>!!p),false);var current=FS.root;var current_path="/";for(var i=0;i40){throw new FS.ErrnoError(32)}}}}return{path:current_path,node:current}},getPath:node=>{var path;while(true){if(FS.isRoot(node)){var mount=node.mount.mountpoint;if(!path)return mount;return mount[mount.length-1]!=="/"?mount+"/"+path:mount+path}path=path?node.name+"/"+path:node.name;node=node.parent}},hashName:(parentid,name)=>{var hash=0;for(var i=0;i>>0)%FS.nameTable.length},hashAddNode:node=>{var hash=FS.hashName(node.parent.id,node.name);node.name_next=FS.nameTable[hash];FS.nameTable[hash]=node},hashRemoveNode:node=>{var hash=FS.hashName(node.parent.id,node.name);if(FS.nameTable[hash]===node){FS.nameTable[hash]=node.name_next}else{var current=FS.nameTable[hash];while(current){if(current.name_next===node){current.name_next=node.name_next;break}current=current.name_next}}},lookupNode:(parent,name)=>{var errCode=FS.mayLookup(parent);if(errCode){throw new FS.ErrnoError(errCode,parent)}var hash=FS.hashName(parent.id,name);for(var node=FS.nameTable[hash];node;node=node.name_next){var nodeName=node.name;if(node.parent.id===parent.id&&nodeName===name){return node}}return FS.lookup(parent,name)},createNode:(parent,name,mode,rdev)=>{var node=new FS.FSNode(parent,name,mode,rdev);FS.hashAddNode(node);return node},destroyNode:node=>{FS.hashRemoveNode(node)},isRoot:node=>{return node===node.parent},isMountpoint:node=>{return!!node.mounted},isFile:mode=>{return(mode&61440)===32768},isDir:mode=>{return(mode&61440)===16384},isLink:mode=>{return(mode&61440)===40960},isChrdev:mode=>{return(mode&61440)===8192},isBlkdev:mode=>{return(mode&61440)===24576},isFIFO:mode=>{return(mode&61440)===4096},isSocket:mode=>{return(mode&49152)===49152},flagModes:{"r":0,"r+":2,"w":577,"w+":578,"a":1089,"a+":1090},modeStringToFlags:str=>{var flags=FS.flagModes[str];if(typeof flags=="undefined"){throw new Error("Unknown file open mode: "+str)}return flags},flagsToPermissionString:flag=>{var perms=["r","w","rw"][flag&3];if(flag&512){perms+="w"}return perms},nodePermissions:(node,perms)=>{if(FS.ignorePermissions){return 0}if(perms.includes("r")&&!(node.mode&292)){return 2}else if(perms.includes("w")&&!(node.mode&146)){return 2}else if(perms.includes("x")&&!(node.mode&73)){return 2}return 0},mayLookup:dir=>{var 
errCode=FS.nodePermissions(dir,"x");if(errCode)return errCode;if(!dir.node_ops.lookup)return 2;return 0},mayCreate:(dir,name)=>{try{var node=FS.lookupNode(dir,name);return 20}catch(e){}return FS.nodePermissions(dir,"wx")},mayDelete:(dir,name,isdir)=>{var node;try{node=FS.lookupNode(dir,name)}catch(e){return e.errno}var errCode=FS.nodePermissions(dir,"wx");if(errCode){return errCode}if(isdir){if(!FS.isDir(node.mode)){return 54}if(FS.isRoot(node)||FS.getPath(node)===FS.cwd()){return 10}}else{if(FS.isDir(node.mode)){return 31}}return 0},mayOpen:(node,flags)=>{if(!node){return 44}if(FS.isLink(node.mode)){return 32}else if(FS.isDir(node.mode)){if(FS.flagsToPermissionString(flags)!=="r"||flags&512){return 31}}return FS.nodePermissions(node,FS.flagsToPermissionString(flags))},MAX_OPEN_FDS:4096,nextfd:(fd_start=0,fd_end=FS.MAX_OPEN_FDS)=>{for(var fd=fd_start;fd<=fd_end;fd++){if(!FS.streams[fd]){return fd}}throw new FS.ErrnoError(33)},getStream:fd=>FS.streams[fd],createStream:(stream,fd_start,fd_end)=>{if(!FS.FSStream){FS.FSStream=function(){this.shared={}};FS.FSStream.prototype={};Object.defineProperties(FS.FSStream.prototype,{object:{get:function(){return this.node},set:function(val){this.node=val}},isRead:{get:function(){return(this.flags&2097155)!==1}},isWrite:{get:function(){return(this.flags&2097155)!==0}},isAppend:{get:function(){return this.flags&1024}},flags:{get:function(){return this.shared.flags},set:function(val){this.shared.flags=val}},position:{get:function(){return this.shared.position},set:function(val){this.shared.position=val}}})}stream=Object.assign(new FS.FSStream,stream);var fd=FS.nextfd(fd_start,fd_end);stream.fd=fd;FS.streams[fd]=stream;return stream},closeStream:fd=>{FS.streams[fd]=null},chrdev_stream_ops:{open:stream=>{var device=FS.getDevice(stream.node.rdev);stream.stream_ops=device.stream_ops;if(stream.stream_ops.open){stream.stream_ops.open(stream)}},llseek:()=>{throw new FS.ErrnoError(70)}},major:dev=>dev>>8,minor:dev=>dev&255,makedev:(ma,mi)=>ma<<8|mi,registerDevice:(dev,ops)=>{FS.devices[dev]={stream_ops:ops}},getDevice:dev=>FS.devices[dev],getMounts:mount=>{var mounts=[];var check=[mount];while(check.length){var m=check.pop();mounts.push(m);check.push.apply(check,m.mounts)}return mounts},syncfs:(populate,callback)=>{if(typeof populate=="function"){callback=populate;populate=false}FS.syncFSRequests++;if(FS.syncFSRequests>1){err("warning: "+FS.syncFSRequests+" FS.syncfs operations in flight at once, probably just doing extra work")}var mounts=FS.getMounts(FS.root.mount);var completed=0;function doCallback(errCode){FS.syncFSRequests--;return callback(errCode)}function done(errCode){if(errCode){if(!done.errored){done.errored=true;return doCallback(errCode)}return}if(++completed>=mounts.length){doCallback(null)}}mounts.forEach(mount=>{if(!mount.type.syncfs){return done(null)}mount.type.syncfs(mount,populate,done)})},mount:(type,opts,mountpoint)=>{var root=mountpoint==="/";var pseudo=!mountpoint;var node;if(root&&FS.root){throw new FS.ErrnoError(10)}else if(!root&&!pseudo){var lookup=FS.lookupPath(mountpoint,{follow_mount:false});mountpoint=lookup.path;node=lookup.node;if(FS.isMountpoint(node)){throw new FS.ErrnoError(10)}if(!FS.isDir(node.mode)){throw new FS.ErrnoError(54)}}var mount={type:type,opts:opts,mountpoint:mountpoint,mounts:[]};var mountRoot=type.mount(mount);mountRoot.mount=mount;mount.root=mountRoot;if(root){FS.root=mountRoot}else if(node){node.mounted=mount;if(node.mount){node.mount.mounts.push(mount)}}return mountRoot},unmount:mountpoint=>{var 
lookup=FS.lookupPath(mountpoint,{follow_mount:false});if(!FS.isMountpoint(lookup.node)){throw new FS.ErrnoError(28)}var node=lookup.node;var mount=node.mounted;var mounts=FS.getMounts(mount);Object.keys(FS.nameTable).forEach(hash=>{var current=FS.nameTable[hash];while(current){var next=current.name_next;if(mounts.includes(current.mount)){FS.destroyNode(current)}current=next}});node.mounted=null;var idx=node.mount.mounts.indexOf(mount);node.mount.mounts.splice(idx,1)},lookup:(parent,name)=>{return parent.node_ops.lookup(parent,name)},mknod:(path,mode,dev)=>{var lookup=FS.lookupPath(path,{parent:true});var parent=lookup.node;var name=PATH.basename(path);if(!name||name==="."||name===".."){throw new FS.ErrnoError(28)}var errCode=FS.mayCreate(parent,name);if(errCode){throw new FS.ErrnoError(errCode)}if(!parent.node_ops.mknod){throw new FS.ErrnoError(63)}return parent.node_ops.mknod(parent,name,mode,dev)},create:(path,mode)=>{mode=mode!==undefined?mode:438;mode&=4095;mode|=32768;return FS.mknod(path,mode,0)},mkdir:(path,mode)=>{mode=mode!==undefined?mode:511;mode&=511|512;mode|=16384;return FS.mknod(path,mode,0)},mkdirTree:(path,mode)=>{var dirs=path.split("/");var d="";for(var i=0;i{if(typeof dev=="undefined"){dev=mode;mode=438}mode|=8192;return FS.mknod(path,mode,dev)},symlink:(oldpath,newpath)=>{if(!PATH_FS.resolve(oldpath)){throw new FS.ErrnoError(44)}var lookup=FS.lookupPath(newpath,{parent:true});var parent=lookup.node;if(!parent){throw new FS.ErrnoError(44)}var newname=PATH.basename(newpath);var errCode=FS.mayCreate(parent,newname);if(errCode){throw new FS.ErrnoError(errCode)}if(!parent.node_ops.symlink){throw new FS.ErrnoError(63)}return parent.node_ops.symlink(parent,newname,oldpath)},rename:(old_path,new_path)=>{var old_dirname=PATH.dirname(old_path);var new_dirname=PATH.dirname(new_path);var old_name=PATH.basename(old_path);var new_name=PATH.basename(new_path);var lookup,old_dir,new_dir;lookup=FS.lookupPath(old_path,{parent:true});old_dir=lookup.node;lookup=FS.lookupPath(new_path,{parent:true});new_dir=lookup.node;if(!old_dir||!new_dir)throw new FS.ErrnoError(44);if(old_dir.mount!==new_dir.mount){throw new FS.ErrnoError(75)}var old_node=FS.lookupNode(old_dir,old_name);var relative=PATH_FS.relative(old_path,new_dirname);if(relative.charAt(0)!=="."){throw new FS.ErrnoError(28)}relative=PATH_FS.relative(new_path,old_dirname);if(relative.charAt(0)!=="."){throw new FS.ErrnoError(55)}var new_node;try{new_node=FS.lookupNode(new_dir,new_name)}catch(e){}if(old_node===new_node){return}var isdir=FS.isDir(old_node.mode);var errCode=FS.mayDelete(old_dir,old_name,isdir);if(errCode){throw new FS.ErrnoError(errCode)}errCode=new_node?FS.mayDelete(new_dir,new_name,isdir):FS.mayCreate(new_dir,new_name);if(errCode){throw new FS.ErrnoError(errCode)}if(!old_dir.node_ops.rename){throw new FS.ErrnoError(63)}if(FS.isMountpoint(old_node)||new_node&&FS.isMountpoint(new_node)){throw new FS.ErrnoError(10)}if(new_dir!==old_dir){errCode=FS.nodePermissions(old_dir,"w");if(errCode){throw new FS.ErrnoError(errCode)}}FS.hashRemoveNode(old_node);try{old_dir.node_ops.rename(old_node,new_dir,new_name)}catch(e){throw e}finally{FS.hashAddNode(old_node)}},rmdir:path=>{var lookup=FS.lookupPath(path,{parent:true});var parent=lookup.node;var name=PATH.basename(path);var node=FS.lookupNode(parent,name);var errCode=FS.mayDelete(parent,name,true);if(errCode){throw new FS.ErrnoError(errCode)}if(!parent.node_ops.rmdir){throw new FS.ErrnoError(63)}if(FS.isMountpoint(node)){throw new 
FS.ErrnoError(10)}parent.node_ops.rmdir(parent,name);FS.destroyNode(node)},readdir:path=>{var lookup=FS.lookupPath(path,{follow:true});var node=lookup.node;if(!node.node_ops.readdir){throw new FS.ErrnoError(54)}return node.node_ops.readdir(node)},unlink:path=>{var lookup=FS.lookupPath(path,{parent:true});var parent=lookup.node;if(!parent){throw new FS.ErrnoError(44)}var name=PATH.basename(path);var node=FS.lookupNode(parent,name);var errCode=FS.mayDelete(parent,name,false);if(errCode){throw new FS.ErrnoError(errCode)}if(!parent.node_ops.unlink){throw new FS.ErrnoError(63)}if(FS.isMountpoint(node)){throw new FS.ErrnoError(10)}parent.node_ops.unlink(parent,name);FS.destroyNode(node)},readlink:path=>{var lookup=FS.lookupPath(path);var link=lookup.node;if(!link){throw new FS.ErrnoError(44)}if(!link.node_ops.readlink){throw new FS.ErrnoError(28)}return PATH_FS.resolve(FS.getPath(link.parent),link.node_ops.readlink(link))},stat:(path,dontFollow)=>{var lookup=FS.lookupPath(path,{follow:!dontFollow});var node=lookup.node;if(!node){throw new FS.ErrnoError(44)}if(!node.node_ops.getattr){throw new FS.ErrnoError(63)}return node.node_ops.getattr(node)},lstat:path=>{return FS.stat(path,true)},chmod:(path,mode,dontFollow)=>{var node;if(typeof path=="string"){var lookup=FS.lookupPath(path,{follow:!dontFollow});node=lookup.node}else{node=path}if(!node.node_ops.setattr){throw new FS.ErrnoError(63)}node.node_ops.setattr(node,{mode:mode&4095|node.mode&~4095,timestamp:Date.now()})},lchmod:(path,mode)=>{FS.chmod(path,mode,true)},fchmod:(fd,mode)=>{var stream=FS.getStream(fd);if(!stream){throw new FS.ErrnoError(8)}FS.chmod(stream.node,mode)},chown:(path,uid,gid,dontFollow)=>{var node;if(typeof path=="string"){var lookup=FS.lookupPath(path,{follow:!dontFollow});node=lookup.node}else{node=path}if(!node.node_ops.setattr){throw new FS.ErrnoError(63)}node.node_ops.setattr(node,{timestamp:Date.now()})},lchown:(path,uid,gid)=>{FS.chown(path,uid,gid,true)},fchown:(fd,uid,gid)=>{var stream=FS.getStream(fd);if(!stream){throw new FS.ErrnoError(8)}FS.chown(stream.node,uid,gid)},truncate:(path,len)=>{if(len<0){throw new FS.ErrnoError(28)}var node;if(typeof path=="string"){var lookup=FS.lookupPath(path,{follow:true});node=lookup.node}else{node=path}if(!node.node_ops.setattr){throw new FS.ErrnoError(63)}if(FS.isDir(node.mode)){throw new FS.ErrnoError(31)}if(!FS.isFile(node.mode)){throw new FS.ErrnoError(28)}var errCode=FS.nodePermissions(node,"w");if(errCode){throw new FS.ErrnoError(errCode)}node.node_ops.setattr(node,{size:len,timestamp:Date.now()})},ftruncate:(fd,len)=>{var stream=FS.getStream(fd);if(!stream){throw new FS.ErrnoError(8)}if((stream.flags&2097155)===0){throw new FS.ErrnoError(28)}FS.truncate(stream.node,len)},utime:(path,atime,mtime)=>{var lookup=FS.lookupPath(path,{follow:true});var node=lookup.node;node.node_ops.setattr(node,{timestamp:Math.max(atime,mtime)})},open:(path,flags,mode)=>{if(path===""){throw new FS.ErrnoError(44)}flags=typeof flags=="string"?FS.modeStringToFlags(flags):flags;mode=typeof mode=="undefined"?438:mode;if(flags&64){mode=mode&4095|32768}else{mode=0}var node;if(typeof path=="object"){node=path}else{path=PATH.normalize(path);try{var lookup=FS.lookupPath(path,{follow:!(flags&131072)});node=lookup.node}catch(e){}}var created=false;if(flags&64){if(node){if(flags&128){throw new FS.ErrnoError(20)}}else{node=FS.mknod(path,mode,0);created=true}}if(!node){throw new FS.ErrnoError(44)}if(FS.isChrdev(node.mode)){flags&=~512}if(flags&65536&&!FS.isDir(node.mode)){throw new 
FS.ErrnoError(54)}if(!created){var errCode=FS.mayOpen(node,flags);if(errCode){throw new FS.ErrnoError(errCode)}}if(flags&512&&!created){FS.truncate(node,0)}flags&=~(128|512|131072);var stream=FS.createStream({node:node,path:FS.getPath(node),flags:flags,seekable:true,position:0,stream_ops:node.stream_ops,ungotten:[],error:false});if(stream.stream_ops.open){stream.stream_ops.open(stream)}if(Module["logReadFiles"]&&!(flags&1)){if(!FS.readFiles)FS.readFiles={};if(!(path in FS.readFiles)){FS.readFiles[path]=1}}return stream},close:stream=>{if(FS.isClosed(stream)){throw new FS.ErrnoError(8)}if(stream.getdents)stream.getdents=null;try{if(stream.stream_ops.close){stream.stream_ops.close(stream)}}catch(e){throw e}finally{FS.closeStream(stream.fd)}stream.fd=null},isClosed:stream=>{return stream.fd===null},llseek:(stream,offset,whence)=>{if(FS.isClosed(stream)){throw new FS.ErrnoError(8)}if(!stream.seekable||!stream.stream_ops.llseek){throw new FS.ErrnoError(70)}if(whence!=0&&whence!=1&&whence!=2){throw new FS.ErrnoError(28)}stream.position=stream.stream_ops.llseek(stream,offset,whence);stream.ungotten=[];return stream.position},read:(stream,buffer,offset,length,position)=>{if(length<0||position<0){throw new FS.ErrnoError(28)}if(FS.isClosed(stream)){throw new FS.ErrnoError(8)}if((stream.flags&2097155)===1){throw new FS.ErrnoError(8)}if(FS.isDir(stream.node.mode)){throw new FS.ErrnoError(31)}if(!stream.stream_ops.read){throw new FS.ErrnoError(28)}var seeking=typeof position!="undefined";if(!seeking){position=stream.position}else if(!stream.seekable){throw new FS.ErrnoError(70)}var bytesRead=stream.stream_ops.read(stream,buffer,offset,length,position);if(!seeking)stream.position+=bytesRead;return bytesRead},write:(stream,buffer,offset,length,position,canOwn)=>{if(length<0||position<0){throw new FS.ErrnoError(28)}if(FS.isClosed(stream)){throw new FS.ErrnoError(8)}if((stream.flags&2097155)===0){throw new FS.ErrnoError(8)}if(FS.isDir(stream.node.mode)){throw new FS.ErrnoError(31)}if(!stream.stream_ops.write){throw new FS.ErrnoError(28)}if(stream.seekable&&stream.flags&1024){FS.llseek(stream,0,2)}var seeking=typeof position!="undefined";if(!seeking){position=stream.position}else if(!stream.seekable){throw new FS.ErrnoError(70)}var bytesWritten=stream.stream_ops.write(stream,buffer,offset,length,position,canOwn);if(!seeking)stream.position+=bytesWritten;return bytesWritten},allocate:(stream,offset,length)=>{if(FS.isClosed(stream)){throw new FS.ErrnoError(8)}if(offset<0||length<=0){throw new FS.ErrnoError(28)}if((stream.flags&2097155)===0){throw new FS.ErrnoError(8)}if(!FS.isFile(stream.node.mode)&&!FS.isDir(stream.node.mode)){throw new FS.ErrnoError(43)}if(!stream.stream_ops.allocate){throw new FS.ErrnoError(138)}stream.stream_ops.allocate(stream,offset,length)},mmap:(stream,length,position,prot,flags)=>{if((prot&2)!==0&&(flags&2)===0&&(stream.flags&2097155)!==2){throw new FS.ErrnoError(2)}if((stream.flags&2097155)===1){throw new FS.ErrnoError(2)}if(!stream.stream_ops.mmap){throw new FS.ErrnoError(43)}return stream.stream_ops.mmap(stream,length,position,prot,flags)},msync:(stream,buffer,offset,length,mmapFlags)=>{if(!stream||!stream.stream_ops.msync){return 0}return stream.stream_ops.msync(stream,buffer,offset,length,mmapFlags)},munmap:stream=>0,ioctl:(stream,cmd,arg)=>{if(!stream.stream_ops.ioctl){throw new FS.ErrnoError(59)}return 
stream.stream_ops.ioctl(stream,cmd,arg)},readFile:(path,opts={})=>{opts.flags=opts.flags||0;opts.encoding=opts.encoding||"binary";if(opts.encoding!=="utf8"&&opts.encoding!=="binary"){throw new Error('Invalid encoding type "'+opts.encoding+'"')}var ret;var stream=FS.open(path,opts.flags);var stat=FS.stat(path);var length=stat.size;var buf=new Uint8Array(length);FS.read(stream,buf,0,length,0);if(opts.encoding==="utf8"){ret=UTF8ArrayToString(buf,0)}else if(opts.encoding==="binary"){ret=buf}FS.close(stream);return ret},writeFile:(path,data,opts={})=>{opts.flags=opts.flags||577;var stream=FS.open(path,opts.flags,opts.mode);if(typeof data=="string"){var buf=new Uint8Array(lengthBytesUTF8(data)+1);var actualNumBytes=stringToUTF8Array(data,buf,0,buf.length);FS.write(stream,buf,0,actualNumBytes,undefined,opts.canOwn)}else if(ArrayBuffer.isView(data)){FS.write(stream,data,0,data.byteLength,undefined,opts.canOwn)}else{throw new Error("Unsupported data type")}FS.close(stream)},cwd:()=>FS.currentPath,chdir:path=>{var lookup=FS.lookupPath(path,{follow:true});if(lookup.node===null){throw new FS.ErrnoError(44)}if(!FS.isDir(lookup.node.mode)){throw new FS.ErrnoError(54)}var errCode=FS.nodePermissions(lookup.node,"x");if(errCode){throw new FS.ErrnoError(errCode)}FS.currentPath=lookup.path},createDefaultDirectories:()=>{FS.mkdir("/tmp");FS.mkdir("/home");FS.mkdir("/home/web_user")},createDefaultDevices:()=>{FS.mkdir("/dev");FS.registerDevice(FS.makedev(1,3),{read:()=>0,write:(stream,buffer,offset,length,pos)=>length});FS.mkdev("/dev/null",FS.makedev(1,3));TTY.register(FS.makedev(5,0),TTY.default_tty_ops);TTY.register(FS.makedev(6,0),TTY.default_tty1_ops);FS.mkdev("/dev/tty",FS.makedev(5,0));FS.mkdev("/dev/tty1",FS.makedev(6,0));var random_device=getRandomDevice();FS.createDevice("/dev","random",random_device);FS.createDevice("/dev","urandom",random_device);FS.mkdir("/dev/shm");FS.mkdir("/dev/shm/tmp")},createSpecialDirectories:()=>{FS.mkdir("/proc");var proc_self=FS.mkdir("/proc/self");FS.mkdir("/proc/self/fd");FS.mount({mount:()=>{var node=FS.createNode(proc_self,"fd",16384|511,73);node.node_ops={lookup:(parent,name)=>{var fd=+name;var stream=FS.getStream(fd);if(!stream)throw new FS.ErrnoError(8);var ret={parent:null,mount:{mountpoint:"fake"},node_ops:{readlink:()=>stream.path}};ret.parent=ret;return ret}};return node}},{},"/proc/self/fd")},createStandardStreams:()=>{if(Module["stdin"]){FS.createDevice("/dev","stdin",Module["stdin"])}else{FS.symlink("/dev/tty","/dev/stdin")}if(Module["stdout"]){FS.createDevice("/dev","stdout",null,Module["stdout"])}else{FS.symlink("/dev/tty","/dev/stdout")}if(Module["stderr"]){FS.createDevice("/dev","stderr",null,Module["stderr"])}else{FS.symlink("/dev/tty1","/dev/stderr")}var stdin=FS.open("/dev/stdin",0);var stdout=FS.open("/dev/stdout",1);var stderr=FS.open("/dev/stderr",1)},ensureErrnoError:()=>{if(FS.ErrnoError)return;FS.ErrnoError=function ErrnoError(errno,node){this.node=node;this.setErrno=function(errno){this.errno=errno};this.setErrno(errno);this.message="FS error"};FS.ErrnoError.prototype=new Error;FS.ErrnoError.prototype.constructor=FS.ErrnoError;[44].forEach(code=>{FS.genericErrors[code]=new FS.ErrnoError(code);FS.genericErrors[code].stack=""})},staticInit:()=>{FS.ensureErrnoError();FS.nameTable=new 
Array(4096);FS.mount(MEMFS,{},"/");FS.createDefaultDirectories();FS.createDefaultDevices();FS.createSpecialDirectories();FS.filesystems={"MEMFS":MEMFS,"IDBFS":IDBFS}},init:(input,output,error)=>{FS.init.initialized=true;FS.ensureErrnoError();Module["stdin"]=input||Module["stdin"];Module["stdout"]=output||Module["stdout"];Module["stderr"]=error||Module["stderr"];FS.createStandardStreams()},quit:()=>{FS.init.initialized=false;_fflush(0);for(var i=0;i{var mode=0;if(canRead)mode|=292|73;if(canWrite)mode|=146;return mode},findObject:(path,dontResolveLastLink)=>{var ret=FS.analyzePath(path,dontResolveLastLink);if(ret.exists){return ret.object}else{return null}},analyzePath:(path,dontResolveLastLink)=>{try{var lookup=FS.lookupPath(path,{follow:!dontResolveLastLink});path=lookup.path}catch(e){}var ret={isRoot:false,exists:false,error:0,name:null,path:null,object:null,parentExists:false,parentPath:null,parentObject:null};try{var lookup=FS.lookupPath(path,{parent:true});ret.parentExists=true;ret.parentPath=lookup.path;ret.parentObject=lookup.node;ret.name=PATH.basename(path);lookup=FS.lookupPath(path,{follow:!dontResolveLastLink});ret.exists=true;ret.path=lookup.path;ret.object=lookup.node;ret.name=lookup.node.name;ret.isRoot=lookup.path==="/"}catch(e){ret.error=e.errno}return ret},createPath:(parent,path,canRead,canWrite)=>{parent=typeof parent=="string"?parent:FS.getPath(parent);var parts=path.split("/").reverse();while(parts.length){var part=parts.pop();if(!part)continue;var current=PATH.join2(parent,part);try{FS.mkdir(current)}catch(e){}parent=current}return current},createFile:(parent,name,properties,canRead,canWrite)=>{var path=PATH.join2(typeof parent=="string"?parent:FS.getPath(parent),name);var mode=FS.getMode(canRead,canWrite);return FS.create(path,mode)},createDataFile:(parent,name,data,canRead,canWrite,canOwn)=>{var path=name;if(parent){parent=typeof parent=="string"?parent:FS.getPath(parent);path=name?PATH.join2(parent,name):parent}var mode=FS.getMode(canRead,canWrite);var node=FS.create(path,mode);if(data){if(typeof data=="string"){var arr=new Array(data.length);for(var i=0,len=data.length;i{var path=PATH.join2(typeof parent=="string"?parent:FS.getPath(parent),name);var mode=FS.getMode(!!input,!!output);if(!FS.createDevice.major)FS.createDevice.major=64;var dev=FS.makedev(FS.createDevice.major++,0);FS.registerDevice(dev,{open:stream=>{stream.seekable=false},close:stream=>{if(output&&output.buffer&&output.buffer.length){output(10)}},read:(stream,buffer,offset,length,pos)=>{var bytesRead=0;for(var i=0;i{for(var i=0;i{if(obj.isDevice||obj.isFolder||obj.link||obj.contents)return true;if(typeof XMLHttpRequest!="undefined"){throw new Error("Lazy loading should have been performed (contents set) in createLazyFile, but it was not. Lazy loading only works in web workers. 
Use --embed-file or --preload-file in emcc on the main thread.")}else if(read_){try{obj.contents=intArrayFromString(read_(obj.url),true);obj.usedBytes=obj.contents.length}catch(e){throw new FS.ErrnoError(29)}}else{throw new Error("Cannot load without read() or XMLHttpRequest.")}},createLazyFile:(parent,name,url,canRead,canWrite)=>{function LazyUint8Array(){this.lengthKnown=false;this.chunks=[]}LazyUint8Array.prototype.get=function LazyUint8Array_get(idx){if(idx>this.length-1||idx<0){return undefined}var chunkOffset=idx%this.chunkSize;var chunkNum=idx/this.chunkSize|0;return this.getter(chunkNum)[chunkOffset]};LazyUint8Array.prototype.setDataGetter=function LazyUint8Array_setDataGetter(getter){this.getter=getter};LazyUint8Array.prototype.cacheLength=function LazyUint8Array_cacheLength(){var xhr=new XMLHttpRequest;xhr.open("HEAD",url,false);xhr.send(null);if(!(xhr.status>=200&&xhr.status<300||xhr.status===304))throw new Error("Couldn't load "+url+". Status: "+xhr.status);var datalength=Number(xhr.getResponseHeader("Content-length"));var header;var hasByteServing=(header=xhr.getResponseHeader("Accept-Ranges"))&&header==="bytes";var usesGzip=(header=xhr.getResponseHeader("Content-Encoding"))&&header==="gzip";var chunkSize=1024*1024;if(!hasByteServing)chunkSize=datalength;var doXHR=(from,to)=>{if(from>to)throw new Error("invalid range ("+from+", "+to+") or no bytes requested!");if(to>datalength-1)throw new Error("only "+datalength+" bytes available! programmer error!");var xhr=new XMLHttpRequest;xhr.open("GET",url,false);if(datalength!==chunkSize)xhr.setRequestHeader("Range","bytes="+from+"-"+to);xhr.responseType="arraybuffer";if(xhr.overrideMimeType){xhr.overrideMimeType("text/plain; charset=x-user-defined")}xhr.send(null);if(!(xhr.status>=200&&xhr.status<300||xhr.status===304))throw new Error("Couldn't load "+url+". Status: "+xhr.status);if(xhr.response!==undefined){return new Uint8Array(xhr.response||[])}else{return intArrayFromString(xhr.responseText||"",true)}};var lazyArray=this;lazyArray.setDataGetter(chunkNum=>{var start=chunkNum*chunkSize;var end=(chunkNum+1)*chunkSize-1;end=Math.min(end,datalength-1);if(typeof lazyArray.chunks[chunkNum]=="undefined"){lazyArray.chunks[chunkNum]=doXHR(start,end)}if(typeof lazyArray.chunks[chunkNum]=="undefined")throw new Error("doXHR failed!");return lazyArray.chunks[chunkNum]});if(usesGzip||!datalength){chunkSize=datalength=1;datalength=this.getter(0).length;chunkSize=datalength;out("LazyFiles on gzip forces download of the whole file when length is accessed")}this._length=datalength;this._chunkSize=chunkSize;this.lengthKnown=true};if(typeof XMLHttpRequest!="undefined"){if(!ENVIRONMENT_IS_WORKER)throw"Cannot do synchronous binary XHRs outside webworkers in modern browsers. 
Use --embed-file or --preload-file in emcc";var lazyArray=new LazyUint8Array;Object.defineProperties(lazyArray,{length:{get:function(){if(!this.lengthKnown){this.cacheLength()}return this._length}},chunkSize:{get:function(){if(!this.lengthKnown){this.cacheLength()}return this._chunkSize}}});var properties={isDevice:false,contents:lazyArray}}else{var properties={isDevice:false,url:url}}var node=FS.createFile(parent,name,properties,canRead,canWrite);if(properties.contents){node.contents=properties.contents}else if(properties.url){node.contents=null;node.url=properties.url}Object.defineProperties(node,{usedBytes:{get:function(){return this.contents.length}}});var stream_ops={};var keys=Object.keys(node.stream_ops);keys.forEach(key=>{var fn=node.stream_ops[key];stream_ops[key]=function forceLoadLazyFile(){FS.forceLoadFile(node);return fn.apply(null,arguments)}});function writeChunks(stream,buffer,offset,length,position){var contents=stream.node.contents;if(position>=contents.length)return 0;var size=Math.min(contents.length-position,length);if(contents.slice){for(var i=0;i{FS.forceLoadFile(node);return writeChunks(stream,buffer,offset,length,position)};stream_ops.mmap=(stream,length,position,prot,flags)=>{FS.forceLoadFile(node);var ptr=mmapAlloc(length);if(!ptr){throw new FS.ErrnoError(48)}writeChunks(stream,HEAP8,ptr,length,position);return{ptr:ptr,allocated:true}};node.stream_ops=stream_ops;return node},createPreloadedFile:(parent,name,url,canRead,canWrite,onload,onerror,dontCreateFile,canOwn,preFinish)=>{var fullname=name?PATH_FS.resolve(PATH.join2(parent,name)):parent;var dep=getUniqueRunDependency("cp "+fullname);function processData(byteArray){function finish(byteArray){if(preFinish)preFinish();if(!dontCreateFile){FS.createDataFile(parent,name,byteArray,canRead,canWrite,canOwn)}if(onload)onload();removeRunDependency(dep)}if(Browser.handledByPreloadPlugin(byteArray,fullname,finish,()=>{if(onerror)onerror();removeRunDependency(dep)})){return}finish(byteArray)}addRunDependency(dep);if(typeof url=="string"){asyncLoad(url,byteArray=>processData(byteArray),onerror)}else{processData(url)}},indexedDB:()=>{return window.indexedDB||window.mozIndexedDB||window.webkitIndexedDB||window.msIndexedDB},DB_NAME:()=>{return"EM_FS_"+window.location.pathname},DB_VERSION:20,DB_STORE_NAME:"FILE_DATA",saveFilesToDB:(paths,onload,onerror)=>{onload=onload||(()=>{});onerror=onerror||(()=>{});var indexedDB=FS.indexedDB();try{var openRequest=indexedDB.open(FS.DB_NAME(),FS.DB_VERSION)}catch(e){return onerror(e)}openRequest.onupgradeneeded=()=>{out("creating db");var db=openRequest.result;db.createObjectStore(FS.DB_STORE_NAME)};openRequest.onsuccess=()=>{var db=openRequest.result;var transaction=db.transaction([FS.DB_STORE_NAME],"readwrite");var files=transaction.objectStore(FS.DB_STORE_NAME);var ok=0,fail=0,total=paths.length;function finish(){if(fail==0)onload();else onerror()}paths.forEach(path=>{var putRequest=files.put(FS.analyzePath(path).object.contents,path);putRequest.onsuccess=()=>{ok++;if(ok+fail==total)finish()};putRequest.onerror=()=>{fail++;if(ok+fail==total)finish()}});transaction.onerror=onerror};openRequest.onerror=onerror},loadFilesFromDB:(paths,onload,onerror)=>{onload=onload||(()=>{});onerror=onerror||(()=>{});var indexedDB=FS.indexedDB();try{var openRequest=indexedDB.open(FS.DB_NAME(),FS.DB_VERSION)}catch(e){return onerror(e)}openRequest.onupgradeneeded=onerror;openRequest.onsuccess=()=>{var db=openRequest.result;try{var 
transaction=db.transaction([FS.DB_STORE_NAME],"readonly")}catch(e){onerror(e);return}var files=transaction.objectStore(FS.DB_STORE_NAME);var ok=0,fail=0,total=paths.length;function finish(){if(fail==0)onload();else onerror()}paths.forEach(path=>{var getRequest=files.get(path);getRequest.onsuccess=()=>{if(FS.analyzePath(path).exists){FS.unlink(path)}FS.createDataFile(PATH.dirname(path),PATH.basename(path),getRequest.result,true,true,true);ok++;if(ok+fail==total)finish()};getRequest.onerror=()=>{fail++;if(ok+fail==total)finish()}});transaction.onerror=onerror};openRequest.onerror=onerror}};var SYSCALLS={DEFAULT_POLLMASK:5,calculateAt:function(dirfd,path,allowEmpty){if(PATH.isAbs(path)){return path}var dir;if(dirfd===-100){dir=FS.cwd()}else{var dirstream=FS.getStream(dirfd);if(!dirstream)throw new FS.ErrnoError(8);dir=dirstream.path}if(path.length==0){if(!allowEmpty){throw new FS.ErrnoError(44)}return dir}return PATH.join2(dir,path)},doStat:function(func,path,buf){try{var stat=func(path)}catch(e){if(e&&e.node&&PATH.normalize(path)!==PATH.normalize(FS.getPath(e.node))){return-54}throw e}HEAP32[buf>>2]=stat.dev;HEAP32[buf+4>>2]=0;HEAP32[buf+8>>2]=stat.ino;HEAP32[buf+12>>2]=stat.mode;HEAP32[buf+16>>2]=stat.nlink;HEAP32[buf+20>>2]=stat.uid;HEAP32[buf+24>>2]=stat.gid;HEAP32[buf+28>>2]=stat.rdev;HEAP32[buf+32>>2]=0;tempI64=[stat.size>>>0,(tempDouble=stat.size,+Math.abs(tempDouble)>=1?tempDouble>0?(Math.min(+Math.floor(tempDouble/4294967296),4294967295)|0)>>>0:~~+Math.ceil((tempDouble-+(~~tempDouble>>>0))/4294967296)>>>0:0)],HEAP32[buf+40>>2]=tempI64[0],HEAP32[buf+44>>2]=tempI64[1];HEAP32[buf+48>>2]=4096;HEAP32[buf+52>>2]=stat.blocks;HEAP32[buf+56>>2]=stat.atime.getTime()/1e3|0;HEAP32[buf+60>>2]=0;HEAP32[buf+64>>2]=stat.mtime.getTime()/1e3|0;HEAP32[buf+68>>2]=0;HEAP32[buf+72>>2]=stat.ctime.getTime()/1e3|0;HEAP32[buf+76>>2]=0;tempI64=[stat.ino>>>0,(tempDouble=stat.ino,+Math.abs(tempDouble)>=1?tempDouble>0?(Math.min(+Math.floor(tempDouble/4294967296),4294967295)|0)>>>0:~~+Math.ceil((tempDouble-+(~~tempDouble>>>0))/4294967296)>>>0:0)],HEAP32[buf+80>>2]=tempI64[0],HEAP32[buf+84>>2]=tempI64[1];return 0},doMsync:function(addr,stream,len,flags,offset){var buffer=HEAPU8.slice(addr,addr+len);FS.msync(stream,buffer,offset,len,flags)},varargs:undefined,get:function(){SYSCALLS.varargs+=4;var ret=HEAP32[SYSCALLS.varargs-4>>2];return ret},getStr:function(ptr){var ret=UTF8ToString(ptr);return ret},getStreamFromFD:function(fd){var stream=FS.getStream(fd);if(!stream)throw new FS.ErrnoError(8);return stream}};function ___syscall__newselect(nfds,readfds,writefds,exceptfds,timeout){try{var total=0;var srcReadLow=readfds?HEAP32[readfds>>2]:0,srcReadHigh=readfds?HEAP32[readfds+4>>2]:0;var srcWriteLow=writefds?HEAP32[writefds>>2]:0,srcWriteHigh=writefds?HEAP32[writefds+4>>2]:0;var srcExceptLow=exceptfds?HEAP32[exceptfds>>2]:0,srcExceptHigh=exceptfds?HEAP32[exceptfds+4>>2]:0;var dstReadLow=0,dstReadHigh=0;var dstWriteLow=0,dstWriteHigh=0;var dstExceptLow=0,dstExceptHigh=0;var allLow=(readfds?HEAP32[readfds>>2]:0)|(writefds?HEAP32[writefds>>2]:0)|(exceptfds?HEAP32[exceptfds>>2]:0);var allHigh=(readfds?HEAP32[readfds+4>>2]:0)|(writefds?HEAP32[writefds+4>>2]:0)|(exceptfds?HEAP32[exceptfds+4>>2]:0);var check=function(fd,low,high,val){return fd<32?low&val:high&val};for(var fd=0;fd>2]=dstReadLow;HEAP32[readfds+4>>2]=dstReadHigh}if(writefds){HEAP32[writefds>>2]=dstWriteLow;HEAP32[writefds+4>>2]=dstWriteHigh}if(exceptfds){HEAP32[exceptfds>>2]=dstExceptLow;HEAP32[exceptfds+4>>2]=dstExceptHigh}return total}catch(e){if(typeof 
FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}var SOCKFS={mount:function(mount){Module["websocket"]=Module["websocket"]&&"object"===typeof Module["websocket"]?Module["websocket"]:{};Module["websocket"]._callbacks={};Module["websocket"]["on"]=function(event,callback){if("function"===typeof callback){this._callbacks[event]=callback}return this};Module["websocket"].emit=function(event,param){if("function"===typeof this._callbacks[event]){this._callbacks[event].call(this,param)}};return FS.createNode(null,"/",16384|511,0)},createSocket:function(family,type,protocol){type&=~526336;var streaming=type==1;if(streaming&&protocol&&protocol!=6){throw new FS.ErrnoError(66)}var sock={family:family,type:type,protocol:protocol,server:null,error:null,peers:{},pending:[],recv_queue:[],sock_ops:SOCKFS.websocket_sock_ops};var name=SOCKFS.nextname();var node=FS.createNode(SOCKFS.root,name,49152,0);node.sock=sock;var stream=FS.createStream({path:name,node:node,flags:2,seekable:false,stream_ops:SOCKFS.stream_ops});sock.stream=stream;return sock},getSocket:function(fd){var stream=FS.getStream(fd);if(!stream||!FS.isSocket(stream.node.mode)){return null}return stream.node.sock},stream_ops:{poll:function(stream){var sock=stream.node.sock;return sock.sock_ops.poll(sock)},ioctl:function(stream,request,varargs){var sock=stream.node.sock;return sock.sock_ops.ioctl(sock,request,varargs)},read:function(stream,buffer,offset,length,position){var sock=stream.node.sock;var msg=sock.sock_ops.recvmsg(sock,length);if(!msg){return 0}buffer.set(msg.buffer,offset);return msg.buffer.length},write:function(stream,buffer,offset,length,position){var sock=stream.node.sock;return sock.sock_ops.sendmsg(sock,buffer,offset,length)},close:function(stream){var sock=stream.node.sock;sock.sock_ops.close(sock)}},nextname:function(){if(!SOCKFS.nextname.current){SOCKFS.nextname.current=0}return"socket["+SOCKFS.nextname.current+++"]"},websocket_sock_ops:{createPeer:function(sock,addr,port){var ws;if(typeof addr=="object"){ws=addr;addr=null;port=null}if(ws){if(ws._socket){addr=ws._socket.remoteAddress;port=ws._socket.remotePort}else{var result=/ws[s]?:\/\/([^:]+):(\d+)/.exec(ws.url);if(!result){throw new Error("WebSocket URL must be in the format ws(s)://address:port")}addr=result[1];port=parseInt(result[2],10)}}else{try{var runtimeConfig=Module["websocket"]&&"object"===typeof Module["websocket"];var url="ws:#".replace("#","//");if(runtimeConfig){if("string"===typeof Module["websocket"]["url"]){url=Module["websocket"]["url"]}}if(url==="ws://"||url==="wss://"){var parts=addr.split("/");url=url+parts[0]+":"+port+"/"+parts.slice(1).join("/")}var subProtocols="binary";if(runtimeConfig){if("string"===typeof Module["websocket"]["subprotocol"]){subProtocols=Module["websocket"]["subprotocol"]}}var opts=undefined;if(subProtocols!=="null"){subProtocols=subProtocols.replace(/^ +| +$/g,"").split(/ *, */);opts=subProtocols}if(runtimeConfig&&null===Module["websocket"]["subprotocol"]){subProtocols="null";opts=undefined}var WebSocketConstructor;{WebSocketConstructor=WebSocket}ws=new WebSocketConstructor(url,opts);ws.binaryType="arraybuffer"}catch(e){throw new FS.ErrnoError(23)}}var peer={addr:addr,port:port,socket:ws,dgram_send_queue:[]};SOCKFS.websocket_sock_ops.addPeer(sock,peer);SOCKFS.websocket_sock_ops.handlePeerEvents(sock,peer);if(sock.type===2&&typeof sock.sport!="undefined"){peer.dgram_send_queue.push(new 
Uint8Array([255,255,255,255,"p".charCodeAt(0),"o".charCodeAt(0),"r".charCodeAt(0),"t".charCodeAt(0),(sock.sport&65280)>>8,sock.sport&255]))}return peer},getPeer:function(sock,addr,port){return sock.peers[addr+":"+port]},addPeer:function(sock,peer){sock.peers[peer.addr+":"+peer.port]=peer},removePeer:function(sock,peer){delete sock.peers[peer.addr+":"+peer.port]},handlePeerEvents:function(sock,peer){var first=true;var handleOpen=function(){Module["websocket"].emit("open",sock.stream.fd);try{var queued=peer.dgram_send_queue.shift();while(queued){peer.socket.send(queued);queued=peer.dgram_send_queue.shift()}}catch(e){peer.socket.close()}};function handleMessage(data){if(typeof data=="string"){var encoder=new TextEncoder;data=encoder.encode(data)}else{assert(data.byteLength!==undefined);if(data.byteLength==0){return}else{data=new Uint8Array(data)}}var wasfirst=first;first=false;if(wasfirst&&data.length===10&&data[0]===255&&data[1]===255&&data[2]===255&&data[3]===255&&data[4]==="p".charCodeAt(0)&&data[5]==="o".charCodeAt(0)&&data[6]==="r".charCodeAt(0)&&data[7]==="t".charCodeAt(0)){var newport=data[8]<<8|data[9];SOCKFS.websocket_sock_ops.removePeer(sock,peer);peer.port=newport;SOCKFS.websocket_sock_ops.addPeer(sock,peer);return}sock.recv_queue.push({addr:peer.addr,port:peer.port,data:data});Module["websocket"].emit("message",sock.stream.fd)}if(ENVIRONMENT_IS_NODE){peer.socket.on("open",handleOpen);peer.socket.on("message",function(data,isBinary){if(!isBinary){return}handleMessage(new Uint8Array(data).buffer)});peer.socket.on("close",function(){Module["websocket"].emit("close",sock.stream.fd)});peer.socket.on("error",function(error){sock.error=14;Module["websocket"].emit("error",[sock.stream.fd,sock.error,"ECONNREFUSED: Connection refused"])})}else{peer.socket.onopen=handleOpen;peer.socket.onclose=function(){Module["websocket"].emit("close",sock.stream.fd)};peer.socket.onmessage=function peer_socket_onmessage(event){handleMessage(event.data)};peer.socket.onerror=function(error){sock.error=14;Module["websocket"].emit("error",[sock.stream.fd,sock.error,"ECONNREFUSED: Connection refused"])}}},poll:function(sock){if(sock.type===1&&sock.server){return sock.pending.length?64|1:0}var mask=0;var dest=sock.type===1?SOCKFS.websocket_sock_ops.getPeer(sock,sock.daddr,sock.dport):null;if(sock.recv_queue.length||!dest||dest&&dest.socket.readyState===dest.socket.CLOSING||dest&&dest.socket.readyState===dest.socket.CLOSED){mask|=64|1}if(!dest||dest&&dest.socket.readyState===dest.socket.OPEN){mask|=4}if(dest&&dest.socket.readyState===dest.socket.CLOSING||dest&&dest.socket.readyState===dest.socket.CLOSED){mask|=16}return mask},ioctl:function(sock,request,arg){switch(request){case 21531:var bytes=0;if(sock.recv_queue.length){bytes=sock.recv_queue[0].data.length}HEAP32[arg>>2]=bytes;return 0;default:return 28}},close:function(sock){if(sock.server){try{sock.server.close()}catch(e){}sock.server=null}var peers=Object.keys(sock.peers);for(var i=0;i>2]=value;return value}function inetPton4(str){var b=str.split(".");for(var i=0;i<4;i++){var tmp=Number(b[i]);if(isNaN(tmp))return null;b[i]=tmp}return(b[0]|b[1]<<8|b[2]<<16|b[3]<<24)>>>0}function jstoi_q(str){return parseInt(str)}function inetPton6(str){var words;var w,offset,z;var valid6regx=/^((?=.*::)(?!.*::.+::)(::)?([\dA-F]{1,4}:(:|\b)|){5}|([\dA-F]{1,4}:){6})((([\dA-F]{1,4}((?!\3)::|:\b|$))|(?!\2\3)){2}|(((2[0-4]|1\d|[1-9])?\d|25[0-5])\.?\b){4})$/i;var parts=[];if(!valid6regx.test(str)){return 
null}if(str==="::"){return[0,0,0,0,0,0,0,0]}if(str.startsWith("::")){str=str.replace("::","Z:")}else{str=str.replace("::",":Z:")}if(str.indexOf(".")>0){str=str.replace(new RegExp("[.]","g"),":");words=str.split(":");words[words.length-4]=jstoi_q(words[words.length-4])+jstoi_q(words[words.length-3])*256;words[words.length-3]=jstoi_q(words[words.length-2])+jstoi_q(words[words.length-1])*256;words=words.slice(0,words.length-2)}else{words=str.split(":")}offset=0;z=0;for(w=0;w>2]=16}HEAP16[sa>>1]=family;HEAP32[sa+4>>2]=addr;HEAP16[sa+2>>1]=_htons(port);break;case 10:addr=inetPton6(addr);zeroMemory(sa,28);if(addrlen){HEAP32[addrlen>>2]=28}HEAP32[sa>>2]=family;HEAP32[sa+8>>2]=addr[0];HEAP32[sa+12>>2]=addr[1];HEAP32[sa+16>>2]=addr[2];HEAP32[sa+20>>2]=addr[3];HEAP16[sa+2>>1]=_htons(port);break;default:return 5}return 0}var DNS={address_map:{id:1,addrs:{},names:{}},lookup_name:function(name){var res=inetPton4(name);if(res!==null){return name}res=inetPton6(name);if(res!==null){return name}var addr;if(DNS.address_map.addrs[name]){addr=DNS.address_map.addrs[name]}else{var id=DNS.address_map.id++;assert(id<65535,"exceeded max address mappings of 65535");addr="172.29."+(id&255)+"."+(id&65280);DNS.address_map.names[addr]=name;DNS.address_map.addrs[name]=addr}return addr},lookup_addr:function(addr){if(DNS.address_map.names[addr]){return DNS.address_map.names[addr]}return null}};function ___syscall_accept4(fd,addr,addrlen,flags){try{var sock=getSocketFromFD(fd);var newsock=sock.sock_ops.accept(sock);if(addr){var errno=writeSockaddr(addr,newsock.family,DNS.lookup_name(newsock.daddr),newsock.dport,addrlen)}return newsock.stream.fd}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function inetNtop4(addr){return(addr&255)+"."+(addr>>8&255)+"."+(addr>>16&255)+"."+(addr>>24&255)}function inetNtop6(ints){var str="";var word=0;var longest=0;var lastzero=0;var zstart=0;var len=0;var i=0;var parts=[ints[0]&65535,ints[0]>>16,ints[1]&65535,ints[1]>>16,ints[2]&65535,ints[2]>>16,ints[3]&65535,ints[3]>>16];var hasipv4=true;var v4part="";for(i=0;i<5;i++){if(parts[i]!==0){hasipv4=false;break}}if(hasipv4){v4part=inetNtop4(parts[6]|parts[7]<<16);if(parts[5]===-1){str="::ffff:";str+=v4part;return str}if(parts[5]===0){str="::";if(v4part==="0.0.0.0")v4part="";if(v4part==="0.0.0.1")v4part="1";str+=v4part;return str}}for(word=0;word<8;word++){if(parts[word]===0){if(word-lastzero>1){len=0}lastzero=word;len++}if(len>longest){longest=len;zstart=word-longest+1}}for(word=0;word<8;word++){if(longest>1){if(parts[word]===0&&word>=zstart&&word>1];var port=_ntohs(HEAPU16[sa+2>>1]);var addr;switch(family){case 2:if(salen!==16){return{errno:28}}addr=HEAP32[sa+4>>2];addr=inetNtop4(addr);break;case 10:if(salen!==28){return{errno:28}}addr=[HEAP32[sa+8>>2],HEAP32[sa+12>>2],HEAP32[sa+16>>2],HEAP32[sa+20>>2]];addr=inetNtop6(addr);break;default:return{errno:5}}return{family:family,addr:addr,port:port}}function getSocketAddress(addrp,addrlen,allowNull){if(allowNull&&addrp===0)return null;var info=readSockaddr(addrp,addrlen);if(info.errno)throw new FS.ErrnoError(info.errno);info.addr=DNS.lookup_addr(info.addr)||info.addr;return info}function ___syscall_bind(fd,addr,addrlen){try{var sock=getSocketFromFD(fd);var info=getSocketAddress(addr,addrlen);sock.sock_ops.bind(sock,info.addr,info.port);return 0}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_chdir(path){try{path=SYSCALLS.getStr(path);FS.chdir(path);return 0}catch(e){if(typeof FS=="undefined"||!(e 
instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_chmod(path,mode){try{path=SYSCALLS.getStr(path);FS.chmod(path,mode);return 0}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_connect(fd,addr,addrlen){try{var sock=getSocketFromFD(fd);var info=getSocketAddress(addr,addrlen);sock.sock_ops.connect(sock,info.addr,info.port);return 0}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_faccessat(dirfd,path,amode,flags){try{path=SYSCALLS.getStr(path);path=SYSCALLS.calculateAt(dirfd,path);if(amode&~7){return-28}var lookup=FS.lookupPath(path,{follow:true});var node=lookup.node;if(!node){return-44}var perms="";if(amode&4)perms+="r";if(amode&2)perms+="w";if(amode&1)perms+="x";if(perms&&FS.nodePermissions(node,perms)){return-2}return 0}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_fcntl64(fd,cmd,varargs){SYSCALLS.varargs=varargs;try{var stream=SYSCALLS.getStreamFromFD(fd);switch(cmd){case 0:{var arg=SYSCALLS.get();if(arg<0){return-28}var newStream;newStream=FS.createStream(stream,arg);return newStream.fd}case 1:case 2:return 0;case 3:return stream.flags;case 4:{var arg=SYSCALLS.get();stream.flags|=arg;return 0}case 5:{var arg=SYSCALLS.get();var offset=0;HEAP16[arg+offset>>1]=2;return 0}case 6:case 7:return 0;case 16:case 8:return-28;case 9:setErrNo(28);return-1;default:{return-28}}}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_getcwd(buf,size){try{if(size===0)return-28;var cwd=FS.cwd();var cwdLengthInBytes=lengthBytesUTF8(cwd)+1;if(size>>0,(tempDouble=id,+Math.abs(tempDouble)>=1?tempDouble>0?(Math.min(+Math.floor(tempDouble/4294967296),4294967295)|0)>>>0:~~+Math.ceil((tempDouble-+(~~tempDouble>>>0))/4294967296)>>>0:0)],HEAP32[dirp+pos>>2]=tempI64[0],HEAP32[dirp+pos+4>>2]=tempI64[1];tempI64=[(idx+1)*struct_size>>>0,(tempDouble=(idx+1)*struct_size,+Math.abs(tempDouble)>=1?tempDouble>0?(Math.min(+Math.floor(tempDouble/4294967296),4294967295)|0)>>>0:~~+Math.ceil((tempDouble-+(~~tempDouble>>>0))/4294967296)>>>0:0)],HEAP32[dirp+pos+8>>2]=tempI64[0],HEAP32[dirp+pos+12>>2]=tempI64[1];HEAP16[dirp+pos+16>>1]=280;HEAP8[dirp+pos+18>>0]=type;stringToUTF8(name,dirp+pos+19,256);pos+=struct_size;idx+=1}FS.llseek(stream,idx*struct_size,0);return pos}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_getsockname(fd,addr,addrlen){try{err("__syscall_getsockname "+fd);var sock=getSocketFromFD(fd);var errno=writeSockaddr(addr,sock.family,DNS.lookup_name(sock.saddr||"0.0.0.0"),sock.sport,addrlen);return 0}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_getsockopt(fd,level,optname,optval,optlen){try{var sock=getSocketFromFD(fd);if(level===1){if(optname===4){HEAP32[optval>>2]=sock.error;HEAP32[optlen>>2]=4;sock.error=null;return 0}}return-50}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_ioctl(fd,op,varargs){SYSCALLS.varargs=varargs;try{var stream=SYSCALLS.getStreamFromFD(fd);switch(op){case 21509:case 21505:{if(!stream.tty)return-59;return 0}case 21510:case 21511:case 21512:case 21506:case 21507:case 21508:{if(!stream.tty)return-59;return 0}case 21519:{if(!stream.tty)return-59;var argp=SYSCALLS.get();HEAP32[argp>>2]=0;return 0}case 21520:{if(!stream.tty)return-59;return-28}case 
21531:{var argp=SYSCALLS.get();return FS.ioctl(stream,op,argp)}case 21523:{if(!stream.tty)return-59;return 0}case 21524:{if(!stream.tty)return-59;return 0}default:abort("bad ioctl syscall "+op)}}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_listen(fd,backlog){try{var sock=getSocketFromFD(fd);sock.sock_ops.listen(sock,backlog);return 0}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_lstat64(path,buf){try{path=SYSCALLS.getStr(path);return SYSCALLS.doStat(FS.lstat,path,buf)}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_mkdirat(dirfd,path,mode){try{path=SYSCALLS.getStr(path);path=SYSCALLS.calculateAt(dirfd,path);path=PATH.normalize(path);if(path[path.length-1]==="/")path=path.substr(0,path.length-1);FS.mkdir(path,mode,0);return 0}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_newfstatat(dirfd,path,buf,flags){try{path=SYSCALLS.getStr(path);var nofollow=flags&256;var allowEmpty=flags&4096;flags=flags&~4352;path=SYSCALLS.calculateAt(dirfd,path,allowEmpty);return SYSCALLS.doStat(nofollow?FS.lstat:FS.stat,path,buf)}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_openat(dirfd,path,flags,varargs){SYSCALLS.varargs=varargs;try{path=SYSCALLS.getStr(path);path=SYSCALLS.calculateAt(dirfd,path);var mode=varargs?SYSCALLS.get():0;return FS.open(path,flags,mode).fd}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_poll(fds,nfds,timeout){try{var nonzero=0;for(var i=0;i>2];var events=HEAP16[pollfd+4>>1];var mask=32;var stream=FS.getStream(fd);if(stream){mask=SYSCALLS.DEFAULT_POLLMASK;if(stream.stream_ops.poll){mask=stream.stream_ops.poll(stream)}}mask&=events|8|16;if(mask)nonzero++;HEAP16[pollfd+6>>1]=mask}return nonzero}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_readlinkat(dirfd,path,buf,bufsize){try{path=SYSCALLS.getStr(path);path=SYSCALLS.calculateAt(dirfd,path);if(bufsize<=0)return-28;var ret=FS.readlink(path);var len=Math.min(bufsize,lengthBytesUTF8(ret));var endChar=HEAP8[buf+len];stringToUTF8(ret,buf,bufsize+1);HEAP8[buf+len]=endChar;return len}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_recvfrom(fd,buf,len,flags,addr,addrlen){try{var sock=getSocketFromFD(fd);var msg=sock.sock_ops.recvmsg(sock,len);if(!msg)return 0;if(addr){var errno=writeSockaddr(addr,sock.family,DNS.lookup_name(msg.addr),msg.port,addrlen)}HEAPU8.set(msg.buffer,buf);return msg.buffer.byteLength}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_renameat(olddirfd,oldpath,newdirfd,newpath){try{oldpath=SYSCALLS.getStr(oldpath);newpath=SYSCALLS.getStr(newpath);oldpath=SYSCALLS.calculateAt(olddirfd,oldpath);newpath=SYSCALLS.calculateAt(newdirfd,newpath);FS.rename(oldpath,newpath);return 0}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_rmdir(path){try{path=SYSCALLS.getStr(path);FS.rmdir(path);return 0}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_sendto(fd,message,length,flags,addr,addr_len){try{var sock=getSocketFromFD(fd);var 
dest=getSocketAddress(addr,addr_len,true);if(!dest){return FS.write(sock.stream,HEAP8,message,length)}else{return sock.sock_ops.sendmsg(sock,HEAP8,message,length,dest.addr,dest.port)}}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_socket(domain,type,protocol){try{var sock=SOCKFS.createSocket(domain,type,protocol);return sock.stream.fd}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_stat64(path,buf){try{path=SYSCALLS.getStr(path);return SYSCALLS.doStat(FS.stat,path,buf)}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_statfs64(path,size,buf){try{path=SYSCALLS.getStr(path);HEAP32[buf+4>>2]=4096;HEAP32[buf+40>>2]=4096;HEAP32[buf+8>>2]=1e6;HEAP32[buf+12>>2]=5e5;HEAP32[buf+16>>2]=5e5;HEAP32[buf+20>>2]=FS.nextInode;HEAP32[buf+24>>2]=1e6;HEAP32[buf+28>>2]=42;HEAP32[buf+44>>2]=2;HEAP32[buf+36>>2]=255;return 0}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_symlink(target,linkpath){try{target=SYSCALLS.getStr(target);linkpath=SYSCALLS.getStr(linkpath);FS.symlink(target,linkpath);return 0}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_unlinkat(dirfd,path,flags){try{path=SYSCALLS.getStr(path);path=SYSCALLS.calculateAt(dirfd,path);if(flags===0){FS.unlink(path)}else if(flags===512){FS.rmdir(path)}else{abort("Invalid flags passed to unlinkat")}return 0}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function __dlinit(main_dso_handle){}var dlopenMissingError="To use dlopen, you need enable dynamic linking, see https://github.com/emscripten-core/emscripten/wiki/Linking";function __dlopen_js(filename,flag){abort(dlopenMissingError)}function __dlsym_js(handle,symbol){abort(dlopenMissingError)}function __emscripten_date_now(){return Date.now()}var nowIsMonotonic=true;function __emscripten_get_now_is_monotonic(){return nowIsMonotonic}function __emscripten_throw_longjmp(){throw Infinity}function __gmtime_js(time,tmPtr){var date=new Date(HEAP32[time>>2]*1e3);HEAP32[tmPtr>>2]=date.getUTCSeconds();HEAP32[tmPtr+4>>2]=date.getUTCMinutes();HEAP32[tmPtr+8>>2]=date.getUTCHours();HEAP32[tmPtr+12>>2]=date.getUTCDate();HEAP32[tmPtr+16>>2]=date.getUTCMonth();HEAP32[tmPtr+20>>2]=date.getUTCFullYear()-1900;HEAP32[tmPtr+24>>2]=date.getUTCDay();var start=Date.UTC(date.getUTCFullYear(),0,1,0,0,0,0);var yday=(date.getTime()-start)/(1e3*60*60*24)|0;HEAP32[tmPtr+28>>2]=yday}function __localtime_js(time,tmPtr){var date=new Date(HEAP32[time>>2]*1e3);HEAP32[tmPtr>>2]=date.getSeconds();HEAP32[tmPtr+4>>2]=date.getMinutes();HEAP32[tmPtr+8>>2]=date.getHours();HEAP32[tmPtr+12>>2]=date.getDate();HEAP32[tmPtr+16>>2]=date.getMonth();HEAP32[tmPtr+20>>2]=date.getFullYear()-1900;HEAP32[tmPtr+24>>2]=date.getDay();var start=new Date(date.getFullYear(),0,1);var yday=(date.getTime()-start.getTime())/(1e3*60*60*24)|0;HEAP32[tmPtr+28>>2]=yday;HEAP32[tmPtr+36>>2]=-(date.getTimezoneOffset()*60);var summerOffset=new Date(date.getFullYear(),6,1).getTimezoneOffset();var winterOffset=start.getTimezoneOffset();var dst=(summerOffset!=winterOffset&&date.getTimezoneOffset()==Math.min(winterOffset,summerOffset))|0;HEAP32[tmPtr+32>>2]=dst}function _tzset_impl(timezone,daylight,tzname){var currentYear=(new Date).getFullYear();var winter=new Date(currentYear,0,1);var summer=new Date(currentYear,6,1);var 
winterOffset=winter.getTimezoneOffset();var summerOffset=summer.getTimezoneOffset();var stdTimezoneOffset=Math.max(winterOffset,summerOffset);HEAP32[timezone>>2]=stdTimezoneOffset*60;HEAP32[daylight>>2]=Number(winterOffset!=summerOffset);function extractZone(date){var match=date.toTimeString().match(/\(([A-Za-z ]+)\)$/);return match?match[1]:"GMT"}var winterName=extractZone(winter);var summerName=extractZone(summer);var winterNamePtr=allocateUTF8(winterName);var summerNamePtr=allocateUTF8(summerName);if(summerOffset>2]=winterNamePtr;HEAPU32[tzname+4>>2]=summerNamePtr}else{HEAPU32[tzname>>2]=summerNamePtr;HEAPU32[tzname+4>>2]=winterNamePtr}}function __tzset_js(timezone,daylight,tzname){if(__tzset_js.called)return;__tzset_js.called=true;_tzset_impl(timezone,daylight,tzname)}function _abort(){abort("")}function _emscripten_set_main_loop_timing(mode,value){Browser.mainLoop.timingMode=mode;Browser.mainLoop.timingValue=value;if(!Browser.mainLoop.func){return 1}if(!Browser.mainLoop.running){runtimeKeepalivePush();Browser.mainLoop.running=true}if(mode==0){Browser.mainLoop.scheduler=function Browser_mainLoop_scheduler_setTimeout(){var timeUntilNextTick=Math.max(0,Browser.mainLoop.tickStartTime+value-_emscripten_get_now())|0;setTimeout(Browser.mainLoop.runner,timeUntilNextTick)};Browser.mainLoop.method="timeout"}else if(mode==1){Browser.mainLoop.scheduler=function Browser_mainLoop_scheduler_rAF(){Browser.requestAnimationFrame(Browser.mainLoop.runner)};Browser.mainLoop.method="rAF"}else if(mode==2){if(typeof setImmediate=="undefined"){var setImmediates=[];var emscriptenMainLoopMessageId="setimmediate";var Browser_setImmediate_messageHandler=function(event){if(event.data===emscriptenMainLoopMessageId||event.data.target===emscriptenMainLoopMessageId){event.stopPropagation();setImmediates.shift()()}};addEventListener("message",Browser_setImmediate_messageHandler,true);setImmediate=function Browser_emulated_setImmediate(func){setImmediates.push(func);if(ENVIRONMENT_IS_WORKER){if(Module["setImmediates"]===undefined)Module["setImmediates"]=[];Module["setImmediates"].push(func);postMessage({target:emscriptenMainLoopMessageId})}else postMessage(emscriptenMainLoopMessageId,"*")}}Browser.mainLoop.scheduler=function Browser_mainLoop_scheduler_setImmediate(){setImmediate(Browser.mainLoop.runner)};Browser.mainLoop.method="immediate"}return 0}var _emscripten_get_now;_emscripten_get_now=()=>performance.now();function _emscripten_webgl_do_commit_frame(){if(!GL.currentContext||!GL.currentContext.GLctx){return-3}if(GL.currentContext.defaultFbo){GL.blitOffscreenFramebuffer(GL.currentContext);return 0}if(!GL.currentContext.attributes.explicitSwapControl){return-3}return 0}function _emscripten_webgl_commit_frame(){return _emscripten_webgl_do_commit_frame()}function runtimeKeepalivePush(){runtimeKeepaliveCounter+=1}function _exit(status){exit(status)}function maybeExit(){if(!keepRuntimeAlive()){try{_exit(EXITSTATUS)}catch(e){handleException(e)}}}function setMainLoop(browserIterationFunc,fps,simulateInfiniteLoop,arg,noSetTiming){assert(!Browser.mainLoop.func,"emscripten_set_main_loop: there can only be one main loop function at once: call emscripten_cancel_main_loop to cancel the previous one before setting a new one with different parameters.");Browser.mainLoop.func=browserIterationFunc;Browser.mainLoop.arg=arg;var thisMainLoopId=Browser.mainLoop.currentlyRunningMainloop;function checkIsRunning(){if(thisMainLoopId0){var start=Date.now();var 
blocker=Browser.mainLoop.queue.shift();blocker.func(blocker.arg);if(Browser.mainLoop.remainingBlockers){var remaining=Browser.mainLoop.remainingBlockers;var next=remaining%1==0?remaining-1:Math.floor(remaining);if(blocker.counted){Browser.mainLoop.remainingBlockers=next}else{next=next+.5;Browser.mainLoop.remainingBlockers=(8*remaining+next)/9}}out('main loop blocker "'+blocker.name+'" took '+(Date.now()-start)+" ms");Browser.mainLoop.updateStatus();if(!checkIsRunning())return;setTimeout(Browser.mainLoop.runner,0);return}if(!checkIsRunning())return;Browser.mainLoop.currentFrameNumber=Browser.mainLoop.currentFrameNumber+1|0;if(Browser.mainLoop.timingMode==1&&Browser.mainLoop.timingValue>1&&Browser.mainLoop.currentFrameNumber%Browser.mainLoop.timingValue!=0){Browser.mainLoop.scheduler();return}else if(Browser.mainLoop.timingMode==0){Browser.mainLoop.tickStartTime=_emscripten_get_now()}Browser.mainLoop.runIter(browserIterationFunc);if(!checkIsRunning())return;if(typeof SDL=="object"&&SDL.audio&&SDL.audio.queueNewAudioData)SDL.audio.queueNewAudioData();Browser.mainLoop.scheduler()};if(!noSetTiming){if(fps&&fps>0)_emscripten_set_main_loop_timing(0,1e3/fps);else _emscripten_set_main_loop_timing(1,1);Browser.mainLoop.scheduler()}if(simulateInfiniteLoop){throw"unwind"}}function callUserCallback(func,synchronous){if(runtimeExited||ABORT){return}if(synchronous){func();return}try{func();maybeExit()}catch(e){handleException(e)}}function runtimeKeepalivePop(){runtimeKeepaliveCounter-=1}function safeSetTimeout(func,timeout){runtimeKeepalivePush();return setTimeout(function(){runtimeKeepalivePop();callUserCallback(func)},timeout)}var Browser={mainLoop:{running:false,scheduler:null,method:"",currentlyRunningMainloop:0,func:null,arg:0,timingMode:0,timingValue:0,currentFrameNumber:0,queue:[],pause:function(){Browser.mainLoop.scheduler=null;Browser.mainLoop.currentlyRunningMainloop++},resume:function(){Browser.mainLoop.currentlyRunningMainloop++;var timingMode=Browser.mainLoop.timingMode;var timingValue=Browser.mainLoop.timingValue;var func=Browser.mainLoop.func;Browser.mainLoop.func=null;setMainLoop(func,0,false,Browser.mainLoop.arg,true);_emscripten_set_main_loop_timing(timingMode,timingValue);Browser.mainLoop.scheduler()},updateStatus:function(){if(Module["setStatus"]){var message=Module["statusMessage"]||"Please wait...";var remaining=Browser.mainLoop.remainingBlockers;var expected=Browser.mainLoop.expectedBlockers;if(remaining){if(remaining{assert(img.complete,"Image "+name+" could not be decoded");var canvas=document.createElement("canvas");canvas.width=img.width;canvas.height=img.height;var ctx=canvas.getContext("2d");ctx.drawImage(img,0,0);preloadedImages[name]=canvas;Browser.URLObject.revokeObjectURL(url);if(onload)onload(byteArray)};img.onerror=event=>{out("Image "+url+" could not be decoded");if(onerror)onerror()};img.src=url};Module["preloadPlugins"].push(imagePlugin);var audioPlugin={};audioPlugin["canHandle"]=function audioPlugin_canHandle(name){return!Module.noAudioDecoding&&name.substr(-4)in{".ogg":1,".wav":1,".mp3":1}};audioPlugin["handle"]=function audioPlugin_handle(byteArray,name,onload,onerror){var done=false;function finish(audio){if(done)return;done=true;preloadedAudios[name]=audio;if(onload)onload(byteArray)}function fail(){if(done)return;done=true;preloadedAudios[name]=new Audio;if(onerror)onerror()}if(Browser.hasBlobConstructor){try{var b=new Blob([byteArray],{type:Browser.getMimetype(name)})}catch(e){return fail()}var url=Browser.URLObject.createObjectURL(b);var audio=new 
Audio;audio.addEventListener("canplaythrough",function(){finish(audio)},false);audio.onerror=function audio_onerror(event){if(done)return;out("warning: browser could not fully decode audio "+name+", trying slower base64 approach");function encode64(data){var BASE="ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";var PAD="=";var ret="";var leftchar=0;var leftbits=0;for(var i=0;i=6){var curr=leftchar>>leftbits-6&63;leftbits-=6;ret+=BASE[curr]}}if(leftbits==2){ret+=BASE[(leftchar&3)<<4];ret+=PAD+PAD}else if(leftbits==4){ret+=BASE[(leftchar&15)<<2];ret+=PAD}return ret}audio.src="data:audio/x-"+name.substr(-3)+";base64,"+encode64(byteArray);finish(audio)};audio.src=url;safeSetTimeout(function(){finish(audio)},1e4)}else{return fail()}};Module["preloadPlugins"].push(audioPlugin);function pointerLockChange(){Browser.pointerLock=document["pointerLockElement"]===Module["canvas"]||document["mozPointerLockElement"]===Module["canvas"]||document["webkitPointerLockElement"]===Module["canvas"]||document["msPointerLockElement"]===Module["canvas"]}var canvas=Module["canvas"];if(canvas){canvas.requestPointerLock=canvas["requestPointerLock"]||canvas["mozRequestPointerLock"]||canvas["webkitRequestPointerLock"]||canvas["msRequestPointerLock"]||function(){};canvas.exitPointerLock=document["exitPointerLock"]||document["mozExitPointerLock"]||document["webkitExitPointerLock"]||document["msExitPointerLock"]||function(){};canvas.exitPointerLock=canvas.exitPointerLock.bind(document);document.addEventListener("pointerlockchange",pointerLockChange,false);document.addEventListener("mozpointerlockchange",pointerLockChange,false);document.addEventListener("webkitpointerlockchange",pointerLockChange,false);document.addEventListener("mspointerlockchange",pointerLockChange,false);if(Module["elementPointerLock"]){canvas.addEventListener("click",function(ev){if(!Browser.pointerLock&&Module["canvas"].requestPointerLock){Module["canvas"].requestPointerLock();ev.preventDefault()}},false)}}},handledByPreloadPlugin:function(byteArray,fullname,finish,onerror){Browser.init();var handled=false;Module["preloadPlugins"].forEach(function(plugin){if(handled)return;if(plugin["canHandle"](fullname)){plugin["handle"](byteArray,fullname,finish,onerror);handled=true}});return handled},createContext:function(canvas,useWebGL,setInModule,webGLContextAttributes){if(useWebGL&&Module.ctx&&canvas==Module.canvas)return Module.ctx;var ctx;var contextHandle;if(useWebGL){var contextAttributes={antialias:false,alpha:false,majorVersion:typeof WebGL2RenderingContext!="undefined"?2:1};if(webGLContextAttributes){for(var attribute in webGLContextAttributes){contextAttributes[attribute]=webGLContextAttributes[attribute]}}if(typeof GL!="undefined"){contextHandle=GL.createContext(canvas,contextAttributes);if(contextHandle){ctx=GL.getContext(contextHandle).GLctx}}}else{ctx=canvas.getContext("2d")}if(!ctx)return null;if(setInModule){if(!useWebGL)assert(typeof GLctx=="undefined","cannot set in module if GLctx is used, but we are a non-GL context that would replace it");Module.ctx=ctx;if(useWebGL)GL.makeContextCurrent(contextHandle);Module.useWebGL=useWebGL;Browser.moduleContextCreatedCallbacks.forEach(function(callback){callback()});Browser.init()}return ctx},destroyContext:function(canvas,useWebGL,setInModule){},fullscreenHandlersInstalled:false,lockPointer:undefined,resizeCanvas:undefined,requestFullscreen:function(lockPointer,resizeCanvas){Browser.lockPointer=lockPointer;Browser.resizeCanvas=resizeCanvas;if(typeof 
Browser.lockPointer=="undefined")Browser.lockPointer=true;if(typeof Browser.resizeCanvas=="undefined")Browser.resizeCanvas=false;var canvas=Module["canvas"];function fullscreenChange(){Browser.isFullscreen=false;var canvasContainer=canvas.parentNode;if((document["fullscreenElement"]||document["mozFullScreenElement"]||document["msFullscreenElement"]||document["webkitFullscreenElement"]||document["webkitCurrentFullScreenElement"])===canvasContainer){canvas.exitFullscreen=Browser.exitFullscreen;if(Browser.lockPointer)canvas.requestPointerLock();Browser.isFullscreen=true;if(Browser.resizeCanvas){Browser.setFullscreenCanvasSize()}else{Browser.updateCanvasDimensions(canvas)}}else{canvasContainer.parentNode.insertBefore(canvas,canvasContainer);canvasContainer.parentNode.removeChild(canvasContainer);if(Browser.resizeCanvas){Browser.setWindowedCanvasSize()}else{Browser.updateCanvasDimensions(canvas)}}if(Module["onFullScreen"])Module["onFullScreen"](Browser.isFullscreen);if(Module["onFullscreen"])Module["onFullscreen"](Browser.isFullscreen)}if(!Browser.fullscreenHandlersInstalled){Browser.fullscreenHandlersInstalled=true;document.addEventListener("fullscreenchange",fullscreenChange,false);document.addEventListener("mozfullscreenchange",fullscreenChange,false);document.addEventListener("webkitfullscreenchange",fullscreenChange,false);document.addEventListener("MSFullscreenChange",fullscreenChange,false)}var canvasContainer=document.createElement("div");canvas.parentNode.insertBefore(canvasContainer,canvas);canvasContainer.appendChild(canvas);canvasContainer.requestFullscreen=canvasContainer["requestFullscreen"]||canvasContainer["mozRequestFullScreen"]||canvasContainer["msRequestFullscreen"]||(canvasContainer["webkitRequestFullscreen"]?function(){canvasContainer["webkitRequestFullscreen"](Element["ALLOW_KEYBOARD_INPUT"])}:null)||(canvasContainer["webkitRequestFullScreen"]?function(){canvasContainer["webkitRequestFullScreen"](Element["ALLOW_KEYBOARD_INPUT"])}:null);canvasContainer.requestFullscreen()},exitFullscreen:function(){if(!Browser.isFullscreen){return false}var CFS=document["exitFullscreen"]||document["cancelFullScreen"]||document["mozCancelFullScreen"]||document["msExitFullscreen"]||document["webkitCancelFullScreen"]||function(){};CFS.apply(document,[]);return true},nextRAF:0,fakeRequestAnimationFrame:function(func){var now=Date.now();if(Browser.nextRAF===0){Browser.nextRAF=now+1e3/60}else{while(now+2>=Browser.nextRAF){Browser.nextRAF+=1e3/60}}var delay=Math.max(Browser.nextRAF-now,0);setTimeout(func,delay)},requestAnimationFrame:function(func){if(typeof requestAnimationFrame=="function"){requestAnimationFrame(func);return}var RAF=Browser.fakeRequestAnimationFrame;RAF(func)},safeSetTimeout:function(func){return safeSetTimeout(func)},safeRequestAnimationFrame:function(func){runtimeKeepalivePush();return Browser.requestAnimationFrame(function(){runtimeKeepalivePop();callUserCallback(func)})},getMimetype:function(name){return{"jpg":"image/jpeg","jpeg":"image/jpeg","png":"image/png","bmp":"image/bmp","ogg":"audio/ogg","wav":"audio/wav","mp3":"audio/mpeg"}[name.substr(name.lastIndexOf(".")+1)]},getUserMedia:function(func){if(!window.getUserMedia){window.getUserMedia=navigator["getUserMedia"]||navigator["mozGetUserMedia"]}window.getUserMedia(func)},getMovementX:function(event){return event["movementX"]||event["mozMovementX"]||event["webkitMovementX"]||0},getMovementY:function(event){return event["movementY"]||event["mozMovementY"]||event["webkitMovementY"]||0},getMouseWheelDelta:function(event){var 
delta=0;switch(event.type){case"DOMMouseScroll":delta=event.detail/3;break;case"mousewheel":delta=event.wheelDelta/120;break;case"wheel":delta=event.deltaY;switch(event.deltaMode){case 0:delta/=100;break;case 1:delta/=3;break;case 2:delta*=80;break;default:throw"unrecognized mouse wheel delta mode: "+event.deltaMode}break;default:throw"unrecognized mouse wheel event: "+event.type}return delta},mouseX:0,mouseY:0,mouseMovementX:0,mouseMovementY:0,touches:{},lastTouches:{},calculateMouseEvent:function(event){if(Browser.pointerLock){if(event.type!="mousemove"&&"mozMovementX"in event){Browser.mouseMovementX=Browser.mouseMovementY=0}else{Browser.mouseMovementX=Browser.getMovementX(event);Browser.mouseMovementY=Browser.getMovementY(event)}if(typeof SDL!="undefined"){Browser.mouseX=SDL.mouseX+Browser.mouseMovementX;Browser.mouseY=SDL.mouseY+Browser.mouseMovementY}else{Browser.mouseX+=Browser.mouseMovementX;Browser.mouseY+=Browser.mouseMovementY}}else{var rect=Module["canvas"].getBoundingClientRect();var cw=Module["canvas"].width;var ch=Module["canvas"].height;var scrollX=typeof window.scrollX!="undefined"?window.scrollX:window.pageXOffset;var scrollY=typeof window.scrollY!="undefined"?window.scrollY:window.pageYOffset;if(event.type==="touchstart"||event.type==="touchend"||event.type==="touchmove"){var touch=event.touch;if(touch===undefined){return}var adjustedX=touch.pageX-(scrollX+rect.left);var adjustedY=touch.pageY-(scrollY+rect.top);adjustedX=adjustedX*(cw/rect.width);adjustedY=adjustedY*(ch/rect.height);var coords={x:adjustedX,y:adjustedY};if(event.type==="touchstart"){Browser.lastTouches[touch.identifier]=coords;Browser.touches[touch.identifier]=coords}else if(event.type==="touchend"||event.type==="touchmove"){var last=Browser.touches[touch.identifier];if(!last)last=coords;Browser.lastTouches[touch.identifier]=last;Browser.touches[touch.identifier]=coords}return}var x=event.pageX-(scrollX+rect.left);var y=event.pageY-(scrollY+rect.top);x=x*(cw/rect.width);y=y*(ch/rect.height);Browser.mouseMovementX=x-Browser.mouseX;Browser.mouseMovementY=y-Browser.mouseY;Browser.mouseX=x;Browser.mouseY=y}},resizeListeners:[],updateResizeListeners:function(){var canvas=Module["canvas"];Browser.resizeListeners.forEach(function(listener){listener(canvas.width,canvas.height)})},setCanvasSize:function(width,height,noUpdates){var canvas=Module["canvas"];Browser.updateCanvasDimensions(canvas,width,height);if(!noUpdates)Browser.updateResizeListeners()},windowedWidth:0,windowedHeight:0,setFullscreenCanvasSize:function(){if(typeof SDL!="undefined"){var flags=HEAPU32[SDL.screen>>2];flags=flags|8388608;HEAP32[SDL.screen>>2]=flags}Browser.updateCanvasDimensions(Module["canvas"]);Browser.updateResizeListeners()},setWindowedCanvasSize:function(){if(typeof SDL!="undefined"){var flags=HEAPU32[SDL.screen>>2];flags=flags&~8388608;HEAP32[SDL.screen>>2]=flags}Browser.updateCanvasDimensions(Module["canvas"]);Browser.updateResizeListeners()},updateCanvasDimensions:function(canvas,wNative,hNative){if(wNative&&hNative){canvas.widthNative=wNative;canvas.heightNative=hNative}else{wNative=canvas.widthNative;hNative=canvas.heightNative}var w=wNative;var h=hNative;if(Module["forcedAspectRatio"]&&Module["forcedAspectRatio"]>0){if(w/h>2]:-1;source+=UTF8ToString(HEAP32[string+i*4>>2],len<0?undefined:len)}return 
source},createContext:function(canvas,webGLContextAttributes){if(webGLContextAttributes.renderViaOffscreenBackBuffer)webGLContextAttributes["preserveDrawingBuffer"]=true;if(!canvas.getContextSafariWebGL2Fixed){canvas.getContextSafariWebGL2Fixed=canvas.getContext;function fixedGetContext(ver,attrs){var gl=canvas.getContextSafariWebGL2Fixed(ver,attrs);return ver=="webgl"==gl instanceof WebGLRenderingContext?gl:null}canvas.getContext=fixedGetContext}var ctx=webGLContextAttributes.majorVersion>1?canvas.getContext("webgl2",webGLContextAttributes):canvas.getContext("webgl",webGLContextAttributes);if(!ctx)return 0;var handle=GL.registerContext(ctx,webGLContextAttributes);return handle},enableOffscreenFramebufferAttributes:function(webGLContextAttributes){webGLContextAttributes.renderViaOffscreenBackBuffer=true;webGLContextAttributes.preserveDrawingBuffer=true},createOffscreenFramebuffer:function(context){var gl=context.GLctx;var fbo=gl.createFramebuffer();gl.bindFramebuffer(36160,fbo);context.defaultFbo=fbo;context.defaultFboForbidBlitFramebuffer=false;if(gl.getContextAttributes().antialias){context.defaultFboForbidBlitFramebuffer=true}else{var firefoxMatch=navigator.userAgent.toLowerCase().match(/firefox\/(\d\d)/);if(firefoxMatch!=null){var firefoxVersion=firefoxMatch[1];context.defaultFboForbidBlitFramebuffer=firefoxVersion<67}}context.defaultColorTarget=gl.createTexture();context.defaultDepthTarget=gl.createRenderbuffer();GL.resizeOffscreenFramebuffer(context);gl.bindTexture(3553,context.defaultColorTarget);gl.texParameteri(3553,10241,9728);gl.texParameteri(3553,10240,9728);gl.texParameteri(3553,10242,33071);gl.texParameteri(3553,10243,33071);gl.texImage2D(3553,0,6408,gl.canvas.width,gl.canvas.height,0,6408,5121,null);gl.framebufferTexture2D(36160,36064,3553,context.defaultColorTarget,0);gl.bindTexture(3553,null);var depthTarget=gl.createRenderbuffer();gl.bindRenderbuffer(36161,context.defaultDepthTarget);gl.renderbufferStorage(36161,33189,gl.canvas.width,gl.canvas.height);gl.framebufferRenderbuffer(36160,36096,36161,context.defaultDepthTarget);gl.bindRenderbuffer(36161,null);var vertices=[-1,-1,-1,1,1,-1,1,1];var vb=gl.createBuffer();gl.bindBuffer(34962,vb);gl.bufferData(34962,new Float32Array(vertices),35044);gl.bindBuffer(34962,null);context.blitVB=vb;var vsCode="attribute vec2 pos;"+"varying lowp vec2 tex;"+"void main() { tex = pos * 0.5 + vec2(0.5,0.5); gl_Position = vec4(pos, 0.0, 1.0); }";var vs=gl.createShader(35633);gl.shaderSource(vs,vsCode);gl.compileShader(vs);var fsCode="varying lowp vec2 tex;"+"uniform sampler2D sampler;"+"void main() { gl_FragColor = texture2D(sampler, tex); }";var fs=gl.createShader(35632);gl.shaderSource(fs,fsCode);gl.compileShader(fs);var blitProgram=gl.createProgram();gl.attachShader(blitProgram,vs);gl.attachShader(blitProgram,fs);gl.linkProgram(blitProgram);context.blitProgram=blitProgram;context.blitPosLoc=gl.getAttribLocation(blitProgram,"pos");gl.useProgram(blitProgram);gl.uniform1i(gl.getUniformLocation(blitProgram,"sampler"),0);gl.useProgram(null);context.defaultVao=undefined;if(gl.createVertexArray){context.defaultVao=gl.createVertexArray();gl.bindVertexArray(context.defaultVao);gl.enableVertexAttribArray(context.blitPosLoc);gl.bindVertexArray(null)}},resizeOffscreenFramebuffer:function(context){var gl=context.GLctx;if(context.defaultColorTarget){var 
prevTextureBinding=gl.getParameter(32873);gl.bindTexture(3553,context.defaultColorTarget);gl.texImage2D(3553,0,6408,gl.drawingBufferWidth,gl.drawingBufferHeight,0,6408,5121,null);gl.bindTexture(3553,prevTextureBinding)}if(context.defaultDepthTarget){var prevRenderBufferBinding=gl.getParameter(36007);gl.bindRenderbuffer(36161,context.defaultDepthTarget);gl.renderbufferStorage(36161,33189,gl.drawingBufferWidth,gl.drawingBufferHeight);gl.bindRenderbuffer(36161,prevRenderBufferBinding)}},blitOffscreenFramebuffer:function(context){var gl=context.GLctx;var prevScissorTest=gl.getParameter(3089);if(prevScissorTest)gl.disable(3089);var prevFbo=gl.getParameter(36006);if(gl.blitFramebuffer&&!context.defaultFboForbidBlitFramebuffer){gl.bindFramebuffer(36008,context.defaultFbo);gl.bindFramebuffer(36009,null);gl.blitFramebuffer(0,0,gl.canvas.width,gl.canvas.height,0,0,gl.canvas.width,gl.canvas.height,16384,9728)}else{gl.bindFramebuffer(36160,null);var prevProgram=gl.getParameter(35725);gl.useProgram(context.blitProgram);var prevVB=gl.getParameter(34964);gl.bindBuffer(34962,context.blitVB);var prevActiveTexture=gl.getParameter(34016);gl.activeTexture(33984);var prevTextureBinding=gl.getParameter(32873);gl.bindTexture(3553,context.defaultColorTarget);var prevBlend=gl.getParameter(3042);if(prevBlend)gl.disable(3042);var prevCullFace=gl.getParameter(2884);if(prevCullFace)gl.disable(2884);var prevDepthTest=gl.getParameter(2929);if(prevDepthTest)gl.disable(2929);var prevStencilTest=gl.getParameter(2960);if(prevStencilTest)gl.disable(2960);function draw(){gl.vertexAttribPointer(context.blitPosLoc,2,5126,false,0,0);gl.drawArrays(5,0,4)}if(context.defaultVao){var prevVAO=gl.getParameter(34229);gl.bindVertexArray(context.defaultVao);draw();gl.bindVertexArray(prevVAO)}else{var prevVertexAttribPointer={buffer:gl.getVertexAttrib(context.blitPosLoc,34975),size:gl.getVertexAttrib(context.blitPosLoc,34339),stride:gl.getVertexAttrib(context.blitPosLoc,34340),type:gl.getVertexAttrib(context.blitPosLoc,34341),normalized:gl.getVertexAttrib(context.blitPosLoc,34922),pointer:gl.getVertexAttribOffset(context.blitPosLoc,34373)};var maxVertexAttribs=gl.getParameter(34921);var prevVertexAttribEnables=[];for(var i=0;i=2){GLctx.disjointTimerQueryExt=GLctx.getExtension("EXT_disjoint_timer_query_webgl2")}if(context.version<2||!GLctx.disjointTimerQueryExt){GLctx.disjointTimerQueryExt=GLctx.getExtension("EXT_disjoint_timer_query")}__webgl_enable_WEBGL_multi_draw(GLctx);var exts=GLctx.getSupportedExtensions()||[];exts.forEach(function(ext){if(!ext.includes("lose_context")&&!ext.includes("debug")){GLctx.getExtension(ext)}})}};function _emscripten_glActiveTexture(x0){GLctx["activeTexture"](x0)}function _emscripten_glAttachShader(program,shader){GLctx.attachShader(GL.programs[program],GL.shaders[shader])}function _emscripten_glBeginQuery(target,id){GLctx["beginQuery"](target,GL.queries[id])}function _emscripten_glBeginQueryEXT(target,id){GLctx.disjointTimerQueryExt["beginQueryEXT"](target,GL.queries[id])}function _emscripten_glBeginTransformFeedback(x0){GLctx["beginTransformFeedback"](x0)}function _emscripten_glBindAttribLocation(program,index,name){GLctx.bindAttribLocation(GL.programs[program],index,UTF8ToString(name))}function _emscripten_glBindBuffer(target,buffer){if(target==35051){GLctx.currentPixelPackBufferBinding=buffer}else if(target==35052){GLctx.currentPixelUnpackBufferBinding=buffer}GLctx.bindBuffer(target,GL.buffers[buffer])}function 
_emscripten_glBindBufferBase(target,index,buffer){GLctx["bindBufferBase"](target,index,GL.buffers[buffer])}function _emscripten_glBindBufferRange(target,index,buffer,offset,ptrsize){GLctx["bindBufferRange"](target,index,GL.buffers[buffer],offset,ptrsize)}function _emscripten_glBindFramebuffer(target,framebuffer){GLctx.bindFramebuffer(target,framebuffer?GL.framebuffers[framebuffer]:GL.currentContext.defaultFbo)}function _emscripten_glBindRenderbuffer(target,renderbuffer){GLctx.bindRenderbuffer(target,GL.renderbuffers[renderbuffer])}function _emscripten_glBindSampler(unit,sampler){GLctx["bindSampler"](unit,GL.samplers[sampler])}function _emscripten_glBindTexture(target,texture){GLctx.bindTexture(target,GL.textures[texture])}function _emscripten_glBindTransformFeedback(target,id){GLctx["bindTransformFeedback"](target,GL.transformFeedbacks[id])}function _emscripten_glBindVertexArray(vao){GLctx["bindVertexArray"](GL.vaos[vao])}function _emscripten_glBindVertexArrayOES(vao){GLctx["bindVertexArray"](GL.vaos[vao])}function _emscripten_glBlendColor(x0,x1,x2,x3){GLctx["blendColor"](x0,x1,x2,x3)}function _emscripten_glBlendEquation(x0){GLctx["blendEquation"](x0)}function _emscripten_glBlendEquationSeparate(x0,x1){GLctx["blendEquationSeparate"](x0,x1)}function _emscripten_glBlendFunc(x0,x1){GLctx["blendFunc"](x0,x1)}function _emscripten_glBlendFuncSeparate(x0,x1,x2,x3){GLctx["blendFuncSeparate"](x0,x1,x2,x3)}function _emscripten_glBlitFramebuffer(x0,x1,x2,x3,x4,x5,x6,x7,x8,x9){GLctx["blitFramebuffer"](x0,x1,x2,x3,x4,x5,x6,x7,x8,x9)}function _emscripten_glBufferData(target,size,data,usage){if(GL.currentContext.version>=2){if(data&&size){GLctx.bufferData(target,HEAPU8,usage,data,size)}else{GLctx.bufferData(target,size,usage)}}else{GLctx.bufferData(target,data?HEAPU8.subarray(data,data+size):size,usage)}}function _emscripten_glBufferSubData(target,offset,size,data){if(GL.currentContext.version>=2){size&&GLctx.bufferSubData(target,offset,HEAPU8,data,size);return}GLctx.bufferSubData(target,offset,HEAPU8.subarray(data,data+size))}function _emscripten_glCheckFramebufferStatus(x0){return GLctx["checkFramebufferStatus"](x0)}function _emscripten_glClear(x0){GLctx["clear"](x0)}function _emscripten_glClearBufferfi(x0,x1,x2,x3){GLctx["clearBufferfi"](x0,x1,x2,x3)}function _emscripten_glClearBufferfv(buffer,drawbuffer,value){GLctx["clearBufferfv"](buffer,drawbuffer,HEAPF32,value>>2)}function _emscripten_glClearBufferiv(buffer,drawbuffer,value){GLctx["clearBufferiv"](buffer,drawbuffer,HEAP32,value>>2)}function _emscripten_glClearBufferuiv(buffer,drawbuffer,value){GLctx["clearBufferuiv"](buffer,drawbuffer,HEAPU32,value>>2)}function _emscripten_glClearColor(x0,x1,x2,x3){GLctx["clearColor"](x0,x1,x2,x3)}function _emscripten_glClearDepthf(x0){GLctx["clearDepth"](x0)}function _emscripten_glClearStencil(x0){GLctx["clearStencil"](x0)}function convertI32PairToI53(lo,hi){return(lo>>>0)+hi*4294967296}function _emscripten_glClientWaitSync(sync,flags,timeoutLo,timeoutHi){return GLctx.clientWaitSync(GL.syncs[sync],flags,convertI32PairToI53(timeoutLo,timeoutHi))}function _emscripten_glColorMask(red,green,blue,alpha){GLctx.colorMask(!!red,!!green,!!blue,!!alpha)}function _emscripten_glCompileShader(shader){GLctx.compileShader(GL.shaders[shader])}function 
_emscripten_glCompressedTexImage2D(target,level,internalFormat,width,height,border,imageSize,data){if(GL.currentContext.version>=2){if(GLctx.currentPixelUnpackBufferBinding||!imageSize){GLctx["compressedTexImage2D"](target,level,internalFormat,width,height,border,imageSize,data)}else{GLctx["compressedTexImage2D"](target,level,internalFormat,width,height,border,HEAPU8,data,imageSize)}return}GLctx["compressedTexImage2D"](target,level,internalFormat,width,height,border,data?HEAPU8.subarray(data,data+imageSize):null)}function _emscripten_glCompressedTexImage3D(target,level,internalFormat,width,height,depth,border,imageSize,data){if(GLctx.currentPixelUnpackBufferBinding){GLctx["compressedTexImage3D"](target,level,internalFormat,width,height,depth,border,imageSize,data)}else{GLctx["compressedTexImage3D"](target,level,internalFormat,width,height,depth,border,HEAPU8,data,imageSize)}}function _emscripten_glCompressedTexSubImage2D(target,level,xoffset,yoffset,width,height,format,imageSize,data){if(GL.currentContext.version>=2){if(GLctx.currentPixelUnpackBufferBinding||!imageSize){GLctx["compressedTexSubImage2D"](target,level,xoffset,yoffset,width,height,format,imageSize,data)}else{GLctx["compressedTexSubImage2D"](target,level,xoffset,yoffset,width,height,format,HEAPU8,data,imageSize)}return}GLctx["compressedTexSubImage2D"](target,level,xoffset,yoffset,width,height,format,data?HEAPU8.subarray(data,data+imageSize):null)}function _emscripten_glCompressedTexSubImage3D(target,level,xoffset,yoffset,zoffset,width,height,depth,format,imageSize,data){if(GLctx.currentPixelUnpackBufferBinding){GLctx["compressedTexSubImage3D"](target,level,xoffset,yoffset,zoffset,width,height,depth,format,imageSize,data)}else{GLctx["compressedTexSubImage3D"](target,level,xoffset,yoffset,zoffset,width,height,depth,format,HEAPU8,data,imageSize)}}function _emscripten_glCopyBufferSubData(x0,x1,x2,x3,x4){GLctx["copyBufferSubData"](x0,x1,x2,x3,x4)}function _emscripten_glCopyTexImage2D(x0,x1,x2,x3,x4,x5,x6,x7){GLctx["copyTexImage2D"](x0,x1,x2,x3,x4,x5,x6,x7)}function _emscripten_glCopyTexSubImage2D(x0,x1,x2,x3,x4,x5,x6,x7){GLctx["copyTexSubImage2D"](x0,x1,x2,x3,x4,x5,x6,x7)}function _emscripten_glCopyTexSubImage3D(x0,x1,x2,x3,x4,x5,x6,x7,x8){GLctx["copyTexSubImage3D"](x0,x1,x2,x3,x4,x5,x6,x7,x8)}function _emscripten_glCreateProgram(){var id=GL.getNewId(GL.programs);var program=GLctx.createProgram();program.name=id;program.maxUniformLength=program.maxAttributeLength=program.maxUniformBlockNameLength=0;program.uniformIdCounter=1;GL.programs[id]=program;return id}function _emscripten_glCreateShader(shaderType){var id=GL.getNewId(GL.shaders);GL.shaders[id]=GLctx.createShader(shaderType);return id}function _emscripten_glCullFace(x0){GLctx["cullFace"](x0)}function _emscripten_glDeleteBuffers(n,buffers){for(var i=0;i>2];var buffer=GL.buffers[id];if(!buffer)continue;GLctx.deleteBuffer(buffer);buffer.name=0;GL.buffers[id]=null;if(id==GLctx.currentPixelPackBufferBinding)GLctx.currentPixelPackBufferBinding=0;if(id==GLctx.currentPixelUnpackBufferBinding)GLctx.currentPixelUnpackBufferBinding=0}}function _emscripten_glDeleteFramebuffers(n,framebuffers){for(var i=0;i>2];var framebuffer=GL.framebuffers[id];if(!framebuffer)continue;GLctx.deleteFramebuffer(framebuffer);framebuffer.name=0;GL.framebuffers[id]=null}}function _emscripten_glDeleteProgram(id){if(!id)return;var program=GL.programs[id];if(!program){GL.recordError(1281);return}GLctx.deleteProgram(program);program.name=0;GL.programs[id]=null}function _emscripten_glDeleteQueries(n,ids){for(var 
i=0;i>2];var query=GL.queries[id];if(!query)continue;GLctx["deleteQuery"](query);GL.queries[id]=null}}function _emscripten_glDeleteQueriesEXT(n,ids){for(var i=0;i>2];var query=GL.queries[id];if(!query)continue;GLctx.disjointTimerQueryExt["deleteQueryEXT"](query);GL.queries[id]=null}}function _emscripten_glDeleteRenderbuffers(n,renderbuffers){for(var i=0;i>2];var renderbuffer=GL.renderbuffers[id];if(!renderbuffer)continue;GLctx.deleteRenderbuffer(renderbuffer);renderbuffer.name=0;GL.renderbuffers[id]=null}}function _emscripten_glDeleteSamplers(n,samplers){for(var i=0;i>2];var sampler=GL.samplers[id];if(!sampler)continue;GLctx["deleteSampler"](sampler);sampler.name=0;GL.samplers[id]=null}}function _emscripten_glDeleteShader(id){if(!id)return;var shader=GL.shaders[id];if(!shader){GL.recordError(1281);return}GLctx.deleteShader(shader);GL.shaders[id]=null}function _emscripten_glDeleteSync(id){if(!id)return;var sync=GL.syncs[id];if(!sync){GL.recordError(1281);return}GLctx.deleteSync(sync);sync.name=0;GL.syncs[id]=null}function _emscripten_glDeleteTextures(n,textures){for(var i=0;i>2];var texture=GL.textures[id];if(!texture)continue;GLctx.deleteTexture(texture);texture.name=0;GL.textures[id]=null}}function _emscripten_glDeleteTransformFeedbacks(n,ids){for(var i=0;i>2];var transformFeedback=GL.transformFeedbacks[id];if(!transformFeedback)continue;GLctx["deleteTransformFeedback"](transformFeedback);transformFeedback.name=0;GL.transformFeedbacks[id]=null}}function _emscripten_glDeleteVertexArrays(n,vaos){for(var i=0;i>2];GLctx["deleteVertexArray"](GL.vaos[id]);GL.vaos[id]=null}}function _emscripten_glDeleteVertexArraysOES(n,vaos){for(var i=0;i>2];GLctx["deleteVertexArray"](GL.vaos[id]);GL.vaos[id]=null}}function _emscripten_glDepthFunc(x0){GLctx["depthFunc"](x0)}function _emscripten_glDepthMask(flag){GLctx.depthMask(!!flag)}function _emscripten_glDepthRangef(x0,x1){GLctx["depthRange"](x0,x1)}function _emscripten_glDetachShader(program,shader){GLctx.detachShader(GL.programs[program],GL.shaders[shader])}function _emscripten_glDisable(x0){GLctx["disable"](x0)}function _emscripten_glDisableVertexAttribArray(index){GLctx.disableVertexAttribArray(index)}function _emscripten_glDrawArrays(mode,first,count){GLctx.drawArrays(mode,first,count)}function _emscripten_glDrawArraysInstanced(mode,first,count,primcount){GLctx["drawArraysInstanced"](mode,first,count,primcount)}function _emscripten_glDrawArraysInstancedANGLE(mode,first,count,primcount){GLctx["drawArraysInstanced"](mode,first,count,primcount)}function _emscripten_glDrawArraysInstancedARB(mode,first,count,primcount){GLctx["drawArraysInstanced"](mode,first,count,primcount)}function _emscripten_glDrawArraysInstancedEXT(mode,first,count,primcount){GLctx["drawArraysInstanced"](mode,first,count,primcount)}function _emscripten_glDrawArraysInstancedNV(mode,first,count,primcount){GLctx["drawArraysInstanced"](mode,first,count,primcount)}var tempFixedLengthArray=[];function _emscripten_glDrawBuffers(n,bufs){var bufArray=tempFixedLengthArray[n];for(var i=0;i>2]}GLctx["drawBuffers"](bufArray)}function _emscripten_glDrawBuffersEXT(n,bufs){var bufArray=tempFixedLengthArray[n];for(var i=0;i>2]}GLctx["drawBuffers"](bufArray)}function _emscripten_glDrawBuffersWEBGL(n,bufs){var bufArray=tempFixedLengthArray[n];for(var i=0;i>2]}GLctx["drawBuffers"](bufArray)}function _emscripten_glDrawElements(mode,count,type,indices){GLctx.drawElements(mode,count,type,indices)}function 
_emscripten_glDrawElementsInstanced(mode,count,type,indices,primcount){GLctx["drawElementsInstanced"](mode,count,type,indices,primcount)}function _emscripten_glDrawElementsInstancedANGLE(mode,count,type,indices,primcount){GLctx["drawElementsInstanced"](mode,count,type,indices,primcount)}function _emscripten_glDrawElementsInstancedARB(mode,count,type,indices,primcount){GLctx["drawElementsInstanced"](mode,count,type,indices,primcount)}function _emscripten_glDrawElementsInstancedEXT(mode,count,type,indices,primcount){GLctx["drawElementsInstanced"](mode,count,type,indices,primcount)}function _emscripten_glDrawElementsInstancedNV(mode,count,type,indices,primcount){GLctx["drawElementsInstanced"](mode,count,type,indices,primcount)}function _glDrawElements(mode,count,type,indices){GLctx.drawElements(mode,count,type,indices)}function _emscripten_glDrawRangeElements(mode,start,end,count,type,indices){_glDrawElements(mode,count,type,indices)}function _emscripten_glEnable(x0){GLctx["enable"](x0)}function _emscripten_glEnableVertexAttribArray(index){GLctx.enableVertexAttribArray(index)}function _emscripten_glEndQuery(x0){GLctx["endQuery"](x0)}function _emscripten_glEndQueryEXT(target){GLctx.disjointTimerQueryExt["endQueryEXT"](target)}function _emscripten_glEndTransformFeedback(){GLctx["endTransformFeedback"]()}function _emscripten_glFenceSync(condition,flags){var sync=GLctx.fenceSync(condition,flags);if(sync){var id=GL.getNewId(GL.syncs);sync.name=id;GL.syncs[id]=sync;return id}else{return 0}}function _emscripten_glFinish(){GLctx["finish"]()}function _emscripten_glFlush(){GLctx["flush"]()}function _emscripten_glFramebufferRenderbuffer(target,attachment,renderbuffertarget,renderbuffer){GLctx.framebufferRenderbuffer(target,attachment,renderbuffertarget,GL.renderbuffers[renderbuffer])}function _emscripten_glFramebufferTexture2D(target,attachment,textarget,texture,level){GLctx.framebufferTexture2D(target,attachment,textarget,GL.textures[texture],level)}function _emscripten_glFramebufferTextureLayer(target,attachment,texture,level,layer){GLctx.framebufferTextureLayer(target,attachment,GL.textures[texture],level,layer)}function _emscripten_glFrontFace(x0){GLctx["frontFace"](x0)}function __glGenObject(n,buffers,createFunction,objectTable){for(var i=0;i>2]=id}}function _emscripten_glGenBuffers(n,buffers){__glGenObject(n,buffers,"createBuffer",GL.buffers)}function _emscripten_glGenFramebuffers(n,ids){__glGenObject(n,ids,"createFramebuffer",GL.framebuffers)}function _emscripten_glGenQueries(n,ids){__glGenObject(n,ids,"createQuery",GL.queries)}function _emscripten_glGenQueriesEXT(n,ids){for(var i=0;i>2]=0;return}var id=GL.getNewId(GL.queries);query.name=id;GL.queries[id]=query;HEAP32[ids+i*4>>2]=id}}function _emscripten_glGenRenderbuffers(n,renderbuffers){__glGenObject(n,renderbuffers,"createRenderbuffer",GL.renderbuffers)}function _emscripten_glGenSamplers(n,samplers){__glGenObject(n,samplers,"createSampler",GL.samplers)}function _emscripten_glGenTextures(n,textures){__glGenObject(n,textures,"createTexture",GL.textures)}function _emscripten_glGenTransformFeedbacks(n,ids){__glGenObject(n,ids,"createTransformFeedback",GL.transformFeedbacks)}function _emscripten_glGenVertexArrays(n,arrays){__glGenObject(n,arrays,"createVertexArray",GL.vaos)}function _emscripten_glGenVertexArraysOES(n,arrays){__glGenObject(n,arrays,"createVertexArray",GL.vaos)}function _emscripten_glGenerateMipmap(x0){GLctx["generateMipmap"](x0)}function 
__glGetActiveAttribOrUniform(funcName,program,index,bufSize,length,size,type,name){program=GL.programs[program];var info=GLctx[funcName](program,index);if(info){var numBytesWrittenExclNull=name&&stringToUTF8(info.name,name,bufSize);if(length)HEAP32[length>>2]=numBytesWrittenExclNull;if(size)HEAP32[size>>2]=info.size;if(type)HEAP32[type>>2]=info.type}}function _emscripten_glGetActiveAttrib(program,index,bufSize,length,size,type,name){__glGetActiveAttribOrUniform("getActiveAttrib",program,index,bufSize,length,size,type,name)}function _emscripten_glGetActiveUniform(program,index,bufSize,length,size,type,name){__glGetActiveAttribOrUniform("getActiveUniform",program,index,bufSize,length,size,type,name)}function _emscripten_glGetActiveUniformBlockName(program,uniformBlockIndex,bufSize,length,uniformBlockName){program=GL.programs[program];var result=GLctx["getActiveUniformBlockName"](program,uniformBlockIndex);if(!result)return;if(uniformBlockName&&bufSize>0){var numBytesWrittenExclNull=stringToUTF8(result,uniformBlockName,bufSize);if(length)HEAP32[length>>2]=numBytesWrittenExclNull}else{if(length)HEAP32[length>>2]=0}}function _emscripten_glGetActiveUniformBlockiv(program,uniformBlockIndex,pname,params){if(!params){GL.recordError(1281);return}program=GL.programs[program];if(pname==35393){var name=GLctx["getActiveUniformBlockName"](program,uniformBlockIndex);HEAP32[params>>2]=name.length+1;return}var result=GLctx["getActiveUniformBlockParameter"](program,uniformBlockIndex,pname);if(result===null)return;if(pname==35395){for(var i=0;i>2]=result[i]}}else{HEAP32[params>>2]=result}}function _emscripten_glGetActiveUniformsiv(program,uniformCount,uniformIndices,pname,params){if(!params){GL.recordError(1281);return}if(uniformCount>0&&uniformIndices==0){GL.recordError(1281);return}program=GL.programs[program];var ids=[];for(var i=0;i>2])}var result=GLctx["getActiveUniforms"](program,ids,pname);if(!result)return;var len=result.length;for(var i=0;i>2]=result[i]}}function _emscripten_glGetAttachedShaders(program,maxCount,count,shaders){var result=GLctx.getAttachedShaders(GL.programs[program]);var len=result.length;if(len>maxCount){len=maxCount}HEAP32[count>>2]=len;for(var i=0;i>2]=id}}function _emscripten_glGetAttribLocation(program,name){return GLctx.getAttribLocation(GL.programs[program],UTF8ToString(name))}function writeI53ToI64(ptr,num){HEAPU32[ptr>>2]=num;HEAPU32[ptr+4>>2]=(num-HEAPU32[ptr>>2])/4294967296}function emscriptenWebGLGet(name_,p,type){if(!p){GL.recordError(1281);return}var ret=undefined;switch(name_){case 36346:ret=1;break;case 36344:if(type!=0&&type!=1){GL.recordError(1280)}return;case 34814:case 36345:ret=0;break;case 34466:var formats=GLctx.getParameter(34467);ret=formats?formats.length:0;break;case 33309:if(GL.currentContext.version<2){GL.recordError(1282);return}var exts=GLctx.getSupportedExtensions()||[];ret=2*exts.length;break;case 33307:case 33308:if(GL.currentContext.version<2){GL.recordError(1280);return}ret=name_==33307?3:0;break}if(ret===undefined){var result=GLctx.getParameter(name_);switch(typeof result){case"number":ret=result;break;case"boolean":ret=result?1:0;break;case"string":GL.recordError(1280);return;case"object":if(result===null){switch(name_){case 34964:case 35725:case 34965:case 36006:case 36007:case 32873:case 34229:case 36662:case 36663:case 35053:case 35055:case 36010:case 35097:case 35869:case 32874:case 36389:case 35983:case 35368:case 34068:{ret=0;break}default:{GL.recordError(1280);return}}}else if(result instanceof Float32Array||result instanceof 
Uint32Array||result instanceof Int32Array||result instanceof Array){for(var i=0;i>2]=result[i];break;case 2:HEAPF32[p+i*4>>2]=result[i];break;case 4:HEAP8[p+i>>0]=result[i]?1:0;break}}return}else{try{ret=result.name|0}catch(e){GL.recordError(1280);err("GL_INVALID_ENUM in glGet"+type+"v: Unknown object returned from WebGL getParameter("+name_+")! (error: "+e+")");return}}break;default:GL.recordError(1280);err("GL_INVALID_ENUM in glGet"+type+"v: Native code calling glGet"+type+"v("+name_+") and it returns "+result+" of type "+typeof result+"!");return}}switch(type){case 1:writeI53ToI64(p,ret);break;case 0:HEAP32[p>>2]=ret;break;case 2:HEAPF32[p>>2]=ret;break;case 4:HEAP8[p>>0]=ret?1:0;break}}function _emscripten_glGetBooleanv(name_,p){emscriptenWebGLGet(name_,p,4)}function _emscripten_glGetBufferParameteri64v(target,value,data){if(!data){GL.recordError(1281);return}writeI53ToI64(data,GLctx.getBufferParameter(target,value))}function _emscripten_glGetBufferParameteriv(target,value,data){if(!data){GL.recordError(1281);return}HEAP32[data>>2]=GLctx.getBufferParameter(target,value)}function _emscripten_glGetError(){var error=GLctx.getError()||GL.lastError;GL.lastError=0;return error}function _emscripten_glGetFloatv(name_,p){emscriptenWebGLGet(name_,p,2)}function _emscripten_glGetFragDataLocation(program,name){return GLctx["getFragDataLocation"](GL.programs[program],UTF8ToString(name))}function _emscripten_glGetFramebufferAttachmentParameteriv(target,attachment,pname,params){var result=GLctx.getFramebufferAttachmentParameter(target,attachment,pname);if(result instanceof WebGLRenderbuffer||result instanceof WebGLTexture){result=result.name|0}HEAP32[params>>2]=result}function emscriptenWebGLGetIndexed(target,index,data,type){if(!data){GL.recordError(1281);return}var result=GLctx["getIndexedParameter"](target,index);var ret;switch(typeof result){case"boolean":ret=result?1:0;break;case"number":ret=result;break;case"object":if(result===null){switch(target){case 35983:case 35368:ret=0;break;default:{GL.recordError(1280);return}}}else if(result instanceof WebGLBuffer){ret=result.name|0}else{GL.recordError(1280);return}break;default:GL.recordError(1280);return}switch(type){case 1:writeI53ToI64(data,ret);break;case 0:HEAP32[data>>2]=ret;break;case 2:HEAPF32[data>>2]=ret;break;case 4:HEAP8[data>>0]=ret?1:0;break;default:throw"internal emscriptenWebGLGetIndexed() error, bad type: "+type}}function _emscripten_glGetInteger64i_v(target,index,data){emscriptenWebGLGetIndexed(target,index,data,1)}function _emscripten_glGetInteger64v(name_,p){emscriptenWebGLGet(name_,p,1)}function _emscripten_glGetIntegeri_v(target,index,data){emscriptenWebGLGetIndexed(target,index,data,0)}function _emscripten_glGetIntegerv(name_,p){emscriptenWebGLGet(name_,p,0)}function _emscripten_glGetInternalformativ(target,internalformat,pname,bufSize,params){if(bufSize<0){GL.recordError(1281);return}if(!params){GL.recordError(1281);return}var ret=GLctx["getInternalformatParameter"](target,internalformat,pname);if(ret===null)return;for(var i=0;i>2]=ret[i]}}function _emscripten_glGetProgramBinary(program,bufSize,length,binaryFormat,binary){GL.recordError(1282)}function _emscripten_glGetProgramInfoLog(program,maxLength,length,infoLog){var log=GLctx.getProgramInfoLog(GL.programs[program]);if(log===null)log="(unknown error)";var numBytesWrittenExclNull=maxLength>0&&infoLog?stringToUTF8(log,infoLog,maxLength):0;if(length)HEAP32[length>>2]=numBytesWrittenExclNull}function 
_emscripten_glGetProgramiv(program,pname,p){if(!p){GL.recordError(1281);return}if(program>=GL.counter){GL.recordError(1281);return}program=GL.programs[program];if(pname==35716){var log=GLctx.getProgramInfoLog(program);if(log===null)log="(unknown error)";HEAP32[p>>2]=log.length+1}else if(pname==35719){if(!program.maxUniformLength){for(var i=0;i>2]=program.maxUniformLength}else if(pname==35722){if(!program.maxAttributeLength){for(var i=0;i>2]=program.maxAttributeLength}else if(pname==35381){if(!program.maxUniformBlockNameLength){for(var i=0;i>2]=program.maxUniformBlockNameLength}else{HEAP32[p>>2]=GLctx.getProgramParameter(program,pname)}}function _emscripten_glGetQueryObjecti64vEXT(id,pname,params){if(!params){GL.recordError(1281);return}var query=GL.queries[id];var param;if(GL.currentContext.version<2){param=GLctx.disjointTimerQueryExt["getQueryObjectEXT"](query,pname)}else{param=GLctx["getQueryParameter"](query,pname)}var ret;if(typeof param=="boolean"){ret=param?1:0}else{ret=param}writeI53ToI64(params,ret)}function _emscripten_glGetQueryObjectivEXT(id,pname,params){if(!params){GL.recordError(1281);return}var query=GL.queries[id];var param=GLctx.disjointTimerQueryExt["getQueryObjectEXT"](query,pname);var ret;if(typeof param=="boolean"){ret=param?1:0}else{ret=param}HEAP32[params>>2]=ret}function _emscripten_glGetQueryObjectui64vEXT(id,pname,params){if(!params){GL.recordError(1281);return}var query=GL.queries[id];var param;if(GL.currentContext.version<2){param=GLctx.disjointTimerQueryExt["getQueryObjectEXT"](query,pname)}else{param=GLctx["getQueryParameter"](query,pname)}var ret;if(typeof param=="boolean"){ret=param?1:0}else{ret=param}writeI53ToI64(params,ret)}function _emscripten_glGetQueryObjectuiv(id,pname,params){if(!params){GL.recordError(1281);return}var query=GL.queries[id];var param=GLctx["getQueryParameter"](query,pname);var ret;if(typeof param=="boolean"){ret=param?1:0}else{ret=param}HEAP32[params>>2]=ret}function _emscripten_glGetQueryObjectuivEXT(id,pname,params){if(!params){GL.recordError(1281);return}var query=GL.queries[id];var param=GLctx.disjointTimerQueryExt["getQueryObjectEXT"](query,pname);var ret;if(typeof param=="boolean"){ret=param?1:0}else{ret=param}HEAP32[params>>2]=ret}function _emscripten_glGetQueryiv(target,pname,params){if(!params){GL.recordError(1281);return}HEAP32[params>>2]=GLctx["getQuery"](target,pname)}function _emscripten_glGetQueryivEXT(target,pname,params){if(!params){GL.recordError(1281);return}HEAP32[params>>2]=GLctx.disjointTimerQueryExt["getQueryEXT"](target,pname)}function _emscripten_glGetRenderbufferParameteriv(target,pname,params){if(!params){GL.recordError(1281);return}HEAP32[params>>2]=GLctx.getRenderbufferParameter(target,pname)}function _emscripten_glGetSamplerParameterfv(sampler,pname,params){if(!params){GL.recordError(1281);return}HEAPF32[params>>2]=GLctx["getSamplerParameter"](GL.samplers[sampler],pname)}function _emscripten_glGetSamplerParameteriv(sampler,pname,params){if(!params){GL.recordError(1281);return}HEAP32[params>>2]=GLctx["getSamplerParameter"](GL.samplers[sampler],pname)}function _emscripten_glGetShaderInfoLog(shader,maxLength,length,infoLog){var log=GLctx.getShaderInfoLog(GL.shaders[shader]);if(log===null)log="(unknown error)";var numBytesWrittenExclNull=maxLength>0&&infoLog?stringToUTF8(log,infoLog,maxLength):0;if(length)HEAP32[length>>2]=numBytesWrittenExclNull}function _emscripten_glGetShaderPrecisionFormat(shaderType,precisionType,range,precision){var 
result=GLctx.getShaderPrecisionFormat(shaderType,precisionType);HEAP32[range>>2]=result.rangeMin;HEAP32[range+4>>2]=result.rangeMax;HEAP32[precision>>2]=result.precision}function _emscripten_glGetShaderSource(shader,bufSize,length,source){var result=GLctx.getShaderSource(GL.shaders[shader]);if(!result)return;var numBytesWrittenExclNull=bufSize>0&&source?stringToUTF8(result,source,bufSize):0;if(length)HEAP32[length>>2]=numBytesWrittenExclNull}function _emscripten_glGetShaderiv(shader,pname,p){if(!p){GL.recordError(1281);return}if(pname==35716){var log=GLctx.getShaderInfoLog(GL.shaders[shader]);if(log===null)log="(unknown error)";var logLength=log?log.length+1:0;HEAP32[p>>2]=logLength}else if(pname==35720){var source=GLctx.getShaderSource(GL.shaders[shader]);var sourceLength=source?source.length+1:0;HEAP32[p>>2]=sourceLength}else{HEAP32[p>>2]=GLctx.getShaderParameter(GL.shaders[shader],pname)}}function stringToNewUTF8(jsString){var length=lengthBytesUTF8(jsString)+1;var cString=_malloc(length);stringToUTF8(jsString,cString,length);return cString}function _emscripten_glGetString(name_){var ret=GL.stringCache[name_];if(!ret){switch(name_){case 7939:var exts=GLctx.getSupportedExtensions()||[];exts=exts.concat(exts.map(function(e){return"GL_"+e}));ret=stringToNewUTF8(exts.join(" "));break;case 7936:case 7937:case 37445:case 37446:var s=GLctx.getParameter(name_);if(!s){GL.recordError(1280)}ret=s&&stringToNewUTF8(s);break;case 7938:var glVersion=GLctx.getParameter(7938);if(GL.currentContext.version>=2)glVersion="OpenGL ES 3.0 ("+glVersion+")";else{glVersion="OpenGL ES 2.0 ("+glVersion+")"}ret=stringToNewUTF8(glVersion);break;case 35724:var glslVersion=GLctx.getParameter(35724);var ver_re=/^WebGL GLSL ES ([0-9]\.[0-9][0-9]?)(?:$| .*)/;var ver_num=glslVersion.match(ver_re);if(ver_num!==null){if(ver_num[1].length==3)ver_num[1]=ver_num[1]+"0";glslVersion="OpenGL ES GLSL ES "+ver_num[1]+" ("+glslVersion+")"}ret=stringToNewUTF8(glslVersion);break;default:GL.recordError(1280)}GL.stringCache[name_]=ret}return ret}function _emscripten_glGetStringi(name,index){if(GL.currentContext.version<2){GL.recordError(1282);return 0}var stringiCache=GL.stringiCache[name];if(stringiCache){if(index<0||index>=stringiCache.length){GL.recordError(1281);return 0}return stringiCache[index]}switch(name){case 7939:var exts=GLctx.getSupportedExtensions()||[];exts=exts.concat(exts.map(function(e){return"GL_"+e}));exts=exts.map(function(e){return stringToNewUTF8(e)});stringiCache=GL.stringiCache[name]=exts;if(index<0||index>=stringiCache.length){GL.recordError(1281);return 0}return stringiCache[index];default:GL.recordError(1280);return 0}}function _emscripten_glGetSynciv(sync,pname,bufSize,length,values){if(bufSize<0){GL.recordError(1281);return}if(!values){GL.recordError(1281);return}var ret=GLctx.getSyncParameter(GL.syncs[sync],pname);if(ret!==null){HEAP32[values>>2]=ret;if(length)HEAP32[length>>2]=1}}function _emscripten_glGetTexParameterfv(target,pname,params){if(!params){GL.recordError(1281);return}HEAPF32[params>>2]=GLctx.getTexParameter(target,pname)}function _emscripten_glGetTexParameteriv(target,pname,params){if(!params){GL.recordError(1281);return}HEAP32[params>>2]=GLctx.getTexParameter(target,pname)}function _emscripten_glGetTransformFeedbackVarying(program,index,bufSize,length,size,type,name){program=GL.programs[program];var info=GLctx["getTransformFeedbackVarying"](program,index);if(!info)return;if(name&&bufSize>0){var 
numBytesWrittenExclNull=stringToUTF8(info.name,name,bufSize);if(length)HEAP32[length>>2]=numBytesWrittenExclNull}else{if(length)HEAP32[length>>2]=0}if(size)HEAP32[size>>2]=info.size;if(type)HEAP32[type>>2]=info.type}function _emscripten_glGetUniformBlockIndex(program,uniformBlockName){return GLctx["getUniformBlockIndex"](GL.programs[program],UTF8ToString(uniformBlockName))}function _emscripten_glGetUniformIndices(program,uniformCount,uniformNames,uniformIndices){if(!uniformIndices){GL.recordError(1281);return}if(uniformCount>0&&(uniformNames==0||uniformIndices==0)){GL.recordError(1281);return}program=GL.programs[program];var names=[];for(var i=0;i>2]));var result=GLctx["getUniformIndices"](program,names);if(!result)return;var len=result.length;for(var i=0;i>2]=result[i]}}function webglGetLeftBracePos(name){return name.slice(-1)=="]"&&name.lastIndexOf("[")}function webglPrepareUniformLocationsBeforeFirstUse(program){var uniformLocsById=program.uniformLocsById,uniformSizeAndIdsByName=program.uniformSizeAndIdsByName,i,j;if(!uniformLocsById){program.uniformLocsById=uniformLocsById={};program.uniformArrayNamesById={};for(i=0;i0?nm.slice(0,lb):nm;var id=program.uniformIdCounter;program.uniformIdCounter+=sz;uniformSizeAndIdsByName[arrayName]=[sz,id];for(j=0;j0){arrayIndex=jstoi_q(name.slice(leftBrace+1))>>>0;uniformBaseName=name.slice(0,leftBrace)}var sizeAndId=program.uniformSizeAndIdsByName[uniformBaseName];if(sizeAndId&&arrayIndex0?"["+webglLoc+"]":""))}return webglLoc}else{GL.recordError(1282)}}function emscriptenWebGLGetUniform(program,location,params,type){if(!params){GL.recordError(1281);return}program=GL.programs[program];webglPrepareUniformLocationsBeforeFirstUse(program);var data=GLctx.getUniform(program,webglGetUniformLocation(location));if(typeof data=="number"||typeof data=="boolean"){switch(type){case 0:HEAP32[params>>2]=data;break;case 2:HEAPF32[params>>2]=data;break}}else{for(var i=0;i>2]=data[i];break;case 2:HEAPF32[params+i*4>>2]=data[i];break}}}}function _emscripten_glGetUniformfv(program,location,params){emscriptenWebGLGetUniform(program,location,params,2)}function _emscripten_glGetUniformiv(program,location,params){emscriptenWebGLGetUniform(program,location,params,0)}function _emscripten_glGetUniformuiv(program,location,params){emscriptenWebGLGetUniform(program,location,params,0)}function emscriptenWebGLGetVertexAttrib(index,pname,params,type){if(!params){GL.recordError(1281);return}var data=GLctx.getVertexAttrib(index,pname);if(pname==34975){HEAP32[params>>2]=data&&data["name"]}else if(typeof data=="number"||typeof data=="boolean"){switch(type){case 0:HEAP32[params>>2]=data;break;case 2:HEAPF32[params>>2]=data;break;case 5:HEAP32[params>>2]=Math.fround(data);break}}else{for(var i=0;i>2]=data[i];break;case 2:HEAPF32[params+i*4>>2]=data[i];break;case 5:HEAP32[params+i*4>>2]=Math.fround(data[i]);break}}}}function _emscripten_glGetVertexAttribIiv(index,pname,params){emscriptenWebGLGetVertexAttrib(index,pname,params,0)}function _emscripten_glGetVertexAttribIuiv(index,pname,params){emscriptenWebGLGetVertexAttrib(index,pname,params,0)}function _emscripten_glGetVertexAttribPointerv(index,pname,pointer){if(!pointer){GL.recordError(1281);return}HEAP32[pointer>>2]=GLctx.getVertexAttribOffset(index,pname)}function _emscripten_glGetVertexAttribfv(index,pname,params){emscriptenWebGLGetVertexAttrib(index,pname,params,2)}function _emscripten_glGetVertexAttribiv(index,pname,params){emscriptenWebGLGetVertexAttrib(index,pname,params,5)}function 
_emscripten_glHint(x0,x1){GLctx["hint"](x0,x1)}function _emscripten_glInvalidateFramebuffer(target,numAttachments,attachments){var list=tempFixedLengthArray[numAttachments];for(var i=0;i>2]}GLctx["invalidateFramebuffer"](target,list)}function _emscripten_glInvalidateSubFramebuffer(target,numAttachments,attachments,x,y,width,height){var list=tempFixedLengthArray[numAttachments];for(var i=0;i>2]}GLctx["invalidateSubFramebuffer"](target,list,x,y,width,height)}function _emscripten_glIsBuffer(buffer){var b=GL.buffers[buffer];if(!b)return 0;return GLctx.isBuffer(b)}function _emscripten_glIsEnabled(x0){return GLctx["isEnabled"](x0)}function _emscripten_glIsFramebuffer(framebuffer){var fb=GL.framebuffers[framebuffer];if(!fb)return 0;return GLctx.isFramebuffer(fb)}function _emscripten_glIsProgram(program){program=GL.programs[program];if(!program)return 0;return GLctx.isProgram(program)}function _emscripten_glIsQuery(id){var query=GL.queries[id];if(!query)return 0;return GLctx["isQuery"](query)}function _emscripten_glIsQueryEXT(id){var query=GL.queries[id];if(!query)return 0;return GLctx.disjointTimerQueryExt["isQueryEXT"](query)}function _emscripten_glIsRenderbuffer(renderbuffer){var rb=GL.renderbuffers[renderbuffer];if(!rb)return 0;return GLctx.isRenderbuffer(rb)}function _emscripten_glIsSampler(id){var sampler=GL.samplers[id];if(!sampler)return 0;return GLctx["isSampler"](sampler)}function _emscripten_glIsShader(shader){var s=GL.shaders[shader];if(!s)return 0;return GLctx.isShader(s)}function _emscripten_glIsSync(sync){return GLctx.isSync(GL.syncs[sync])}function _emscripten_glIsTexture(id){var texture=GL.textures[id];if(!texture)return 0;return GLctx.isTexture(texture)}function _emscripten_glIsTransformFeedback(id){return GLctx["isTransformFeedback"](GL.transformFeedbacks[id])}function _emscripten_glIsVertexArray(array){var vao=GL.vaos[array];if(!vao)return 0;return GLctx["isVertexArray"](vao)}function _emscripten_glIsVertexArrayOES(array){var vao=GL.vaos[array];if(!vao)return 0;return GLctx["isVertexArray"](vao)}function _emscripten_glLineWidth(x0){GLctx["lineWidth"](x0)}function _emscripten_glLinkProgram(program){program=GL.programs[program];GLctx.linkProgram(program);program.uniformLocsById=0;program.uniformSizeAndIdsByName={}}function _emscripten_glPauseTransformFeedback(){GLctx["pauseTransformFeedback"]()}function _emscripten_glPixelStorei(pname,param){if(pname==3317){GL.unpackAlignment=param}GLctx.pixelStorei(pname,param)}function _emscripten_glPolygonOffset(x0,x1){GLctx["polygonOffset"](x0,x1)}function _emscripten_glProgramBinary(program,binaryFormat,binary,length){GL.recordError(1280)}function _emscripten_glProgramParameteri(program,pname,value){GL.recordError(1280)}function _emscripten_glQueryCounterEXT(id,target){GLctx.disjointTimerQueryExt["queryCounterEXT"](GL.queries[id],target)}function _emscripten_glReadBuffer(x0){GLctx["readBuffer"](x0)}function computeUnpackAlignedImageSize(width,height,sizePerPixel,alignment){function roundedToNextMultipleOf(x,y){return x+y-1&-y}var plainRowSize=width*sizePerPixel;var alignedRowSize=roundedToNextMultipleOf(plainRowSize,alignment);return height*alignedRowSize}function __colorChannelsInGlTextureFormat(format){var colorChannels={5:3,6:4,8:2,29502:3,29504:4,26917:2,26918:2,29846:3,29847:4};return colorChannels[format-6402]||1}function heapObjectForWebGLType(type){type-=5120;if(type==0)return HEAP8;if(type==1)return HEAPU8;if(type==2)return HEAP16;if(type==4)return HEAP32;if(type==6)return 
HEAPF32;if(type==5||type==28922||type==28520||type==30779||type==30782)return HEAPU32;return HEAPU16}function heapAccessShiftForWebGLHeap(heap){return 31-Math.clz32(heap.BYTES_PER_ELEMENT)}function emscriptenWebGLGetTexPixelData(type,format,width,height,pixels,internalFormat){var heap=heapObjectForWebGLType(type);var shift=heapAccessShiftForWebGLHeap(heap);var byteSize=1<>shift,pixels+bytes>>shift)}function _emscripten_glReadPixels(x,y,width,height,format,type,pixels){if(GL.currentContext.version>=2){if(GLctx.currentPixelPackBufferBinding){GLctx.readPixels(x,y,width,height,format,type,pixels)}else{var heap=heapObjectForWebGLType(type);GLctx.readPixels(x,y,width,height,format,type,heap,pixels>>heapAccessShiftForWebGLHeap(heap))}return}var pixelData=emscriptenWebGLGetTexPixelData(type,format,width,height,pixels,format);if(!pixelData){GL.recordError(1280);return}GLctx.readPixels(x,y,width,height,format,type,pixelData)}function _emscripten_glReleaseShaderCompiler(){}function _emscripten_glRenderbufferStorage(x0,x1,x2,x3){GLctx["renderbufferStorage"](x0,x1,x2,x3)}function _emscripten_glRenderbufferStorageMultisample(x0,x1,x2,x3,x4){GLctx["renderbufferStorageMultisample"](x0,x1,x2,x3,x4)}function _emscripten_glResumeTransformFeedback(){GLctx["resumeTransformFeedback"]()}function _emscripten_glSampleCoverage(value,invert){GLctx.sampleCoverage(value,!!invert)}function _emscripten_glSamplerParameterf(sampler,pname,param){GLctx["samplerParameterf"](GL.samplers[sampler],pname,param)}function _emscripten_glSamplerParameterfv(sampler,pname,params){var param=HEAPF32[params>>2];GLctx["samplerParameterf"](GL.samplers[sampler],pname,param)}function _emscripten_glSamplerParameteri(sampler,pname,param){GLctx["samplerParameteri"](GL.samplers[sampler],pname,param)}function _emscripten_glSamplerParameteriv(sampler,pname,params){var param=HEAP32[params>>2];GLctx["samplerParameteri"](GL.samplers[sampler],pname,param)}function _emscripten_glScissor(x0,x1,x2,x3){GLctx["scissor"](x0,x1,x2,x3)}function _emscripten_glShaderBinary(){GL.recordError(1280)}function _emscripten_glShaderSource(shader,count,string,length){var source=GL.getSource(shader,count,string,length);GLctx.shaderSource(GL.shaders[shader],source)}function _emscripten_glStencilFunc(x0,x1,x2){GLctx["stencilFunc"](x0,x1,x2)}function _emscripten_glStencilFuncSeparate(x0,x1,x2,x3){GLctx["stencilFuncSeparate"](x0,x1,x2,x3)}function _emscripten_glStencilMask(x0){GLctx["stencilMask"](x0)}function _emscripten_glStencilMaskSeparate(x0,x1){GLctx["stencilMaskSeparate"](x0,x1)}function _emscripten_glStencilOp(x0,x1,x2){GLctx["stencilOp"](x0,x1,x2)}function _emscripten_glStencilOpSeparate(x0,x1,x2,x3){GLctx["stencilOpSeparate"](x0,x1,x2,x3)}function _emscripten_glTexImage2D(target,level,internalFormat,width,height,border,format,type,pixels){if(GL.currentContext.version>=2){if(GLctx.currentPixelUnpackBufferBinding){GLctx.texImage2D(target,level,internalFormat,width,height,border,format,type,pixels)}else if(pixels){var heap=heapObjectForWebGLType(type);GLctx.texImage2D(target,level,internalFormat,width,height,border,format,type,heap,pixels>>heapAccessShiftForWebGLHeap(heap))}else{GLctx.texImage2D(target,level,internalFormat,width,height,border,format,type,null)}return}GLctx.texImage2D(target,level,internalFormat,width,height,border,format,type,pixels?emscriptenWebGLGetTexPixelData(type,format,width,height,pixels,internalFormat):null)}function 
_emscripten_glTexImage3D(target,level,internalFormat,width,height,depth,border,format,type,pixels){if(GLctx.currentPixelUnpackBufferBinding){GLctx["texImage3D"](target,level,internalFormat,width,height,depth,border,format,type,pixels)}else if(pixels){var heap=heapObjectForWebGLType(type);GLctx["texImage3D"](target,level,internalFormat,width,height,depth,border,format,type,heap,pixels>>heapAccessShiftForWebGLHeap(heap))}else{GLctx["texImage3D"](target,level,internalFormat,width,height,depth,border,format,type,null)}}function _emscripten_glTexParameterf(x0,x1,x2){GLctx["texParameterf"](x0,x1,x2)}function _emscripten_glTexParameterfv(target,pname,params){var param=HEAPF32[params>>2];GLctx.texParameterf(target,pname,param)}function _emscripten_glTexParameteri(x0,x1,x2){GLctx["texParameteri"](x0,x1,x2)}function _emscripten_glTexParameteriv(target,pname,params){var param=HEAP32[params>>2];GLctx.texParameteri(target,pname,param)}function _emscripten_glTexStorage2D(x0,x1,x2,x3,x4){GLctx["texStorage2D"](x0,x1,x2,x3,x4)}function _emscripten_glTexStorage3D(x0,x1,x2,x3,x4,x5){GLctx["texStorage3D"](x0,x1,x2,x3,x4,x5)}function _emscripten_glTexSubImage2D(target,level,xoffset,yoffset,width,height,format,type,pixels){if(GL.currentContext.version>=2){if(GLctx.currentPixelUnpackBufferBinding){GLctx.texSubImage2D(target,level,xoffset,yoffset,width,height,format,type,pixels)}else if(pixels){var heap=heapObjectForWebGLType(type);GLctx.texSubImage2D(target,level,xoffset,yoffset,width,height,format,type,heap,pixels>>heapAccessShiftForWebGLHeap(heap))}else{GLctx.texSubImage2D(target,level,xoffset,yoffset,width,height,format,type,null)}return}var pixelData=null;if(pixels)pixelData=emscriptenWebGLGetTexPixelData(type,format,width,height,pixels,0);GLctx.texSubImage2D(target,level,xoffset,yoffset,width,height,format,type,pixelData)}function _emscripten_glTexSubImage3D(target,level,xoffset,yoffset,zoffset,width,height,depth,format,type,pixels){if(GLctx.currentPixelUnpackBufferBinding){GLctx["texSubImage3D"](target,level,xoffset,yoffset,zoffset,width,height,depth,format,type,pixels)}else if(pixels){var heap=heapObjectForWebGLType(type);GLctx["texSubImage3D"](target,level,xoffset,yoffset,zoffset,width,height,depth,format,type,heap,pixels>>heapAccessShiftForWebGLHeap(heap))}else{GLctx["texSubImage3D"](target,level,xoffset,yoffset,zoffset,width,height,depth,format,type,null)}}function _emscripten_glTransformFeedbackVaryings(program,count,varyings,bufferMode){program=GL.programs[program];var vars=[];for(var i=0;i>2]));GLctx["transformFeedbackVaryings"](program,vars,bufferMode)}function _emscripten_glUniform1f(location,v0){GLctx.uniform1f(webglGetUniformLocation(location),v0)}var miniTempWebGLFloatBuffers=[];function _emscripten_glUniform1fv(location,count,value){if(GL.currentContext.version>=2){count&&GLctx.uniform1fv(webglGetUniformLocation(location),HEAPF32,value>>2,count);return}if(count<=288){var view=miniTempWebGLFloatBuffers[count-1];for(var i=0;i>2]}}else{var view=HEAPF32.subarray(value>>2,value+count*4>>2)}GLctx.uniform1fv(webglGetUniformLocation(location),view)}function _emscripten_glUniform1i(location,v0){GLctx.uniform1i(webglGetUniformLocation(location),v0)}var __miniTempWebGLIntBuffers=[];function _emscripten_glUniform1iv(location,count,value){if(GL.currentContext.version>=2){count&&GLctx.uniform1iv(webglGetUniformLocation(location),HEAP32,value>>2,count);return}if(count<=288){var view=__miniTempWebGLIntBuffers[count-1];for(var i=0;i>2]}}else{var 
view=HEAP32.subarray(value>>2,value+count*4>>2)}GLctx.uniform1iv(webglGetUniformLocation(location),view)}function _emscripten_glUniform1ui(location,v0){GLctx.uniform1ui(webglGetUniformLocation(location),v0)}function _emscripten_glUniform1uiv(location,count,value){count&&GLctx.uniform1uiv(webglGetUniformLocation(location),HEAPU32,value>>2,count)}function _emscripten_glUniform2f(location,v0,v1){GLctx.uniform2f(webglGetUniformLocation(location),v0,v1)}function _emscripten_glUniform2fv(location,count,value){if(GL.currentContext.version>=2){count&&GLctx.uniform2fv(webglGetUniformLocation(location),HEAPF32,value>>2,count*2);return}if(count<=144){var view=miniTempWebGLFloatBuffers[2*count-1];for(var i=0;i<2*count;i+=2){view[i]=HEAPF32[value+4*i>>2];view[i+1]=HEAPF32[value+(4*i+4)>>2]}}else{var view=HEAPF32.subarray(value>>2,value+count*8>>2)}GLctx.uniform2fv(webglGetUniformLocation(location),view)}function _emscripten_glUniform2i(location,v0,v1){GLctx.uniform2i(webglGetUniformLocation(location),v0,v1)}function _emscripten_glUniform2iv(location,count,value){if(GL.currentContext.version>=2){count&&GLctx.uniform2iv(webglGetUniformLocation(location),HEAP32,value>>2,count*2);return}if(count<=144){var view=__miniTempWebGLIntBuffers[2*count-1];for(var i=0;i<2*count;i+=2){view[i]=HEAP32[value+4*i>>2];view[i+1]=HEAP32[value+(4*i+4)>>2]}}else{var view=HEAP32.subarray(value>>2,value+count*8>>2)}GLctx.uniform2iv(webglGetUniformLocation(location),view)}function _emscripten_glUniform2ui(location,v0,v1){GLctx.uniform2ui(webglGetUniformLocation(location),v0,v1)}function _emscripten_glUniform2uiv(location,count,value){count&&GLctx.uniform2uiv(webglGetUniformLocation(location),HEAPU32,value>>2,count*2)}function _emscripten_glUniform3f(location,v0,v1,v2){GLctx.uniform3f(webglGetUniformLocation(location),v0,v1,v2)}function _emscripten_glUniform3fv(location,count,value){if(GL.currentContext.version>=2){count&&GLctx.uniform3fv(webglGetUniformLocation(location),HEAPF32,value>>2,count*3);return}if(count<=96){var view=miniTempWebGLFloatBuffers[3*count-1];for(var i=0;i<3*count;i+=3){view[i]=HEAPF32[value+4*i>>2];view[i+1]=HEAPF32[value+(4*i+4)>>2];view[i+2]=HEAPF32[value+(4*i+8)>>2]}}else{var view=HEAPF32.subarray(value>>2,value+count*12>>2)}GLctx.uniform3fv(webglGetUniformLocation(location),view)}function _emscripten_glUniform3i(location,v0,v1,v2){GLctx.uniform3i(webglGetUniformLocation(location),v0,v1,v2)}function _emscripten_glUniform3iv(location,count,value){if(GL.currentContext.version>=2){count&&GLctx.uniform3iv(webglGetUniformLocation(location),HEAP32,value>>2,count*3);return}if(count<=96){var view=__miniTempWebGLIntBuffers[3*count-1];for(var i=0;i<3*count;i+=3){view[i]=HEAP32[value+4*i>>2];view[i+1]=HEAP32[value+(4*i+4)>>2];view[i+2]=HEAP32[value+(4*i+8)>>2]}}else{var view=HEAP32.subarray(value>>2,value+count*12>>2)}GLctx.uniform3iv(webglGetUniformLocation(location),view)}function _emscripten_glUniform3ui(location,v0,v1,v2){GLctx.uniform3ui(webglGetUniformLocation(location),v0,v1,v2)}function _emscripten_glUniform3uiv(location,count,value){count&&GLctx.uniform3uiv(webglGetUniformLocation(location),HEAPU32,value>>2,count*3)}function _emscripten_glUniform4f(location,v0,v1,v2,v3){GLctx.uniform4f(webglGetUniformLocation(location),v0,v1,v2,v3)}function _emscripten_glUniform4fv(location,count,value){if(GL.currentContext.version>=2){count&&GLctx.uniform4fv(webglGetUniformLocation(location),HEAPF32,value>>2,count*4);return}if(count<=72){var view=miniTempWebGLFloatBuffers[4*count-1];var heap=HEAPF32;value>>=2;for(var 
i=0;i<4*count;i+=4){var dst=value+i;view[i]=heap[dst];view[i+1]=heap[dst+1];view[i+2]=heap[dst+2];view[i+3]=heap[dst+3]}}else{var view=HEAPF32.subarray(value>>2,value+count*16>>2)}GLctx.uniform4fv(webglGetUniformLocation(location),view)}function _emscripten_glUniform4i(location,v0,v1,v2,v3){GLctx.uniform4i(webglGetUniformLocation(location),v0,v1,v2,v3)}function _emscripten_glUniform4iv(location,count,value){if(GL.currentContext.version>=2){count&&GLctx.uniform4iv(webglGetUniformLocation(location),HEAP32,value>>2,count*4);return}if(count<=72){var view=__miniTempWebGLIntBuffers[4*count-1];for(var i=0;i<4*count;i+=4){view[i]=HEAP32[value+4*i>>2];view[i+1]=HEAP32[value+(4*i+4)>>2];view[i+2]=HEAP32[value+(4*i+8)>>2];view[i+3]=HEAP32[value+(4*i+12)>>2]}}else{var view=HEAP32.subarray(value>>2,value+count*16>>2)}GLctx.uniform4iv(webglGetUniformLocation(location),view)}function _emscripten_glUniform4ui(location,v0,v1,v2,v3){GLctx.uniform4ui(webglGetUniformLocation(location),v0,v1,v2,v3)}function _emscripten_glUniform4uiv(location,count,value){count&&GLctx.uniform4uiv(webglGetUniformLocation(location),HEAPU32,value>>2,count*4)}function _emscripten_glUniformBlockBinding(program,uniformBlockIndex,uniformBlockBinding){program=GL.programs[program];GLctx["uniformBlockBinding"](program,uniformBlockIndex,uniformBlockBinding)}function _emscripten_glUniformMatrix2fv(location,count,transpose,value){if(GL.currentContext.version>=2){count&&GLctx.uniformMatrix2fv(webglGetUniformLocation(location),!!transpose,HEAPF32,value>>2,count*4);return}if(count<=72){var view=miniTempWebGLFloatBuffers[4*count-1];for(var i=0;i<4*count;i+=4){view[i]=HEAPF32[value+4*i>>2];view[i+1]=HEAPF32[value+(4*i+4)>>2];view[i+2]=HEAPF32[value+(4*i+8)>>2];view[i+3]=HEAPF32[value+(4*i+12)>>2]}}else{var view=HEAPF32.subarray(value>>2,value+count*16>>2)}GLctx.uniformMatrix2fv(webglGetUniformLocation(location),!!transpose,view)}function _emscripten_glUniformMatrix2x3fv(location,count,transpose,value){count&&GLctx.uniformMatrix2x3fv(webglGetUniformLocation(location),!!transpose,HEAPF32,value>>2,count*6)}function _emscripten_glUniformMatrix2x4fv(location,count,transpose,value){count&&GLctx.uniformMatrix2x4fv(webglGetUniformLocation(location),!!transpose,HEAPF32,value>>2,count*8)}function _emscripten_glUniformMatrix3fv(location,count,transpose,value){if(GL.currentContext.version>=2){count&&GLctx.uniformMatrix3fv(webglGetUniformLocation(location),!!transpose,HEAPF32,value>>2,count*9);return}if(count<=32){var view=miniTempWebGLFloatBuffers[9*count-1];for(var i=0;i<9*count;i+=9){view[i]=HEAPF32[value+4*i>>2];view[i+1]=HEAPF32[value+(4*i+4)>>2];view[i+2]=HEAPF32[value+(4*i+8)>>2];view[i+3]=HEAPF32[value+(4*i+12)>>2];view[i+4]=HEAPF32[value+(4*i+16)>>2];view[i+5]=HEAPF32[value+(4*i+20)>>2];view[i+6]=HEAPF32[value+(4*i+24)>>2];view[i+7]=HEAPF32[value+(4*i+28)>>2];view[i+8]=HEAPF32[value+(4*i+32)>>2]}}else{var view=HEAPF32.subarray(value>>2,value+count*36>>2)}GLctx.uniformMatrix3fv(webglGetUniformLocation(location),!!transpose,view)}function _emscripten_glUniformMatrix3x2fv(location,count,transpose,value){count&&GLctx.uniformMatrix3x2fv(webglGetUniformLocation(location),!!transpose,HEAPF32,value>>2,count*6)}function _emscripten_glUniformMatrix3x4fv(location,count,transpose,value){count&&GLctx.uniformMatrix3x4fv(webglGetUniformLocation(location),!!transpose,HEAPF32,value>>2,count*12)}function 
_emscripten_glUniformMatrix4fv(location,count,transpose,value){if(GL.currentContext.version>=2){count&&GLctx.uniformMatrix4fv(webglGetUniformLocation(location),!!transpose,HEAPF32,value>>2,count*16);return}if(count<=18){var view=miniTempWebGLFloatBuffers[16*count-1];var heap=HEAPF32;value>>=2;for(var i=0;i<16*count;i+=16){var dst=value+i;view[i]=heap[dst];view[i+1]=heap[dst+1];view[i+2]=heap[dst+2];view[i+3]=heap[dst+3];view[i+4]=heap[dst+4];view[i+5]=heap[dst+5];view[i+6]=heap[dst+6];view[i+7]=heap[dst+7];view[i+8]=heap[dst+8];view[i+9]=heap[dst+9];view[i+10]=heap[dst+10];view[i+11]=heap[dst+11];view[i+12]=heap[dst+12];view[i+13]=heap[dst+13];view[i+14]=heap[dst+14];view[i+15]=heap[dst+15]}}else{var view=HEAPF32.subarray(value>>2,value+count*64>>2)}GLctx.uniformMatrix4fv(webglGetUniformLocation(location),!!transpose,view)}function _emscripten_glUniformMatrix4x2fv(location,count,transpose,value){count&&GLctx.uniformMatrix4x2fv(webglGetUniformLocation(location),!!transpose,HEAPF32,value>>2,count*8)}function _emscripten_glUniformMatrix4x3fv(location,count,transpose,value){count&&GLctx.uniformMatrix4x3fv(webglGetUniformLocation(location),!!transpose,HEAPF32,value>>2,count*12)}function _emscripten_glUseProgram(program){program=GL.programs[program];GLctx.useProgram(program);GLctx.currentProgram=program}function _emscripten_glValidateProgram(program){GLctx.validateProgram(GL.programs[program])}function _emscripten_glVertexAttrib1f(x0,x1){GLctx["vertexAttrib1f"](x0,x1)}function _emscripten_glVertexAttrib1fv(index,v){GLctx.vertexAttrib1f(index,HEAPF32[v>>2])}function _emscripten_glVertexAttrib2f(x0,x1,x2){GLctx["vertexAttrib2f"](x0,x1,x2)}function _emscripten_glVertexAttrib2fv(index,v){GLctx.vertexAttrib2f(index,HEAPF32[v>>2],HEAPF32[v+4>>2])}function _emscripten_glVertexAttrib3f(x0,x1,x2,x3){GLctx["vertexAttrib3f"](x0,x1,x2,x3)}function _emscripten_glVertexAttrib3fv(index,v){GLctx.vertexAttrib3f(index,HEAPF32[v>>2],HEAPF32[v+4>>2],HEAPF32[v+8>>2])}function _emscripten_glVertexAttrib4f(x0,x1,x2,x3,x4){GLctx["vertexAttrib4f"](x0,x1,x2,x3,x4)}function _emscripten_glVertexAttrib4fv(index,v){GLctx.vertexAttrib4f(index,HEAPF32[v>>2],HEAPF32[v+4>>2],HEAPF32[v+8>>2],HEAPF32[v+12>>2])}function _emscripten_glVertexAttribDivisor(index,divisor){GLctx["vertexAttribDivisor"](index,divisor)}function _emscripten_glVertexAttribDivisorANGLE(index,divisor){GLctx["vertexAttribDivisor"](index,divisor)}function _emscripten_glVertexAttribDivisorARB(index,divisor){GLctx["vertexAttribDivisor"](index,divisor)}function _emscripten_glVertexAttribDivisorEXT(index,divisor){GLctx["vertexAttribDivisor"](index,divisor)}function _emscripten_glVertexAttribDivisorNV(index,divisor){GLctx["vertexAttribDivisor"](index,divisor)}function _emscripten_glVertexAttribI4i(x0,x1,x2,x3,x4){GLctx["vertexAttribI4i"](x0,x1,x2,x3,x4)}function _emscripten_glVertexAttribI4iv(index,v){GLctx.vertexAttribI4i(index,HEAP32[v>>2],HEAP32[v+4>>2],HEAP32[v+8>>2],HEAP32[v+12>>2])}function _emscripten_glVertexAttribI4ui(x0,x1,x2,x3,x4){GLctx["vertexAttribI4ui"](x0,x1,x2,x3,x4)}function _emscripten_glVertexAttribI4uiv(index,v){GLctx.vertexAttribI4ui(index,HEAPU32[v>>2],HEAPU32[v+4>>2],HEAPU32[v+8>>2],HEAPU32[v+12>>2])}function _emscripten_glVertexAttribIPointer(index,size,type,stride,ptr){GLctx["vertexAttribIPointer"](index,size,type,stride,ptr)}function _emscripten_glVertexAttribPointer(index,size,type,normalized,stride,ptr){GLctx.vertexAttribPointer(index,size,type,!!normalized,stride,ptr)}function 
_emscripten_glViewport(x0,x1,x2,x3){GLctx["viewport"](x0,x1,x2,x3)}function _emscripten_glWaitSync(sync,flags,timeoutLo,timeoutHi){GLctx.waitSync(GL.syncs[sync],flags,convertI32PairToI53(timeoutLo,timeoutHi))}function _emscripten_memcpy_big(dest,src,num){HEAPU8.copyWithin(dest,src,src+num)}function getHeapMax(){return 2147483648}function emscripten_realloc_buffer(size){try{wasmMemory.grow(size-buffer.byteLength+65535>>>16);updateGlobalBufferAndViews(wasmMemory.buffer);return 1}catch(e){}}function _emscripten_resize_heap(requestedSize){var oldSize=HEAPU8.length;requestedSize=requestedSize>>>0;var maxHeapSize=getHeapMax();if(requestedSize>maxHeapSize){return false}let alignUp=(x,multiple)=>x+(multiple-x%multiple)%multiple;for(var cutDown=1;cutDown<=4;cutDown*=2){var overGrownHeapSize=oldSize*(1+.2/cutDown);overGrownHeapSize=Math.min(overGrownHeapSize,requestedSize+100663296);var newSize=Math.min(maxHeapSize,alignUp(Math.max(requestedSize,overGrownHeapSize),65536));var replacement=emscripten_realloc_buffer(newSize);if(replacement){return true}}return false}function _emscripten_set_main_loop(func,fps,simulateInfiniteLoop){var browserIterationFunc=getWasmTableEntry(func);setMainLoop(browserIterationFunc,fps,simulateInfiniteLoop)}var JSEvents={inEventHandler:0,removeAllEventListeners:function(){for(var i=JSEvents.eventHandlers.length-1;i>=0;--i){JSEvents._removeHandler(i)}JSEvents.eventHandlers=[];JSEvents.deferredCalls=[]},registerRemoveEventListeners:function(){if(!JSEvents.removeEventListenersRegistered){__ATEXIT__.push(JSEvents.removeAllEventListeners);JSEvents.removeEventListenersRegistered=true}},deferredCalls:[],deferCall:function(targetFunction,precedence,argsList){function arraysHaveEqualContent(arrA,arrB){if(arrA.length!=arrB.length)return false;for(var i in arrA){if(arrA[i]!=arrB[i])return false}return true}for(var i in JSEvents.deferredCalls){var call=JSEvents.deferredCalls[i];if(call.targetFunction==targetFunction&&arraysHaveEqualContent(call.argsList,argsList)){return}}JSEvents.deferredCalls.push({targetFunction:targetFunction,precedence:precedence,argsList:argsList});JSEvents.deferredCalls.sort(function(x,y){return x.precedence2?UTF8ToString(cString):cString}var specialHTMLTargets=[0,typeof document!="undefined"?document:0,typeof window!="undefined"?window:0];function findEventTarget(target){target=maybeCStringToJsString(target);var domElement=specialHTMLTargets[target]||(typeof document!="undefined"?document.querySelector(target):undefined);return domElement}function findCanvasEventTarget(target){return findEventTarget(target)}function _emscripten_webgl_do_create_context(target,attributes){var a=attributes>>2;var powerPreference=HEAP32[a+(24>>2)];var contextAttributes={"alpha":!!HEAP32[a+(0>>2)],"depth":!!HEAP32[a+(4>>2)],"stencil":!!HEAP32[a+(8>>2)],"antialias":!!HEAP32[a+(12>>2)],"premultipliedAlpha":!!HEAP32[a+(16>>2)],"preserveDrawingBuffer":!!HEAP32[a+(20>>2)],"powerPreference":__emscripten_webgl_power_preferences[powerPreference],"failIfMajorPerformanceCaveat":!!HEAP32[a+(28>>2)],majorVersion:HEAP32[a+(32>>2)],minorVersion:HEAP32[a+(36>>2)],enableExtensionsByDefault:HEAP32[a+(40>>2)],explicitSwapControl:HEAP32[a+(44>>2)],proxyContextToMainThread:HEAP32[a+(48>>2)],renderViaOffscreenBackBuffer:HEAP32[a+(52>>2)]};var canvas=findCanvasEventTarget(target);if(!canvas){return 0}if(contextAttributes.explicitSwapControl&&!contextAttributes.renderViaOffscreenBackBuffer){contextAttributes.renderViaOffscreenBackBuffer=true}var 
contextHandle=GL.createContext(canvas,contextAttributes);return contextHandle}function _emscripten_webgl_create_context(a0,a1){return _emscripten_webgl_do_create_context(a0,a1)}function _emscripten_webgl_destroy_context(contextHandle){if(GL.currentContext==contextHandle)GL.currentContext=0;GL.deleteContext(contextHandle)}function _emscripten_webgl_init_context_attributes(attributes){var a=attributes>>2;for(var i=0;i<56>>2;++i){HEAP32[a+i]=0}HEAP32[a+(0>>2)]=HEAP32[a+(4>>2)]=HEAP32[a+(12>>2)]=HEAP32[a+(16>>2)]=HEAP32[a+(32>>2)]=HEAP32[a+(40>>2)]=1}function _emscripten_webgl_make_context_current(contextHandle){var success=GL.makeContextCurrent(contextHandle);return success?0:-5}var ENV={};function getExecutableName(){return thisProgram||"./this.program"}function getEnvStrings(){if(!getEnvStrings.strings){var lang=(typeof navigator=="object"&&navigator.languages&&navigator.languages[0]||"C").replace("-","_")+".UTF-8";var env={"USER":"web_user","LOGNAME":"web_user","PATH":"/","PWD":"/","HOME":"/home/web_user","LANG":lang,"_":getExecutableName()};for(var x in ENV){if(ENV[x]===undefined)delete env[x];else env[x]=ENV[x]}var strings=[];for(var x in env){strings.push(x+"="+env[x])}getEnvStrings.strings=strings}return getEnvStrings.strings}function _environ_get(__environ,environ_buf){var bufSize=0;getEnvStrings().forEach(function(string,i){var ptr=environ_buf+bufSize;HEAPU32[__environ+i*4>>2]=ptr;writeAsciiToMemory(string,ptr);bufSize+=string.length+1});return 0}function _environ_sizes_get(penviron_count,penviron_buf_size){var strings=getEnvStrings();HEAPU32[penviron_count>>2]=strings.length;var bufSize=0;strings.forEach(function(string){bufSize+=string.length+1});HEAPU32[penviron_buf_size>>2]=bufSize;return 0}function _fd_close(fd){try{var stream=SYSCALLS.getStreamFromFD(fd);FS.close(stream);return 0}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return e.errno}}function _fd_fdstat_get(fd,pbuf){try{var stream=SYSCALLS.getStreamFromFD(fd);var type=stream.tty?2:FS.isDir(stream.mode)?3:FS.isLink(stream.mode)?7:4;HEAP8[pbuf>>0]=type;return 0}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return e.errno}}function doReadv(stream,iov,iovcnt,offset){var ret=0;for(var i=0;i>2];var len=HEAPU32[iov+4>>2];iov+=8;var curr=FS.read(stream,HEAP8,ptr,len,offset);if(curr<0)return-1;ret+=curr;if(curr>2]=num;return 0}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return e.errno}}function convertI32PairToI53Checked(lo,hi){return hi+2097152>>>0<4194305-!!lo?(lo>>>0)+hi*4294967296:NaN}function _fd_seek(fd,offset_low,offset_high,whence,newOffset){try{var offset=convertI32PairToI53Checked(offset_low,offset_high);if(isNaN(offset))return 61;var stream=SYSCALLS.getStreamFromFD(fd);FS.llseek(stream,offset,whence);tempI64=[stream.position>>>0,(tempDouble=stream.position,+Math.abs(tempDouble)>=1?tempDouble>0?(Math.min(+Math.floor(tempDouble/4294967296),4294967295)|0)>>>0:~~+Math.ceil((tempDouble-+(~~tempDouble>>>0))/4294967296)>>>0:0)],HEAP32[newOffset>>2]=tempI64[0],HEAP32[newOffset+4>>2]=tempI64[1];if(stream.getdents&&offset===0&&whence===0)stream.getdents=null;return 0}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return e.errno}}function doWritev(stream,iov,iovcnt,offset){var ret=0;for(var i=0;i>2];var len=HEAPU32[iov+4>>2];iov+=8;var curr=FS.write(stream,HEAP8,ptr,len,offset);if(curr<0)return-1;ret+=curr}return ret}function _fd_write(fd,iov,iovcnt,pnum){try{var stream=SYSCALLS.getStreamFromFD(fd);var 
num=doWritev(stream,iov,iovcnt);HEAPU32[pnum>>2]=num;return 0}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return e.errno}}function _getTempRet0(){return getTempRet0()}function _getaddrinfo(node,service,hint,out){var addr=0;var port=0;var flags=0;var family=0;var type=0;var proto=0;var ai;function allocaddrinfo(family,type,proto,canon,addr,port){var sa,salen,ai;var errno;salen=family===10?28:16;addr=family===10?inetNtop6(addr):inetNtop4(addr);sa=_malloc(salen);errno=writeSockaddr(sa,family,addr,port);assert(!errno);ai=_malloc(32);HEAP32[ai+4>>2]=family;HEAP32[ai+8>>2]=type;HEAP32[ai+12>>2]=proto;HEAP32[ai+24>>2]=canon;HEAPU32[ai+20>>2]=sa;if(family===10){HEAP32[ai+16>>2]=28}else{HEAP32[ai+16>>2]=16}HEAP32[ai+28>>2]=0;return ai}if(hint){flags=HEAP32[hint>>2];family=HEAP32[hint+4>>2];type=HEAP32[hint+8>>2];proto=HEAP32[hint+12>>2]}if(type&&!proto){proto=type===2?17:6}if(!type&&proto){type=proto===17?2:1}if(proto===0){proto=6}if(type===0){type=1}if(!node&&!service){return-2}if(flags&~(1|2|4|1024|8|16|32)){return-1}if(hint!==0&&HEAP32[hint>>2]&2&&!node){return-1}if(flags&32){return-2}if(type!==0&&type!==1&&type!==2){return-7}if(family!==0&&family!==2&&family!==10){return-6}if(service){service=UTF8ToString(service);port=parseInt(service,10);if(isNaN(port)){if(flags&1024){return-2}return-8}}if(!node){if(family===0){family=2}if((flags&1)===0){if(family===2){addr=_htonl(2130706433)}else{addr=[0,0,0,1]}}ai=allocaddrinfo(family,type,proto,null,addr,port);HEAPU32[out>>2]=ai;return 0}node=UTF8ToString(node);addr=inetPton4(node);if(addr!==null){if(family===0||family===2){family=2}else if(family===10&&flags&8){addr=[0,0,_htonl(65535),addr];family=10}else{return-2}}else{addr=inetPton6(node);if(addr!==null){if(family===0||family===10){family=10}else{return-2}}}if(addr!=null){ai=allocaddrinfo(family,type,proto,node,addr,port);HEAPU32[out>>2]=ai;return 0}if(flags&4){return-2}node=DNS.lookup_name(node);addr=inetPton4(node);if(family===0){family=2}else if(family===10){addr=[0,0,_htonl(65535),addr]}ai=allocaddrinfo(family,type,proto,null,addr,port);HEAPU32[out>>2]=ai;return 0}function _getnameinfo(sa,salen,node,nodelen,serv,servlen,flags){var info=readSockaddr(sa,salen);if(info.errno){return-6}var port=info.port;var addr=info.addr;var overflowed=false;if(node&&nodelen){var lookup;if(flags&1||!(lookup=DNS.lookup_addr(addr))){if(flags&8){return-2}}else{addr=lookup}var numBytesWrittenExclNull=stringToUTF8(addr,node,nodelen);if(numBytesWrittenExclNull+1>=nodelen){overflowed=true}}if(serv&&servlen){port=""+port;var numBytesWrittenExclNull=stringToUTF8(port,serv,servlen);if(numBytesWrittenExclNull+1>=servlen){overflowed=true}}if(overflowed){return-12}return 0}function _glActiveTexture(x0){GLctx["activeTexture"](x0)}function _glAttachShader(program,shader){GLctx.attachShader(GL.programs[program],GL.shaders[shader])}function _glBeginTransformFeedback(x0){GLctx["beginTransformFeedback"](x0)}function _glBindAttribLocation(program,index,name){GLctx.bindAttribLocation(GL.programs[program],index,UTF8ToString(name))}function _glBindBuffer(target,buffer){if(target==35051){GLctx.currentPixelPackBufferBinding=buffer}else if(target==35052){GLctx.currentPixelUnpackBufferBinding=buffer}GLctx.bindBuffer(target,GL.buffers[buffer])}function _glBindBufferBase(target,index,buffer){GLctx["bindBufferBase"](target,index,GL.buffers[buffer])}function _glBindFramebuffer(target,framebuffer){GLctx.bindFramebuffer(target,framebuffer?GL.framebuffers[framebuffer]:GL.currentContext.defaultFbo)}function 
_glBindRenderbuffer(target,renderbuffer){GLctx.bindRenderbuffer(target,GL.renderbuffers[renderbuffer])}function _glBindTexture(target,texture){GLctx.bindTexture(target,GL.textures[texture])}function _glBindVertexArray(vao){GLctx["bindVertexArray"](GL.vaos[vao])}function _glBlendEquation(x0){GLctx["blendEquation"](x0)}function _glBlendFunc(x0,x1){GLctx["blendFunc"](x0,x1)}function _glBlendFuncSeparate(x0,x1,x2,x3){GLctx["blendFuncSeparate"](x0,x1,x2,x3)}function _glBlitFramebuffer(x0,x1,x2,x3,x4,x5,x6,x7,x8,x9){GLctx["blitFramebuffer"](x0,x1,x2,x3,x4,x5,x6,x7,x8,x9)}function _glBufferData(target,size,data,usage){if(GL.currentContext.version>=2){if(data&&size){GLctx.bufferData(target,HEAPU8,usage,data,size)}else{GLctx.bufferData(target,size,usage)}}else{GLctx.bufferData(target,data?HEAPU8.subarray(data,data+size):size,usage)}}function _glBufferSubData(target,offset,size,data){if(GL.currentContext.version>=2){size&&GLctx.bufferSubData(target,offset,HEAPU8,data,size);return}GLctx.bufferSubData(target,offset,HEAPU8.subarray(data,data+size))}function _glCheckFramebufferStatus(x0){return GLctx["checkFramebufferStatus"](x0)}function _glClear(x0){GLctx["clear"](x0)}function _glClearBufferfv(buffer,drawbuffer,value){GLctx["clearBufferfv"](buffer,drawbuffer,HEAPF32,value>>2)}function _glClearColor(x0,x1,x2,x3){GLctx["clearColor"](x0,x1,x2,x3)}function _glClearDepthf(x0){GLctx["clearDepth"](x0)}function _glColorMask(red,green,blue,alpha){GLctx.colorMask(!!red,!!green,!!blue,!!alpha)}function _glCompileShader(shader){GLctx.compileShader(GL.shaders[shader])}function _glCompressedTexImage2D(target,level,internalFormat,width,height,border,imageSize,data){if(GL.currentContext.version>=2){if(GLctx.currentPixelUnpackBufferBinding||!imageSize){GLctx["compressedTexImage2D"](target,level,internalFormat,width,height,border,imageSize,data)}else{GLctx["compressedTexImage2D"](target,level,internalFormat,width,height,border,HEAPU8,data,imageSize)}return}GLctx["compressedTexImage2D"](target,level,internalFormat,width,height,border,data?HEAPU8.subarray(data,data+imageSize):null)}function _glCompressedTexSubImage2D(target,level,xoffset,yoffset,width,height,format,imageSize,data){if(GL.currentContext.version>=2){if(GLctx.currentPixelUnpackBufferBinding||!imageSize){GLctx["compressedTexSubImage2D"](target,level,xoffset,yoffset,width,height,format,imageSize,data)}else{GLctx["compressedTexSubImage2D"](target,level,xoffset,yoffset,width,height,format,HEAPU8,data,imageSize)}return}GLctx["compressedTexSubImage2D"](target,level,xoffset,yoffset,width,height,format,data?HEAPU8.subarray(data,data+imageSize):null)}function _glCompressedTexSubImage3D(target,level,xoffset,yoffset,zoffset,width,height,depth,format,imageSize,data){if(GLctx.currentPixelUnpackBufferBinding){GLctx["compressedTexSubImage3D"](target,level,xoffset,yoffset,zoffset,width,height,depth,format,imageSize,data)}else{GLctx["compressedTexSubImage3D"](target,level,xoffset,yoffset,zoffset,width,height,depth,format,HEAPU8,data,imageSize)}}function _glCopyBufferSubData(x0,x1,x2,x3,x4){GLctx["copyBufferSubData"](x0,x1,x2,x3,x4)}function _glCopyTexSubImage2D(x0,x1,x2,x3,x4,x5,x6,x7){GLctx["copyTexSubImage2D"](x0,x1,x2,x3,x4,x5,x6,x7)}function _glCreateProgram(){var id=GL.getNewId(GL.programs);var program=GLctx.createProgram();program.name=id;program.maxUniformLength=program.maxAttributeLength=program.maxUniformBlockNameLength=0;program.uniformIdCounter=1;GL.programs[id]=program;return id}function _glCreateShader(shaderType){var 
id=GL.getNewId(GL.shaders);GL.shaders[id]=GLctx.createShader(shaderType);return id}function _glCullFace(x0){GLctx["cullFace"](x0)}function _glDeleteBuffers(n,buffers){for(var i=0;i>2];var buffer=GL.buffers[id];if(!buffer)continue;GLctx.deleteBuffer(buffer);buffer.name=0;GL.buffers[id]=null;if(id==GLctx.currentPixelPackBufferBinding)GLctx.currentPixelPackBufferBinding=0;if(id==GLctx.currentPixelUnpackBufferBinding)GLctx.currentPixelUnpackBufferBinding=0}}function _glDeleteFramebuffers(n,framebuffers){for(var i=0;i>2];var framebuffer=GL.framebuffers[id];if(!framebuffer)continue;GLctx.deleteFramebuffer(framebuffer);framebuffer.name=0;GL.framebuffers[id]=null}}function _glDeleteProgram(id){if(!id)return;var program=GL.programs[id];if(!program){GL.recordError(1281);return}GLctx.deleteProgram(program);program.name=0;GL.programs[id]=null}function _glDeleteRenderbuffers(n,renderbuffers){for(var i=0;i>2];var renderbuffer=GL.renderbuffers[id];if(!renderbuffer)continue;GLctx.deleteRenderbuffer(renderbuffer);renderbuffer.name=0;GL.renderbuffers[id]=null}}function _glDeleteShader(id){if(!id)return;var shader=GL.shaders[id];if(!shader){GL.recordError(1281);return}GLctx.deleteShader(shader);GL.shaders[id]=null}function _glDeleteTextures(n,textures){for(var i=0;i>2];var texture=GL.textures[id];if(!texture)continue;GLctx.deleteTexture(texture);texture.name=0;GL.textures[id]=null}}function _glDeleteVertexArrays(n,vaos){for(var i=0;i>2];GLctx["deleteVertexArray"](GL.vaos[id]);GL.vaos[id]=null}}function _glDepthFunc(x0){GLctx["depthFunc"](x0)}function _glDepthMask(flag){GLctx.depthMask(!!flag)}function _glDisable(x0){GLctx["disable"](x0)}function _glDisableVertexAttribArray(index){GLctx.disableVertexAttribArray(index)}function _glDrawArrays(mode,first,count){GLctx.drawArrays(mode,first,count)}function _glDrawArraysInstanced(mode,first,count,primcount){GLctx["drawArraysInstanced"](mode,first,count,primcount)}function _glDrawBuffers(n,bufs){var bufArray=tempFixedLengthArray[n];for(var i=0;i>2]}GLctx["drawBuffers"](bufArray)}function _glDrawElementsInstanced(mode,count,type,indices,primcount){GLctx["drawElementsInstanced"](mode,count,type,indices,primcount)}function _glEnable(x0){GLctx["enable"](x0)}function _glEnableVertexAttribArray(index){GLctx.enableVertexAttribArray(index)}function _glEndTransformFeedback(){GLctx["endTransformFeedback"]()}function _glFinish(){GLctx["finish"]()}function _glFramebufferRenderbuffer(target,attachment,renderbuffertarget,renderbuffer){GLctx.framebufferRenderbuffer(target,attachment,renderbuffertarget,GL.renderbuffers[renderbuffer])}function _glFramebufferTexture2D(target,attachment,textarget,texture,level){GLctx.framebufferTexture2D(target,attachment,textarget,GL.textures[texture],level)}function _glFramebufferTextureLayer(target,attachment,texture,level,layer){GLctx.framebufferTextureLayer(target,attachment,GL.textures[texture],level,layer)}function _glFrontFace(x0){GLctx["frontFace"](x0)}function _glGenBuffers(n,buffers){__glGenObject(n,buffers,"createBuffer",GL.buffers)}function _glGenFramebuffers(n,ids){__glGenObject(n,ids,"createFramebuffer",GL.framebuffers)}function _glGenRenderbuffers(n,renderbuffers){__glGenObject(n,renderbuffers,"createRenderbuffer",GL.renderbuffers)}function _glGenTextures(n,textures){__glGenObject(n,textures,"createTexture",GL.textures)}function _glGenVertexArrays(n,arrays){__glGenObject(n,arrays,"createVertexArray",GL.vaos)}function _glGenerateMipmap(x0){GLctx["generateMipmap"](x0)}function _glGetError(){var 
error=GLctx.getError()||GL.lastError;GL.lastError=0;return error}function _glGetFloatv(name_,p){emscriptenWebGLGet(name_,p,2)}function _glGetIntegerv(name_,p){emscriptenWebGLGet(name_,p,0)}function _glGetProgramBinary(program,bufSize,length,binaryFormat,binary){GL.recordError(1282)}function _glGetProgramInfoLog(program,maxLength,length,infoLog){var log=GLctx.getProgramInfoLog(GL.programs[program]);if(log===null)log="(unknown error)";var numBytesWrittenExclNull=maxLength>0&&infoLog?stringToUTF8(log,infoLog,maxLength):0;if(length)HEAP32[length>>2]=numBytesWrittenExclNull}function _glGetProgramiv(program,pname,p){if(!p){GL.recordError(1281);return}if(program>=GL.counter){GL.recordError(1281);return}program=GL.programs[program];if(pname==35716){var log=GLctx.getProgramInfoLog(program);if(log===null)log="(unknown error)";HEAP32[p>>2]=log.length+1}else if(pname==35719){if(!program.maxUniformLength){for(var i=0;i>2]=program.maxUniformLength}else if(pname==35722){if(!program.maxAttributeLength){for(var i=0;i>2]=program.maxAttributeLength}else if(pname==35381){if(!program.maxUniformBlockNameLength){for(var i=0;i>2]=program.maxUniformBlockNameLength}else{HEAP32[p>>2]=GLctx.getProgramParameter(program,pname)}}function _glGetShaderInfoLog(shader,maxLength,length,infoLog){var log=GLctx.getShaderInfoLog(GL.shaders[shader]);if(log===null)log="(unknown error)";var numBytesWrittenExclNull=maxLength>0&&infoLog?stringToUTF8(log,infoLog,maxLength):0;if(length)HEAP32[length>>2]=numBytesWrittenExclNull}function _glGetShaderSource(shader,bufSize,length,source){var result=GLctx.getShaderSource(GL.shaders[shader]);if(!result)return;var numBytesWrittenExclNull=bufSize>0&&source?stringToUTF8(result,source,bufSize):0;if(length)HEAP32[length>>2]=numBytesWrittenExclNull}function _glGetShaderiv(shader,pname,p){if(!p){GL.recordError(1281);return}if(pname==35716){var log=GLctx.getShaderInfoLog(GL.shaders[shader]);if(log===null)log="(unknown error)";var logLength=log?log.length+1:0;HEAP32[p>>2]=logLength}else if(pname==35720){var source=GLctx.getShaderSource(GL.shaders[shader]);var sourceLength=source?source.length+1:0;HEAP32[p>>2]=sourceLength}else{HEAP32[p>>2]=GLctx.getShaderParameter(GL.shaders[shader],pname)}}function _glGetString(name_){var ret=GL.stringCache[name_];if(!ret){switch(name_){case 7939:var exts=GLctx.getSupportedExtensions()||[];exts=exts.concat(exts.map(function(e){return"GL_"+e}));ret=stringToNewUTF8(exts.join(" "));break;case 7936:case 7937:case 37445:case 37446:var s=GLctx.getParameter(name_);if(!s){GL.recordError(1280)}ret=s&&stringToNewUTF8(s);break;case 7938:var glVersion=GLctx.getParameter(7938);if(GL.currentContext.version>=2)glVersion="OpenGL ES 3.0 ("+glVersion+")";else{glVersion="OpenGL ES 2.0 ("+glVersion+")"}ret=stringToNewUTF8(glVersion);break;case 35724:var glslVersion=GLctx.getParameter(35724);var ver_re=/^WebGL GLSL ES ([0-9]\.[0-9][0-9]?)(?:$| .*)/;var ver_num=glslVersion.match(ver_re);if(ver_num!==null){if(ver_num[1].length==3)ver_num[1]=ver_num[1]+"0";glslVersion="OpenGL ES GLSL ES "+ver_num[1]+" ("+glslVersion+")"}ret=stringToNewUTF8(glslVersion);break;default:GL.recordError(1280)}GL.stringCache[name_]=ret}return ret}function _glGetStringi(name,index){if(GL.currentContext.version<2){GL.recordError(1282);return 0}var stringiCache=GL.stringiCache[name];if(stringiCache){if(index<0||index>=stringiCache.length){GL.recordError(1281);return 0}return stringiCache[index]}switch(name){case 7939:var 
exts=GLctx.getSupportedExtensions()||[];exts=exts.concat(exts.map(function(e){return"GL_"+e}));exts=exts.map(function(e){return stringToNewUTF8(e)});stringiCache=GL.stringiCache[name]=exts;if(index<0||index>=stringiCache.length){GL.recordError(1281);return 0}return stringiCache[index];default:GL.recordError(1280);return 0}}function _glGetUniformBlockIndex(program,uniformBlockName){return GLctx["getUniformBlockIndex"](GL.programs[program],UTF8ToString(uniformBlockName))}function _glGetUniformLocation(program,name){name=UTF8ToString(name);if(program=GL.programs[program]){webglPrepareUniformLocationsBeforeFirstUse(program);var uniformLocsById=program.uniformLocsById;var arrayIndex=0;var uniformBaseName=name;var leftBrace=webglGetLeftBracePos(name);if(leftBrace>0){arrayIndex=jstoi_q(name.slice(leftBrace+1))>>>0;uniformBaseName=name.slice(0,leftBrace)}var sizeAndId=program.uniformSizeAndIdsByName[uniformBaseName];if(sizeAndId&&arrayIndex>2]}GLctx["invalidateFramebuffer"](target,list)}function _glLinkProgram(program){program=GL.programs[program];GLctx.linkProgram(program);program.uniformLocsById=0;program.uniformSizeAndIdsByName={}}function _glPixelStorei(pname,param){if(pname==3317){GL.unpackAlignment=param}GLctx.pixelStorei(pname,param)}function _glProgramBinary(program,binaryFormat,binary,length){GL.recordError(1280)}function _glProgramParameteri(program,pname,value){GL.recordError(1280)}function _glReadBuffer(x0){GLctx["readBuffer"](x0)}function _glReadPixels(x,y,width,height,format,type,pixels){if(GL.currentContext.version>=2){if(GLctx.currentPixelPackBufferBinding){GLctx.readPixels(x,y,width,height,format,type,pixels)}else{var heap=heapObjectForWebGLType(type);GLctx.readPixels(x,y,width,height,format,type,heap,pixels>>heapAccessShiftForWebGLHeap(heap))}return}var pixelData=emscriptenWebGLGetTexPixelData(type,format,width,height,pixels,format);if(!pixelData){GL.recordError(1280);return}GLctx.readPixels(x,y,width,height,format,type,pixelData)}function _glRenderbufferStorage(x0,x1,x2,x3){GLctx["renderbufferStorage"](x0,x1,x2,x3)}function _glRenderbufferStorageMultisample(x0,x1,x2,x3,x4){GLctx["renderbufferStorageMultisample"](x0,x1,x2,x3,x4)}function _glScissor(x0,x1,x2,x3){GLctx["scissor"](x0,x1,x2,x3)}function _glShaderSource(shader,count,string,length){var source=GL.getSource(shader,count,string,length);GLctx.shaderSource(GL.shaders[shader],source)}function _glTexImage2D(target,level,internalFormat,width,height,border,format,type,pixels){if(GL.currentContext.version>=2){if(GLctx.currentPixelUnpackBufferBinding){GLctx.texImage2D(target,level,internalFormat,width,height,border,format,type,pixels)}else if(pixels){var heap=heapObjectForWebGLType(type);GLctx.texImage2D(target,level,internalFormat,width,height,border,format,type,heap,pixels>>heapAccessShiftForWebGLHeap(heap))}else{GLctx.texImage2D(target,level,internalFormat,width,height,border,format,type,null)}return}GLctx.texImage2D(target,level,internalFormat,width,height,border,format,type,pixels?emscriptenWebGLGetTexPixelData(type,format,width,height,pixels,internalFormat):null)}function _glTexImage3D(target,level,internalFormat,width,height,depth,border,format,type,pixels){if(GLctx.currentPixelUnpackBufferBinding){GLctx["texImage3D"](target,level,internalFormat,width,height,depth,border,format,type,pixels)}else if(pixels){var 
heap=heapObjectForWebGLType(type);GLctx["texImage3D"](target,level,internalFormat,width,height,depth,border,format,type,heap,pixels>>heapAccessShiftForWebGLHeap(heap))}else{GLctx["texImage3D"](target,level,internalFormat,width,height,depth,border,format,type,null)}}function _glTexParameterf(x0,x1,x2){GLctx["texParameterf"](x0,x1,x2)}function _glTexParameteri(x0,x1,x2){GLctx["texParameteri"](x0,x1,x2)}function _glTexStorage2D(x0,x1,x2,x3,x4){GLctx["texStorage2D"](x0,x1,x2,x3,x4)}function _glTexSubImage2D(target,level,xoffset,yoffset,width,height,format,type,pixels){if(GL.currentContext.version>=2){if(GLctx.currentPixelUnpackBufferBinding){GLctx.texSubImage2D(target,level,xoffset,yoffset,width,height,format,type,pixels)}else if(pixels){var heap=heapObjectForWebGLType(type);GLctx.texSubImage2D(target,level,xoffset,yoffset,width,height,format,type,heap,pixels>>heapAccessShiftForWebGLHeap(heap))}else{GLctx.texSubImage2D(target,level,xoffset,yoffset,width,height,format,type,null)}return}var pixelData=null;if(pixels)pixelData=emscriptenWebGLGetTexPixelData(type,format,width,height,pixels,0);GLctx.texSubImage2D(target,level,xoffset,yoffset,width,height,format,type,pixelData)}function _glTexSubImage3D(target,level,xoffset,yoffset,zoffset,width,height,depth,format,type,pixels){if(GLctx.currentPixelUnpackBufferBinding){GLctx["texSubImage3D"](target,level,xoffset,yoffset,zoffset,width,height,depth,format,type,pixels)}else if(pixels){var heap=heapObjectForWebGLType(type);GLctx["texSubImage3D"](target,level,xoffset,yoffset,zoffset,width,height,depth,format,type,heap,pixels>>heapAccessShiftForWebGLHeap(heap))}else{GLctx["texSubImage3D"](target,level,xoffset,yoffset,zoffset,width,height,depth,format,type,null)}}function _glTransformFeedbackVaryings(program,count,varyings,bufferMode){program=GL.programs[program];var vars=[];for(var i=0;i>2]));GLctx["transformFeedbackVaryings"](program,vars,bufferMode)}function _glUniform1f(location,v0){GLctx.uniform1f(webglGetUniformLocation(location),v0)}function _glUniform1i(location,v0){GLctx.uniform1i(webglGetUniformLocation(location),v0)}function _glUniform1iv(location,count,value){if(GL.currentContext.version>=2){count&&GLctx.uniform1iv(webglGetUniformLocation(location),HEAP32,value>>2,count);return}if(count<=288){var view=__miniTempWebGLIntBuffers[count-1];for(var i=0;i>2]}}else{var view=HEAP32.subarray(value>>2,value+count*4>>2)}GLctx.uniform1iv(webglGetUniformLocation(location),view)}function _glUniform1ui(location,v0){GLctx.uniform1ui(webglGetUniformLocation(location),v0)}function _glUniform2f(location,v0,v1){GLctx.uniform2f(webglGetUniformLocation(location),v0,v1)}function _glUniform2fv(location,count,value){if(GL.currentContext.version>=2){count&&GLctx.uniform2fv(webglGetUniformLocation(location),HEAPF32,value>>2,count*2);return}if(count<=144){var view=miniTempWebGLFloatBuffers[2*count-1];for(var i=0;i<2*count;i+=2){view[i]=HEAPF32[value+4*i>>2];view[i+1]=HEAPF32[value+(4*i+4)>>2]}}else{var view=HEAPF32.subarray(value>>2,value+count*8>>2)}GLctx.uniform2fv(webglGetUniformLocation(location),view)}function _glUniform2i(location,v0,v1){GLctx.uniform2i(webglGetUniformLocation(location),v0,v1)}function _glUniform2iv(location,count,value){if(GL.currentContext.version>=2){count&&GLctx.uniform2iv(webglGetUniformLocation(location),HEAP32,value>>2,count*2);return}if(count<=144){var view=__miniTempWebGLIntBuffers[2*count-1];for(var i=0;i<2*count;i+=2){view[i]=HEAP32[value+4*i>>2];view[i+1]=HEAP32[value+(4*i+4)>>2]}}else{var 
view=HEAP32.subarray(value>>2,value+count*8>>2)}GLctx.uniform2iv(webglGetUniformLocation(location),view)}function _glUniform3f(location,v0,v1,v2){GLctx.uniform3f(webglGetUniformLocation(location),v0,v1,v2)}function _glUniform3fv(location,count,value){if(GL.currentContext.version>=2){count&&GLctx.uniform3fv(webglGetUniformLocation(location),HEAPF32,value>>2,count*3);return}if(count<=96){var view=miniTempWebGLFloatBuffers[3*count-1];for(var i=0;i<3*count;i+=3){view[i]=HEAPF32[value+4*i>>2];view[i+1]=HEAPF32[value+(4*i+4)>>2];view[i+2]=HEAPF32[value+(4*i+8)>>2]}}else{var view=HEAPF32.subarray(value>>2,value+count*12>>2)}GLctx.uniform3fv(webglGetUniformLocation(location),view)}function _glUniform3i(location,v0,v1,v2){GLctx.uniform3i(webglGetUniformLocation(location),v0,v1,v2)}function _glUniform4f(location,v0,v1,v2,v3){GLctx.uniform4f(webglGetUniformLocation(location),v0,v1,v2,v3)}function _glUniform4fv(location,count,value){if(GL.currentContext.version>=2){count&&GLctx.uniform4fv(webglGetUniformLocation(location),HEAPF32,value>>2,count*4);return}if(count<=72){var view=miniTempWebGLFloatBuffers[4*count-1];var heap=HEAPF32;value>>=2;for(var i=0;i<4*count;i+=4){var dst=value+i;view[i]=heap[dst];view[i+1]=heap[dst+1];view[i+2]=heap[dst+2];view[i+3]=heap[dst+3]}}else{var view=HEAPF32.subarray(value>>2,value+count*16>>2)}GLctx.uniform4fv(webglGetUniformLocation(location),view)}function _glUniform4i(location,v0,v1,v2,v3){GLctx.uniform4i(webglGetUniformLocation(location),v0,v1,v2,v3)}function _glUniformBlockBinding(program,uniformBlockIndex,uniformBlockBinding){program=GL.programs[program];GLctx["uniformBlockBinding"](program,uniformBlockIndex,uniformBlockBinding)}function _glUniformMatrix2fv(location,count,transpose,value){if(GL.currentContext.version>=2){count&&GLctx.uniformMatrix2fv(webglGetUniformLocation(location),!!transpose,HEAPF32,value>>2,count*4);return}if(count<=72){var view=miniTempWebGLFloatBuffers[4*count-1];for(var i=0;i<4*count;i+=4){view[i]=HEAPF32[value+4*i>>2];view[i+1]=HEAPF32[value+(4*i+4)>>2];view[i+2]=HEAPF32[value+(4*i+8)>>2];view[i+3]=HEAPF32[value+(4*i+12)>>2]}}else{var view=HEAPF32.subarray(value>>2,value+count*16>>2)}GLctx.uniformMatrix2fv(webglGetUniformLocation(location),!!transpose,view)}function _glUniformMatrix3fv(location,count,transpose,value){if(GL.currentContext.version>=2){count&&GLctx.uniformMatrix3fv(webglGetUniformLocation(location),!!transpose,HEAPF32,value>>2,count*9);return}if(count<=32){var view=miniTempWebGLFloatBuffers[9*count-1];for(var i=0;i<9*count;i+=9){view[i]=HEAPF32[value+4*i>>2];view[i+1]=HEAPF32[value+(4*i+4)>>2];view[i+2]=HEAPF32[value+(4*i+8)>>2];view[i+3]=HEAPF32[value+(4*i+12)>>2];view[i+4]=HEAPF32[value+(4*i+16)>>2];view[i+5]=HEAPF32[value+(4*i+20)>>2];view[i+6]=HEAPF32[value+(4*i+24)>>2];view[i+7]=HEAPF32[value+(4*i+28)>>2];view[i+8]=HEAPF32[value+(4*i+32)>>2]}}else{var view=HEAPF32.subarray(value>>2,value+count*36>>2)}GLctx.uniformMatrix3fv(webglGetUniformLocation(location),!!transpose,view)}function _glUniformMatrix4fv(location,count,transpose,value){if(GL.currentContext.version>=2){count&&GLctx.uniformMatrix4fv(webglGetUniformLocation(location),!!transpose,HEAPF32,value>>2,count*16);return}if(count<=18){var view=miniTempWebGLFloatBuffers[16*count-1];var heap=HEAPF32;value>>=2;for(var i=0;i<16*count;i+=16){var 
dst=value+i;view[i]=heap[dst];view[i+1]=heap[dst+1];view[i+2]=heap[dst+2];view[i+3]=heap[dst+3];view[i+4]=heap[dst+4];view[i+5]=heap[dst+5];view[i+6]=heap[dst+6];view[i+7]=heap[dst+7];view[i+8]=heap[dst+8];view[i+9]=heap[dst+9];view[i+10]=heap[dst+10];view[i+11]=heap[dst+11];view[i+12]=heap[dst+12];view[i+13]=heap[dst+13];view[i+14]=heap[dst+14];view[i+15]=heap[dst+15]}}else{var view=HEAPF32.subarray(value>>2,value+count*64>>2)}GLctx.uniformMatrix4fv(webglGetUniformLocation(location),!!transpose,view)}function _glUseProgram(program){program=GL.programs[program];GLctx.useProgram(program);GLctx.currentProgram=program}function _glVertexAttrib4f(x0,x1,x2,x3,x4){GLctx["vertexAttrib4f"](x0,x1,x2,x3,x4)}function _glVertexAttrib4fv(index,v){GLctx.vertexAttrib4f(index,HEAPF32[v>>2],HEAPF32[v+4>>2],HEAPF32[v+8>>2],HEAPF32[v+12>>2])}function _glVertexAttribDivisor(index,divisor){GLctx["vertexAttribDivisor"](index,divisor)}function _glVertexAttribI4ui(x0,x1,x2,x3,x4){GLctx["vertexAttribI4ui"](x0,x1,x2,x3,x4)}function _glVertexAttribIPointer(index,size,type,stride,ptr){GLctx["vertexAttribIPointer"](index,size,type,stride,ptr)}function _glVertexAttribPointer(index,size,type,normalized,stride,ptr){GLctx.vertexAttribPointer(index,size,type,!!normalized,stride,ptr)}function _glViewport(x0,x1,x2,x3){GLctx["viewport"](x0,x1,x2,x3)}var GodotRuntime={get_func:function(ptr){return wasmTable.get(ptr)},error:function(){err.apply(null,Array.from(arguments))},print:function(){out.apply(null,Array.from(arguments))},malloc:function(p_size){return _malloc(p_size)},free:function(p_ptr){_free(p_ptr)},getHeapValue:function(p_ptr,p_type){return getValue(p_ptr,p_type)},setHeapValue:function(p_ptr,p_value,p_type){setValue(p_ptr,p_value,p_type)},heapSub:function(p_heap,p_ptr,p_len){const bytes=p_heap.BYTES_PER_ELEMENT;return p_heap.subarray(p_ptr/bytes,p_ptr/bytes+p_len)},heapSlice:function(p_heap,p_ptr,p_len){const bytes=p_heap.BYTES_PER_ELEMENT;return p_heap.slice(p_ptr/bytes,p_ptr/bytes+p_len)},heapCopy:function(p_dst,p_src,p_ptr){const bytes=p_src.BYTES_PER_ELEMENT;return p_dst.set(p_src,p_ptr/bytes)},parseString:function(p_ptr){return UTF8ToString(p_ptr)},parseStringArray:function(p_ptr,p_size){const strings=[];const ptrs=GodotRuntime.heapSub(HEAP32,p_ptr,p_size);ptrs.forEach(function(ptr){strings.push(GodotRuntime.parseString(ptr))});return strings},strlen:function(p_str){return lengthBytesUTF8(p_str)},allocString:function(p_str){const length=GodotRuntime.strlen(p_str)+1;const c_str=GodotRuntime.malloc(length);stringToUTF8(p_str,c_str,length);return c_str},allocStringArray:function(p_strings){const size=p_strings.length;const c_ptr=GodotRuntime.malloc(size*4);for(let i=0;i>2)+i]=GodotRuntime.allocString(p_strings[i])}return c_ptr},freeStringArray:function(p_ptr,p_len){for(let i=0;i>2)+i])}GodotRuntime.free(p_ptr)},stringToHeap:function(p_str,p_ptr,p_len){return stringToUTF8Array(p_str,HEAP8,p_ptr,p_len)}};var GodotConfig={canvas:null,locale:"en",canvas_resize_policy:2,virtual_keyboard:false,persistent_drops:false,on_execute:null,on_exit:null,init_config:function(p_opts){GodotConfig.canvas_resize_policy=p_opts["canvasResizePolicy"];GodotConfig.canvas=p_opts["canvas"];GodotConfig.locale=p_opts["locale"]||GodotConfig.locale;GodotConfig.virtual_keyboard=p_opts["virtualKeyboard"];GodotConfig.persistent_drops=!!p_opts["persistentDrops"];GodotConfig.on_execute=p_opts["onExecute"];GodotConfig.on_exit=p_opts["onExit"];if(p_opts["focusCanvas"]){GodotConfig.canvas.focus()}},locate_file:function(file){return 
Module["locateFile"](file)},clear:function(){GodotConfig.canvas=null;GodotConfig.locale="en";GodotConfig.canvas_resize_policy=2;GodotConfig.virtual_keyboard=false;GodotConfig.persistent_drops=false;GodotConfig.on_execute=null;GodotConfig.on_exit=null}};var ERRNO_CODES={};var GodotFS={_idbfs:false,_syncing:false,_mount_points:[],is_persistent:function(){return GodotFS._idbfs?1:0},init:function(persistentPaths){GodotFS._idbfs=false;if(!Array.isArray(persistentPaths)){return Promise.reject(new Error("Persistent paths must be an array"))}if(!persistentPaths.length){return Promise.resolve()}GodotFS._mount_points=persistentPaths.slice();function createRecursive(dir){try{FS.stat(dir)}catch(e){if(e.errno!==ERRNO_CODES.ENOENT){throw e}FS.mkdirTree(dir)}}GodotFS._mount_points.forEach(function(path){createRecursive(path);FS.mount(IDBFS,{},path)});return new Promise(function(resolve,reject){FS.syncfs(true,function(err){if(err){GodotFS._mount_points=[];GodotFS._idbfs=false;GodotRuntime.print(`IndexedDB not available: ${err.message}`)}else{GodotFS._idbfs=true}resolve(err)})})},deinit:function(){GodotFS._mount_points.forEach(function(path){try{FS.unmount(path)}catch(e){GodotRuntime.print("Already unmounted",e)}if(GodotFS._idbfs&&IDBFS.dbs[path]){IDBFS.dbs[path].close();delete IDBFS.dbs[path]}});GodotFS._mount_points=[];GodotFS._idbfs=false;GodotFS._syncing=false},sync:function(){if(GodotFS._syncing){GodotRuntime.error("Already syncing!");return Promise.resolve()}GodotFS._syncing=true;return new Promise(function(resolve,reject){FS.syncfs(false,function(error){if(error){GodotRuntime.error(`Failed to save IDB file system: ${error.message}`)}GodotFS._syncing=false;resolve(error)})})},copy_to_fs:function(path,buffer){const idx=path.lastIndexOf("/");let dir="/";if(idx>0){dir=path.slice(0,idx)}try{FS.stat(dir)}catch(e){if(e.errno!==ERRNO_CODES.ENOENT){throw e}FS.mkdirTree(dir)}FS.writeFile(path,new Uint8Array(buffer))}};var GodotOS={request_quit:function(){},_async_cbs:[],_fs_sync_promise:null,atexit:function(p_promise_cb){GodotOS._async_cbs.push(p_promise_cb)},cleanup:function(exit_code){const cb=GodotConfig.on_exit;GodotFS.deinit();GodotConfig.clear();if(cb){cb(exit_code)}},finish_async:function(callback){GodotOS._fs_sync_promise.then(function(err){const promises=[];GodotOS._async_cbs.forEach(function(cb){promises.push(new Promise(cb))});return Promise.all(promises)}).then(function(){return GodotFS.sync()}).then(function(err){setTimeout(function(){callback()},0)})}};var GodotAudio={ctx:null,input:null,driver:null,interval:0,init:function(mix_rate,latency,onstatechange,onlatencyupdate){const opts={};if(mix_rate){opts["sampleRate"]=mix_rate}const ctx=new(window.AudioContext||window.webkitAudioContext)(opts);GodotAudio.ctx=ctx;ctx.onstatechange=function(){let state=0;switch(ctx.state){case"suspended":state=0;break;case"running":state=1;break;case"closed":state=2;break}onstatechange(state)};ctx.onstatechange();GodotAudio.interval=setInterval(function(){let computed_latency=0;if(ctx.baseLatency){computed_latency+=GodotAudio.ctx.baseLatency}if(ctx.outputLatency){computed_latency+=GodotAudio.ctx.outputLatency}onlatencyupdate(computed_latency)},1e3);GodotOS.atexit(GodotAudio.close_async);return ctx.destination.channelCount},create_input:function(callback){if(GodotAudio.input){return 0}function gotMediaInput(stream){try{GodotAudio.input=GodotAudio.ctx.createMediaStreamSource(stream);callback(GodotAudio.input)}catch(e){GodotRuntime.error("Failed creaating 
input.",e)}}if(navigator.mediaDevices&&navigator.mediaDevices.getUserMedia){navigator.mediaDevices.getUserMedia({"audio":true}).then(gotMediaInput,function(e){GodotRuntime.error("Error getting user media.",e)})}else{if(!navigator.getUserMedia){navigator.getUserMedia=navigator.webkitGetUserMedia||navigator.mozGetUserMedia}if(!navigator.getUserMedia){GodotRuntime.error("getUserMedia not available.");return 1}navigator.getUserMedia({"audio":true},gotMediaInput,function(e){GodotRuntime.print(e)})}return 0},close_async:function(resolve,reject){const ctx=GodotAudio.ctx;GodotAudio.ctx=null;if(!ctx){resolve();return}if(GodotAudio.interval){clearInterval(GodotAudio.interval);GodotAudio.interval=0}if(GodotAudio.input){GodotAudio.input.disconnect();GodotAudio.input=null}let closed=Promise.resolve();if(GodotAudio.driver){closed=GodotAudio.driver.close()}closed.then(function(){return ctx.close()}).then(function(){ctx.onstatechange=null;resolve()}).catch(function(e){ctx.onstatechange=null;GodotRuntime.error("Error closing AudioContext",e);resolve()})}};function _godot_audio_capture_start(){return GodotAudio.create_input(function(input){input.connect(GodotAudio.driver.get_node())})}function _godot_audio_capture_stop(){if(GodotAudio.input){const tracks=GodotAudio.input["mediaStream"]["getTracks"]();for(let i=0;i=size){const high=size-wpos;wbuf.set(buffer.subarray(wpos,size));pending_samples-=high;wpos=0}if(pending_samples>0){wbuf.set(buffer.subarray(wpos,wpos+pending_samples),tot_sent-pending_samples)}port.postMessage({"cmd":"chunk","data":wbuf.subarray(0,tot_sent)});wpos+=pending_samples;pending_samples=0}this.receive=function(recv_buf){const buffer=GodotRuntime.heapSub(HEAPF32,p_in_buf,p_in_size);const from=rpos;let to_write=recv_buf.length;let high=0;if(rpos+to_write>=p_in_size){high=p_in_size-rpos;buffer.set(recv_buf.subarray(0,high),rpos);to_write-=high;rpos=0}if(to_write){buffer.set(recv_buf.subarray(high,to_write),rpos)}in_callback(from,recv_buf.length);rpos+=to_write};this.consumed=function(size,port){pending_samples+=size;send(port)}}GodotAudioWorklet.ring_buffer=new RingBuffer;GodotAudioWorklet.promise.then(function(){const node=GodotAudioWorklet.worklet;const buffer=GodotRuntime.heapSlice(HEAPF32,p_out_buf,p_out_size);node.connect(GodotAudio.ctx.destination);node.port.postMessage({"cmd":"start_nothreads","data":[buffer,p_in_size]});node.port.onmessage=function(event){if(!GodotAudioWorklet.worklet){return}if(event.data["cmd"]==="read"){const read=event.data["data"];GodotAudioWorklet.ring_buffer.consumed(read,GodotAudioWorklet.worklet.port)}else if(event.data["cmd"]==="input"){const buf=event.data["data"];if(buf.length>p_in_size){GodotRuntime.error("Input chunk is too big");return}GodotAudioWorklet.ring_buffer.receive(buf)}else{GodotRuntime.error(event.data)}}})},get_node:function(){return GodotAudioWorklet.worklet},close:function(){return new Promise(function(resolve,reject){if(GodotAudioWorklet.promise===null){return}GodotAudioWorklet.promise.then(function(){GodotAudioWorklet.worklet.port.postMessage({"cmd":"stop","data":null});GodotAudioWorklet.worklet.disconnect();GodotAudioWorklet.worklet=null;GodotAudioWorklet.promise=null;resolve()}).catch(function(err){})})}};function _godot_audio_worklet_create(channels){try{GodotAudioWorklet.create(channels)}catch(e){GodotRuntime.error("Error starting AudioDriverWorklet",e);return 1}return 0}function _godot_audio_worklet_start_no_threads(p_out_buf,p_out_size,p_out_callback,p_in_buf,p_in_size,p_in_callback){const 
out_callback=GodotRuntime.get_func(p_out_callback);const in_callback=GodotRuntime.get_func(p_in_callback);GodotAudioWorklet.start_no_threads(p_out_buf,p_out_size,out_callback,p_in_buf,p_in_size,in_callback)}function _godot_js_config_canvas_id_get(p_ptr,p_ptr_max){GodotRuntime.stringToHeap(`#${GodotConfig.canvas.id}`,p_ptr,p_ptr_max)}function _godot_js_config_locale_get(p_ptr,p_ptr_max){GodotRuntime.stringToHeap(GodotConfig.locale,p_ptr,p_ptr_max)}var GodotDisplayCursor={shape:"default",visible:true,cursors:{},set_style:function(style){GodotConfig.canvas.style.cursor=style},set_shape:function(shape){GodotDisplayCursor.shape=shape;let css=shape;if(shape in GodotDisplayCursor.cursors){const c=GodotDisplayCursor.cursors[shape];css=`url("${c.url}") ${c.x} ${c.y}, default`}if(GodotDisplayCursor.visible){GodotDisplayCursor.set_style(css)}},clear:function(){GodotDisplayCursor.set_style("");GodotDisplayCursor.shape="default";GodotDisplayCursor.visible=true;Object.keys(GodotDisplayCursor.cursors).forEach(function(key){URL.revokeObjectURL(GodotDisplayCursor.cursors[key]);delete GodotDisplayCursor.cursors[key]})},lockPointer:function(){const canvas=GodotConfig.canvas;if(canvas.requestPointerLock){canvas.requestPointerLock()}},releasePointer:function(){if(document.exitPointerLock){document.exitPointerLock()}},isPointerLocked:function(){return document.pointerLockElement===GodotConfig.canvas}};var GodotEventListeners={handlers:[],has:function(target,event,method,capture){return GodotEventListeners.handlers.findIndex(function(e){return e.target===target&&e.event===event&&e.method===method&&e.capture===capture})!==-1},add:function(target,event,method,capture){if(GodotEventListeners.has(target,event,method,capture)){return}function Handler(p_target,p_event,p_method,p_capture){this.target=p_target;this.event=p_event;this.method=p_method;this.capture=p_capture}GodotEventListeners.handlers.push(new Handler(target,event,method,capture));target.addEventListener(event,method,capture)},clear:function(){GodotEventListeners.handlers.forEach(function(h){h.target.removeEventListener(h.event,h.method,h.capture)});GodotEventListeners.handlers.length=0}};function _emscripten_webgl_do_get_current_context(){return GL.currentContext?GL.currentContext.handle:0}function _emscripten_webgl_get_current_context(){return _emscripten_webgl_do_get_current_context()}var GodotDisplayScreen={desired_size:[0,0],hidpi:true,getPixelRatio:function(){return GodotDisplayScreen.hidpi?window.devicePixelRatio||1:1},isFullscreen:function(){const elem=document.fullscreenElement||document.mozFullscreenElement||document.webkitFullscreenElement||document.msFullscreenElement;if(elem){return elem===GodotConfig.canvas}return document.fullscreen||document.mozFullScreen||document.webkitIsFullscreen},hasFullscreen:function(){return document.fullscreenEnabled||document.mozFullScreenEnabled||document.webkitFullscreenEnabled},requestFullscreen:function(){if(!GodotDisplayScreen.hasFullscreen()){return 1}const canvas=GodotConfig.canvas;try{const promise=(canvas.requestFullscreen||canvas.msRequestFullscreen||canvas.mozRequestFullScreen||canvas.mozRequestFullscreen||canvas.webkitRequestFullscreen).call(canvas);if(promise){promise.catch(function(){})}}catch(e){return 1}return 0},exitFullscreen:function(){if(!GodotDisplayScreen.isFullscreen()){return 0}try{const promise=document.exitFullscreen();if(promise){promise.catch(function(){})}}catch(e){return 1}return 0},_updateGL:function(){const gl_context_handle=_emscripten_webgl_get_current_context();const 
gl=GL.getContext(gl_context_handle);if(gl){GL.resizeOffscreenFramebuffer(gl)}},updateSize:function(){const isFullscreen=GodotDisplayScreen.isFullscreen();const wantsFullWindow=GodotConfig.canvas_resize_policy===2;const noResize=GodotConfig.canvas_resize_policy===0;const wwidth=GodotDisplayScreen.desired_size[0];const wheight=GodotDisplayScreen.desired_size[1];const canvas=GodotConfig.canvas;let width=wwidth;let height=wheight;if(noResize){if(canvas.width!==width||canvas.height!==height){GodotDisplayScreen.desired_size=[canvas.width,canvas.height];GodotDisplayScreen._updateGL();return 1}return 0}const scale=GodotDisplayScreen.getPixelRatio();if(isFullscreen||wantsFullWindow){width=window.innerWidth*scale;height=window.innerHeight*scale}const csw=`${width/scale}px`;const csh=`${height/scale}px`;if(canvas.style.width!==csw||canvas.style.height!==csh||canvas.width!==width||canvas.height!==height){canvas.width=width;canvas.height=height;canvas.style.width=csw;canvas.style.height=csh;GodotDisplayScreen._updateGL();return 1}return 0}};var GodotDisplayVK={textinput:null,textarea:null,available:function(){return GodotConfig.virtual_keyboard&&"ontouchstart"in window},init:function(input_cb){function create(what){const elem=document.createElement(what);elem.style.display="none";elem.style.position="absolute";elem.style.zIndex="-1";elem.style.background="transparent";elem.style.padding="0px";elem.style.margin="0px";elem.style.overflow="hidden";elem.style.width="0px";elem.style.height="0px";elem.style.border="0px";elem.style.outline="none";elem.readonly=true;elem.disabled=true;GodotEventListeners.add(elem,"input",function(evt){const c_str=GodotRuntime.allocString(elem.value);input_cb(c_str,elem.selectionEnd);GodotRuntime.free(c_str)},false);GodotEventListeners.add(elem,"blur",function(evt){elem.style.display="none";elem.readonly=true;elem.disabled=true},false);GodotConfig.canvas.insertAdjacentElement("beforebegin",elem);return elem}GodotDisplayVK.textinput=create("input");GodotDisplayVK.textarea=create("textarea");GodotDisplayVK.updateSize()},show:function(text,multiline,start,end){if(!GodotDisplayVK.textinput||!GodotDisplayVK.textarea){return}if(GodotDisplayVK.textinput.style.display!==""||GodotDisplayVK.textarea.style.display!==""){GodotDisplayVK.hide()}GodotDisplayVK.updateSize();const elem=multiline?GodotDisplayVK.textarea:GodotDisplayVK.textinput;elem.readonly=false;elem.disabled=false;elem.value=text;elem.style.display="block";elem.focus();elem.setSelectionRange(start,end)},hide:function(){if(!GodotDisplayVK.textinput||!GodotDisplayVK.textarea){return}[GodotDisplayVK.textinput,GodotDisplayVK.textarea].forEach(function(elem){elem.blur();elem.style.display="none";elem.value=""})},updateSize:function(){if(!GodotDisplayVK.textinput||!GodotDisplayVK.textarea){return}const rect=GodotConfig.canvas.getBoundingClientRect();function update(elem){elem.style.left=`${rect.left}px`;elem.style.top=`${rect.top}px`;elem.style.width=`${rect.width}px`;elem.style.height=`${rect.height}px`}update(GodotDisplayVK.textinput);update(GodotDisplayVK.textarea)},clear:function(){if(GodotDisplayVK.textinput){GodotDisplayVK.textinput.remove();GodotDisplayVK.textinput=null}if(GodotDisplayVK.textarea){GodotDisplayVK.textarea.remove();GodotDisplayVK.textarea=null}}};var GodotDisplay={window_icon:"",findDPI:function(){function testDPI(dpi){return window.matchMedia(`(max-resolution: ${dpi}dpi)`).matches}function bisect(low,high,func){const mid=parseInt((high-low)/2+low,10);if(high-low<=1){return 
func(high)?high:low}if(func(mid)){return bisect(low,mid,func)}return bisect(mid,high,func)}try{const dpi=bisect(0,800,testDPI);return dpi>=96?dpi:96}catch(e){return 96}}};function _godot_js_display_alert(p_text){window.alert(GodotRuntime.parseString(p_text))}function _godot_js_display_canvas_focus(){GodotConfig.canvas.focus()}function _godot_js_display_canvas_is_focused(){return document.activeElement===GodotConfig.canvas}function _godot_js_display_clipboard_get(callback){const func=GodotRuntime.get_func(callback);try{navigator.clipboard.readText().then(function(result){const ptr=GodotRuntime.allocString(result);func(ptr);GodotRuntime.free(ptr)}).catch(function(e){})}catch(e){}}function _godot_js_display_clipboard_set(p_text){const text=GodotRuntime.parseString(p_text);if(!navigator.clipboard||!navigator.clipboard.writeText){return 1}navigator.clipboard.writeText(text).catch(function(e){GodotRuntime.error("Setting OS clipboard is only possible from an input callback for the HTML5 plafrom. Exception:",e)});return 0}function _godot_js_display_cursor_is_hidden(){return!GodotDisplayCursor.visible}function _godot_js_display_cursor_is_locked(){return GodotDisplayCursor.isPointerLocked()?1:0}function _godot_js_display_cursor_lock_set(p_lock){if(p_lock){GodotDisplayCursor.lockPointer()}else{GodotDisplayCursor.releasePointer()}}function _godot_js_display_cursor_set_custom_shape(p_shape,p_ptr,p_len,p_hotspot_x,p_hotspot_y){const shape=GodotRuntime.parseString(p_shape);const old_shape=GodotDisplayCursor.cursors[shape];if(p_len>0){const png=new Blob([GodotRuntime.heapSlice(HEAPU8,p_ptr,p_len)],{type:"image/png"});const url=URL.createObjectURL(png);GodotDisplayCursor.cursors[shape]={url:url,x:p_hotspot_x,y:p_hotspot_y}}else{delete GodotDisplayCursor.cursors[shape]}if(shape===GodotDisplayCursor.shape){GodotDisplayCursor.set_shape(GodotDisplayCursor.shape)}if(old_shape){URL.revokeObjectURL(old_shape.url)}}function _godot_js_display_cursor_set_shape(p_string){GodotDisplayCursor.set_shape(GodotRuntime.parseString(p_string))}function _godot_js_display_cursor_set_visible(p_visible){const visible=p_visible!==0;if(visible===GodotDisplayCursor.visible){return}GodotDisplayCursor.visible=visible;if(visible){GodotDisplayCursor.set_shape(GodotDisplayCursor.shape)}else{GodotDisplayCursor.set_style("none")}}function _godot_js_display_desired_size_set(width,height){GodotDisplayScreen.desired_size=[width,height];GodotDisplayScreen.updateSize()}function _godot_js_display_fullscreen_cb(callback){const canvas=GodotConfig.canvas;const func=GodotRuntime.get_func(callback);function change_cb(evt){if(evt.target===canvas){func(GodotDisplayScreen.isFullscreen())}}GodotEventListeners.add(document,"fullscreenchange",change_cb,false);GodotEventListeners.add(document,"mozfullscreenchange",change_cb,false);GodotEventListeners.add(document,"webkitfullscreenchange",change_cb,false)}function _godot_js_display_fullscreen_exit(){return GodotDisplayScreen.exitFullscreen()}function _godot_js_display_fullscreen_request(){return GodotDisplayScreen.requestFullscreen()}function _godot_js_display_glGetBufferSubData(target,offset,size,data){const gl_context_handle=_emscripten_webgl_get_current_context();const gl=GL.getContext(gl_context_handle);if(gl){gl.GLctx["getBufferSubData"](target,offset,HEAPU8,data,size)}}function _godot_js_display_has_webgl(p_version){if(p_version!==1&&p_version!==2){return false}try{return!!document.createElement("canvas").getContext(p_version===2?"webgl2":"webgl")}catch(e){}return false}function 
_godot_js_display_is_swap_ok_cancel(){const win=["Windows","Win64","Win32","WinCE"];const plat=navigator.platform||"";if(win.indexOf(plat)!==-1){return 1}return 0}function _godot_js_display_notification_cb(callback,p_enter,p_exit,p_in,p_out){const canvas=GodotConfig.canvas;const func=GodotRuntime.get_func(callback);const notif=[p_enter,p_exit,p_in,p_out];["mouseover","mouseleave","focus","blur"].forEach(function(evt_name,idx){GodotEventListeners.add(canvas,evt_name,function(){func(notif[idx])},true)})}function _godot_js_display_pixel_ratio_get(){return GodotDisplayScreen.getPixelRatio()}function _godot_js_display_screen_dpi_get(){return GodotDisplay.findDPI()}function _godot_js_display_screen_size_get(width,height){const scale=GodotDisplayScreen.getPixelRatio();GodotRuntime.setHeapValue(width,window.screen.width*scale,"i32");GodotRuntime.setHeapValue(height,window.screen.height*scale,"i32")}function _godot_js_display_setup_canvas(p_width,p_height,p_fullscreen,p_hidpi){const canvas=GodotConfig.canvas;GodotEventListeners.add(canvas,"contextmenu",function(ev){ev.preventDefault()},false);GodotEventListeners.add(canvas,"webglcontextlost",function(ev){alert("WebGL context lost, please reload the page");ev.preventDefault()},false);GodotDisplayScreen.hidpi=!!p_hidpi;switch(GodotConfig.canvas_resize_policy){case 0:GodotDisplayScreen.desired_size=[canvas.width,canvas.height];break;case 1:GodotDisplayScreen.desired_size=[p_width,p_height];break;default:canvas.style.position="absolute";canvas.style.top=0;canvas.style.left=0;break}GodotDisplayScreen.updateSize();if(p_fullscreen){GodotDisplayScreen.requestFullscreen()}}function _godot_js_display_size_update(){const updated=GodotDisplayScreen.updateSize();if(updated){GodotDisplayVK.updateSize()}return updated}function _godot_js_display_touchscreen_is_available(){return"ontouchstart"in window}function _godot_js_display_vk_available(){return GodotDisplayVK.available()}function _godot_js_display_vk_cb(p_input_cb){const input_cb=GodotRuntime.get_func(p_input_cb);if(GodotDisplayVK.available()){GodotDisplayVK.init(input_cb)}}function _godot_js_display_vk_hide(){GodotDisplayVK.hide()}function _godot_js_display_vk_show(p_text,p_multiline,p_start,p_end){const text=GodotRuntime.parseString(p_text);const start=p_start>0?p_start:0;const end=p_end>0?p_end:start;GodotDisplayVK.show(text,p_multiline,start,end)}function _godot_js_display_window_blur_cb(callback){const func=GodotRuntime.get_func(callback);GodotEventListeners.add(window,"blur",function(){func()},false)}function _godot_js_display_window_icon_set(p_ptr,p_len){let link=document.getElementById("-gd-engine-icon");if(link===null){link=document.createElement("link");link.rel="icon";link.id="-gd-engine-icon";document.head.appendChild(link)}const old_icon=GodotDisplay.window_icon;const png=new Blob([GodotRuntime.heapSlice(HEAPU8,p_ptr,p_len)],{type:"image/png"});GodotDisplay.window_icon=URL.createObjectURL(png);link.href=GodotDisplay.window_icon;if(old_icon){URL.revokeObjectURL(old_icon)}}function _godot_js_display_window_size_get(p_width,p_height){GodotRuntime.setHeapValue(p_width,GodotConfig.canvas.width,"i32");GodotRuntime.setHeapValue(p_height,GodotConfig.canvas.height,"i32")}function _godot_js_display_window_title_set(p_data){document.title=GodotRuntime.parseString(p_data)}function _godot_js_eval(p_js,p_use_global_ctx,p_union_ptr,p_byte_arr,p_byte_arr_write,p_callback){const js_code=GodotRuntime.parseString(p_js);let eval_ret=null;try{if(p_use_global_ctx){const 
global_eval=eval;eval_ret=global_eval(js_code)}else{eval_ret=eval(js_code)}}catch(e){GodotRuntime.error(e)}switch(typeof eval_ret){case"boolean":GodotRuntime.setHeapValue(p_union_ptr,eval_ret,"i32");return 1;case"number":GodotRuntime.setHeapValue(p_union_ptr,eval_ret,"double");return 3;case"string":GodotRuntime.setHeapValue(p_union_ptr,GodotRuntime.allocString(eval_ret),"*");return 4;case"object":if(eval_ret===null){break}if(ArrayBuffer.isView(eval_ret)&&!(eval_ret instanceof Uint8Array)){eval_ret=new Uint8Array(eval_ret.buffer)}else if(eval_ret instanceof ArrayBuffer){eval_ret=new Uint8Array(eval_ret)}if(eval_ret instanceof Uint8Array){const func=GodotRuntime.get_func(p_callback);const bytes_ptr=func(p_byte_arr,p_byte_arr_write,eval_ret.length);HEAPU8.set(eval_ret,bytes_ptr);return 20}break}return 0}var IDHandler={_last_id:0,_references:{},get:function(p_id){return IDHandler._references[p_id]},add:function(p_data){const id=++IDHandler._last_id;IDHandler._references[id]=p_data;return id},remove:function(p_id){delete IDHandler._references[p_id]}};var GodotFetch={onread:function(id,result){const obj=IDHandler.get(id);if(!obj){return}if(result.value){obj.chunks.push(result.value)}obj.reading=false;obj.done=result.done},onresponse:function(id,response){const obj=IDHandler.get(id);if(!obj){return}let chunked=false;response.headers.forEach(function(value,header){const v=value.toLowerCase().trim();const h=header.toLowerCase().trim();if(h==="transfer-encoding"&&v==="chunked"){chunked=true}});obj.status=response.status;obj.response=response;obj.reader=response.body.getReader();obj.chunked=chunked},onerror:function(id,err){GodotRuntime.error(err);const obj=IDHandler.get(id);if(!obj){return}obj.error=err},create:function(method,url,headers,body){const obj={request:null,response:null,reader:null,error:null,done:false,reading:false,status:0,chunks:[],bodySize:-1};const id=IDHandler.add(obj);const init={method:method,headers:headers,body:body};obj.request=fetch(url,init);obj.request.then(GodotFetch.onresponse.bind(null,id)).catch(GodotFetch.onerror.bind(null,id));return id},free:function(id){const obj=IDHandler.get(id);if(!obj){return}IDHandler.remove(id);if(!obj.request){return}obj.request.then(function(response){response.abort()}).catch(function(e){})},read:function(id){const obj=IDHandler.get(id);if(!obj){return}if(obj.reader&&!obj.reading){if(obj.done){obj.reader=null;return}obj.reading=true;obj.reader.read().then(GodotFetch.onread.bind(null,id)).catch(GodotFetch.onerror.bind(null,id))}}};function _godot_js_fetch_body_length_get(p_id){const obj=IDHandler.get(p_id);if(!obj||!obj.response){return-1}return obj.bodySize}function _godot_js_fetch_create(p_method,p_url,p_headers,p_headers_size,p_body,p_body_size){const method=GodotRuntime.parseString(p_method);const url=GodotRuntime.parseString(p_url);const headers=GodotRuntime.parseStringArray(p_headers,p_headers_size);const body=p_body_size?GodotRuntime.heapSlice(HEAP8,p_body,p_body_size):null;return GodotFetch.create(method,url,headers.map(function(hv){const idx=hv.indexOf(":");if(idx<=0){return[]}return[hv.slice(0,idx).trim(),hv.slice(idx+1).trim()]}).filter(function(v){return v.length===2}),body)}function _godot_js_fetch_free(id){GodotFetch.free(id)}function _godot_js_fetch_http_status_get(p_id){const obj=IDHandler.get(p_id);if(!obj||!obj.response){return 0}return obj.status}function _godot_js_fetch_is_chunked(p_id){const obj=IDHandler.get(p_id);if(!obj||!obj.response){return-1}return obj.chunked?1:0}function 
_godot_js_fetch_read_chunk(p_id,p_buf,p_buf_size){const obj=IDHandler.get(p_id);if(!obj||!obj.response){return 0}let to_read=p_buf_size;const chunks=obj.chunks;while(to_read&&chunks.length){const chunk=obj.chunks[0];if(chunk.length>to_read){GodotRuntime.heapCopy(HEAP8,chunk.slice(0,to_read),p_buf);chunks[0]=chunk.slice(to_read);to_read=0}else{GodotRuntime.heapCopy(HEAP8,chunk,p_buf);to_read-=chunk.length;chunks.pop()}}if(!chunks.length){GodotFetch.read(p_id)}return p_buf_size-to_read}function _godot_js_fetch_read_headers(p_id,p_parse_cb,p_ref){const obj=IDHandler.get(p_id);if(!obj||!obj.response){return 1}const cb=GodotRuntime.get_func(p_parse_cb);const arr=[];obj.response.headers.forEach(function(v,h){arr.push(`${h}:${v}`)});const c_ptr=GodotRuntime.allocStringArray(arr);cb(arr.length,c_ptr,p_ref);GodotRuntime.freeStringArray(c_ptr,arr.length);return 0}function _godot_js_fetch_state_get(p_id){const obj=IDHandler.get(p_id);if(!obj){return-1}if(obj.error){return-1}if(!obj.response){return 0}if(obj.reader){return 1}if(obj.done){return 2}return-1}var GodotInputGamepads={samples:[],get_pads:function(){try{const pads=navigator.getGamepads();if(pads){return pads}return[]}catch(e){return[]}},get_samples:function(){return GodotInputGamepads.samples},get_sample:function(index){const samples=GodotInputGamepads.samples;return index=0){os="Android"}else if(ua.indexOf("Linux")>=0){os="Linux"}else if(ua.indexOf("iPhone")>=0){os="iOS"}else if(ua.indexOf("Macintosh")>=0){os="MacOSX"}else if(ua.indexOf("Windows")>=0){os="Windows"}const id=pad.id;const exp1=/vendor: ([0-9a-f]{4}) product: ([0-9a-f]{4})/i;const exp2=/^([0-9a-f]+)-([0-9a-f]+)-/i;let vendor="";let product="";if(exp1.test(id)){const match=exp1.exec(id);vendor=match[1].padStart(4,"0");product=match[2].padStart(4,"0")}else if(exp2.test(id)){const match=exp2.exec(id);vendor=match[1].padStart(4,"0");product=match[2].padStart(4,"0")}if(!vendor||!product){return`${os}Unknown`}return os+vendor+product}};var GodotInputDragDrop={promises:[],pending_files:[],add_entry:function(entry){if(entry.isDirectory){GodotInputDragDrop.add_dir(entry)}else if(entry.isFile){GodotInputDragDrop.add_file(entry)}else{GodotRuntime.error("Unrecognized entry...",entry)}},add_dir:function(entry){GodotInputDragDrop.promises.push(new Promise(function(resolve,reject){const reader=entry.createReader();reader.readEntries(function(entries){for(let i=0;i{const path=elem["path"];GodotFS.copy_to_fs(DROP+path,elem["data"]);let idx=path.indexOf("/");if(idx===-1){drops.push(DROP+path)}else{const sub=path.substr(0,idx);idx=sub.indexOf("/");if(idx<0&&drops.indexOf(DROP+sub)===-1){drops.push(DROP+sub)}}files.push(DROP+path)});GodotInputDragDrop.promises=[];GodotInputDragDrop.pending_files=[];callback(drops);if(GodotConfig.persistent_drops){GodotOS.atexit(function(resolve,reject){GodotInputDragDrop.remove_drop(files,DROP);resolve()})}else{GodotInputDragDrop.remove_drop(files,DROP)}})},remove_drop:function(files,drop_path){const dirs=[drop_path.substr(0,drop_path.length-1)];files.forEach(function(file){FS.unlink(file);let dir=file.replace(drop_path,"");let idx=dir.lastIndexOf("/");while(idx>0){dir=dir.substr(0,idx);if(dirs.indexOf(drop_path+dir)===-1){dirs.push(drop_path+dir)}idx=dir.lastIndexOf("/")}});dirs.sort(function(a,b){const al=(a.match(/\//g)||[]).length;const bl=(b.match(/\//g)||[]).length;if(al>bl){return-1}else if(al{if(GodotWebXR.session&&GodotWebXR.space){const 
onFrame=function(time,frame){GodotWebXR.frame=frame;GodotWebXR.pose=frame.getViewerPose(GodotWebXR.space);callback(time);GodotWebXR.frame=null;GodotWebXR.pose=null};GodotWebXR.session.requestAnimationFrame(onFrame)}else{GodotWebXR.orig_requestAnimationFrame(callback)}},monkeyPatchRequestAnimationFrame:enable=>{if(GodotWebXR.orig_requestAnimationFrame===null){GodotWebXR.orig_requestAnimationFrame=Browser.requestAnimationFrame}Browser.requestAnimationFrame=enable?GodotWebXR.requestAnimationFrame:GodotWebXR.orig_requestAnimationFrame},pauseResumeMainLoop:()=>{Browser.mainLoop.pause();window.setTimeout(function(){Browser.mainLoop.resume()},0)},shaderProgram:null,programInfo:null,buffer:null,vsSource:"\n\t\t\tconst vec2 scale = vec2(0.5, 0.5);\n\t\t\tattribute vec4 aVertexPosition;\n\n\t\t\tvarying highp vec2 vTextureCoord;\n\n\t\t\tvoid main () {\n\t\t\t\tgl_Position = aVertexPosition;\n\t\t\t\tvTextureCoord = aVertexPosition.xy * scale + scale;\n\t\t\t}\n\t\t",fsSource:"\n\t\t\tvarying highp vec2 vTextureCoord;\n\n\t\t\tuniform sampler2D uSampler;\n\n\t\t\tvoid main() {\n\t\t\t\tgl_FragColor = texture2D(uSampler, vTextureCoord);\n\t\t\t}\n\t\t",initShaderProgram:(gl,vsSource,fsSource)=>{const vertexShader=GodotWebXR.loadShader(gl,gl.VERTEX_SHADER,vsSource);const fragmentShader=GodotWebXR.loadShader(gl,gl.FRAGMENT_SHADER,fsSource);const shaderProgram=gl.createProgram();gl.attachShader(shaderProgram,vertexShader);gl.attachShader(shaderProgram,fragmentShader);gl.linkProgram(shaderProgram);if(!gl.getProgramParameter(shaderProgram,gl.LINK_STATUS)){GodotRuntime.error(`Unable to initialize the shader program: ${gl.getProgramInfoLog(shaderProgram)}`);return null}return shaderProgram},loadShader:(gl,type,source)=>{const shader=gl.createShader(type);gl.shaderSource(shader,source);gl.compileShader(shader);if(!gl.getShaderParameter(shader,gl.COMPILE_STATUS)){GodotRuntime.error(`An error occurred compiling the shader: ${gl.getShaderInfoLog(shader)}`);gl.deleteShader(shader);return null}return shader},initBuffer:gl=>{const positionBuffer=gl.createBuffer();gl.bindBuffer(gl.ARRAY_BUFFER,positionBuffer);const positions=[-1,-1,1,-1,-1,1,1,1];gl.bufferData(gl.ARRAY_BUFFER,new Float32Array(positions),gl.STATIC_DRAW);return positionBuffer},blitTexture:(gl,texture)=>{if(GodotWebXR.shaderProgram===null){GodotWebXR.shaderProgram=GodotWebXR.initShaderProgram(gl,GodotWebXR.vsSource,GodotWebXR.fsSource);GodotWebXR.programInfo={program:GodotWebXR.shaderProgram,attribLocations:{vertexPosition:gl.getAttribLocation(GodotWebXR.shaderProgram,"aVertexPosition")},uniformLocations:{uSampler:gl.getUniformLocation(GodotWebXR.shaderProgram,"uSampler")}};GodotWebXR.buffer=GodotWebXR.initBuffer(gl)}const orig_program=gl.getParameter(gl.CURRENT_PROGRAM);gl.useProgram(GodotWebXR.shaderProgram);gl.bindBuffer(gl.ARRAY_BUFFER,GodotWebXR.buffer);gl.vertexAttribPointer(GodotWebXR.programInfo.attribLocations.vertexPosition,2,gl.FLOAT,false,0,0);gl.enableVertexAttribArray(GodotWebXR.programInfo.attribLocations.vertexPosition);gl.activeTexture(gl.TEXTURE0);gl.bindTexture(gl.TEXTURE_2D,texture);gl.uniform1i(GodotWebXR.programInfo.uniformLocations.uSampler,0);gl.drawArrays(gl.TRIANGLE_STRIP,0,4);gl.bindTexture(gl.TEXTURE_2D,null);gl.disableVertexAttribArray(GodotWebXR.programInfo.attribLocations.vertexPosition);gl.bindBuffer(gl.ARRAY_BUFFER,null);gl.useProgram(orig_program)},controllers:[],sampleControllers:()=>{if(!GodotWebXR.session){return}let other_index=2;const 
controllers=[];GodotWebXR.session.inputSources.forEach(input_source=>{if(input_source.targetRayMode==="tracked-pointer"){if(input_source.handedness==="right"){controllers[1]=input_source}else if(input_source.handedness==="left"||!controllers[0]){controllers[0]=input_source}}else{controllers[other_index++]=input_source}});GodotWebXR.controllers=controllers},getControllerId:input_source=>GodotWebXR.controllers.indexOf(input_source)};function _godot_webxr_commit_for_eye(p_eye,p_texture_id){if(!GodotWebXR.session||!GodotWebXR.pose){return}const view_index=p_eye===2?1:0;const glLayer=GodotWebXR.session.renderState.baseLayer;const view=GodotWebXR.pose.views[view_index];const viewport=glLayer.getViewport(view);const gl=GodotWebXR.gl;const orig_framebuffer=gl.getParameter(gl.FRAMEBUFFER_BINDING);const orig_viewport=gl.getParameter(gl.VIEWPORT);gl.bindFramebuffer(gl.FRAMEBUFFER,glLayer.framebuffer);gl.viewport(viewport.x,viewport.y,viewport.width,viewport.height);GodotWebXR.blitTexture(gl,GL.textures[p_texture_id]);gl.bindFramebuffer(gl.FRAMEBUFFER,orig_framebuffer);gl.viewport(orig_viewport[0],orig_viewport[1],orig_viewport[2],orig_viewport[3])}function _godot_webxr_get_bounds_geometry(){if(!GodotWebXR.space||!GodotWebXR.space.boundsGeometry){return 0}const point_count=GodotWebXR.space.boundsGeometry.length;if(point_count===0){return 0}const buf=GodotRuntime.malloc((point_count*3+1)*4);GodotRuntime.setHeapValue(buf,point_count,"i32");for(let i=0;i=GodotWebXR.controllers.length){return 0}const controller=GodotWebXR.controllers[p_controller];if(!controller){return 0}switch(controller.targetRayMode){case"gaze":return 1;case"tracked-pointer":return 2;case"screen":return 3;default:break}return 0}function _godot_webxr_get_controller_transform(p_controller){if(!GodotWebXR.session||!GodotWebXR.frame){return 0}const controller=GodotWebXR.controllers[p_controller];if(!controller){return 0}const frame=GodotWebXR.frame;const space=GodotWebXR.space;const pose=frame.getPose(controller.targetRaySpace,space);if(!pose){return 0}const matrix=pose.transform.matrix;const buf=GodotRuntime.malloc(16*4);for(let i=0;i<16;i++){GodotRuntime.setHeapValue(buf+i*4,matrix[i],"float")}return buf}function _godot_webxr_get_projection_for_eye(p_eye){if(!GodotWebXR.session||!GodotWebXR.pose){return 0}const view_index=p_eye===2?1:0;const matrix=GodotWebXR.pose.views[view_index].projectionMatrix;const buf=GodotRuntime.malloc(16*4);for(let i=0;i<16;i++){GodotRuntime.setHeapValue(buf+i*4,matrix[i],"float")}return buf}function _godot_webxr_get_render_targetsize(){if(!GodotWebXR.session||!GodotWebXR.pose){return 0}const glLayer=GodotWebXR.session.renderState.baseLayer;const view=GodotWebXR.pose.views[0];const viewport=glLayer.getViewport(view);const buf=GodotRuntime.malloc(2*4);GodotRuntime.setHeapValue(buf+0,viewport.width,"i32");GodotRuntime.setHeapValue(buf+4,viewport.height,"i32");return buf}function _godot_webxr_get_transform_for_eye(p_eye){if(!GodotWebXR.session||!GodotWebXR.pose){return 0}const views=GodotWebXR.pose.views;let matrix;if(p_eye===0){matrix=GodotWebXR.pose.transform.matrix}else{matrix=views[p_eye-1].transform.matrix}const buf=GodotRuntime.malloc(16*4);for(let i=0;i<16;i++){GodotRuntime.setHeapValue(buf+i*4,matrix[i],"float")}return buf}function _godot_webxr_get_view_count(){if(!GodotWebXR.session||!GodotWebXR.pose){return 0}return GodotWebXR.pose.views.length}function _godot_webxr_get_visibility_state(){if(!GodotWebXR.session||!GodotWebXR.session.visibilityState){return 0}return 
GodotRuntime.allocString(GodotWebXR.session.visibilityState)}function _godot_webxr_initialize(p_session_mode,p_required_features,p_optional_features,p_requested_reference_spaces,p_on_session_started,p_on_session_ended,p_on_session_failed,p_on_controller_changed,p_on_input_event,p_on_simple_event){GodotWebXR.monkeyPatchRequestAnimationFrame(true);const session_mode=GodotRuntime.parseString(p_session_mode);const required_features=GodotRuntime.parseString(p_required_features).split(",").map(s=>s.trim()).filter(s=>s!=="");const optional_features=GodotRuntime.parseString(p_optional_features).split(",").map(s=>s.trim()).filter(s=>s!=="");const requested_reference_space_types=GodotRuntime.parseString(p_requested_reference_spaces).split(",").map(s=>s.trim());const onstarted=GodotRuntime.get_func(p_on_session_started);const onended=GodotRuntime.get_func(p_on_session_ended);const onfailed=GodotRuntime.get_func(p_on_session_failed);const oncontroller=GodotRuntime.get_func(p_on_controller_changed);const oninputevent=GodotRuntime.get_func(p_on_input_event);const onsimpleevent=GodotRuntime.get_func(p_on_simple_event);const session_init={};if(required_features.length>0){session_init["requiredFeatures"]=required_features}if(optional_features.length>0){session_init["optionalFeatures"]=optional_features}navigator.xr.requestSession(session_mode,session_init).then(function(session){GodotWebXR.session=session;session.addEventListener("end",function(evt){onended()});session.addEventListener("inputsourceschange",function(evt){let controller_changed=false;[evt.added,evt.removed].forEach(lst=>{lst.forEach(input_source=>{if(input_source.targetRayMode==="tracked-pointer"){controller_changed=true}})});if(controller_changed){oncontroller()}});["selectstart","selectend","select","squeezestart","squeezeend","squeeze"].forEach((input_event,index)=>{session.addEventListener(input_event,function(evt){GodotWebXR.sampleControllers();oninputevent(index,GodotWebXR.getControllerId(evt.inputSource))})});session.addEventListener("visibilitychange",function(evt){const c_str=GodotRuntime.allocString("visibility_state_changed");onsimpleevent(c_str);GodotRuntime.free(c_str)});const gl_context_handle=_emscripten_webgl_get_current_context();const gl=GL.getContext(gl_context_handle).GLctx;GodotWebXR.gl=gl;gl.makeXRCompatible().then(function(){session.updateRenderState({baseLayer:new XRWebGLLayer(session,gl)});function onReferenceSpaceSuccess(reference_space,reference_space_type){GodotWebXR.space=reference_space;reference_space.onreset=function(evt){const c_str=GodotRuntime.allocString("reference_space_reset");onsimpleevent(c_str);GodotRuntime.free(c_str)};GodotWebXR.pauseResumeMainLoop();window.setTimeout(function(){const c_str=GodotRuntime.allocString(reference_space_type);onstarted(c_str);GodotRuntime.free(c_str)},0)}function requestReferenceSpace(){const reference_space_type=requested_reference_space_types.shift();session.requestReferenceSpace(reference_space_type).then(refSpace=>{onReferenceSpaceSuccess(refSpace,reference_space_type)}).catch(()=>{if(requested_reference_space_types.length===0){const c_str=GodotRuntime.allocString("Unable to get any of the requested reference space types");onfailed(c_str);GodotRuntime.free(c_str)}else{requestReferenceSpace()}})}requestReferenceSpace()}).catch(function(error){const c_str=GodotRuntime.allocString(`Unable to make WebGL context compatible with WebXR: ${error}`);onfailed(c_str);GodotRuntime.free(c_str)})}).catch(function(error){const c_str=GodotRuntime.allocString(`Unable to start session: 
${error}`);onfailed(c_str);GodotRuntime.free(c_str)})}function _godot_webxr_is_controller_connected(p_controller){if(!GodotWebXR.session||!GodotWebXR.frame){return false}return!!GodotWebXR.controllers[p_controller]}function _godot_webxr_is_session_supported(p_session_mode,p_callback){const session_mode=GodotRuntime.parseString(p_session_mode);const cb=GodotRuntime.get_func(p_callback);if(navigator.xr){navigator.xr.isSessionSupported(session_mode).then(function(supported){const c_str=GodotRuntime.allocString(session_mode);cb(c_str,supported?1:0);GodotRuntime.free(c_str)})}else{const c_str=GodotRuntime.allocString(session_mode);cb(c_str,0);GodotRuntime.free(c_str)}}function _godot_webxr_is_supported(){return!!navigator.xr}function _godot_webxr_sample_controller_data(){GodotWebXR.sampleControllers()}function _godot_webxr_uninitialize(){if(GodotWebXR.session){GodotWebXR.session.end().catch(e=>{})}GodotWebXR.session=null;GodotWebXR.space=null;GodotWebXR.frame=null;GodotWebXR.pose=null;GodotWebXR.monkeyPatchRequestAnimationFrame(false);GodotWebXR.pauseResumeMainLoop()}function _setTempRet0(val){setTempRet0(val)}function __isLeapYear(year){return year%4===0&&(year%100!==0||year%400===0)}function __arraySum(array,index){var sum=0;for(var i=0;i<=index;sum+=array[i++]){}return sum}var __MONTH_DAYS_LEAP=[31,29,31,30,31,30,31,31,30,31,30,31];var __MONTH_DAYS_REGULAR=[31,28,31,30,31,30,31,31,30,31,30,31];function __addDays(date,days){var newDate=new Date(date.getTime());while(days>0){var leap=__isLeapYear(newDate.getFullYear());var currentMonth=newDate.getMonth();var daysInCurrentMonth=(leap?__MONTH_DAYS_LEAP:__MONTH_DAYS_REGULAR)[currentMonth];if(days>daysInCurrentMonth-newDate.getDate()){days-=daysInCurrentMonth-newDate.getDate()+1;newDate.setDate(1);if(currentMonth<11){newDate.setMonth(currentMonth+1)}else{newDate.setMonth(0);newDate.setFullYear(newDate.getFullYear()+1)}}else{newDate.setDate(newDate.getDate()+days);return newDate}}return newDate}function _strftime(s,maxsize,format,tm){var tm_zone=HEAP32[tm+40>>2];var date={tm_sec:HEAP32[tm>>2],tm_min:HEAP32[tm+4>>2],tm_hour:HEAP32[tm+8>>2],tm_mday:HEAP32[tm+12>>2],tm_mon:HEAP32[tm+16>>2],tm_year:HEAP32[tm+20>>2],tm_wday:HEAP32[tm+24>>2],tm_yday:HEAP32[tm+28>>2],tm_isdst:HEAP32[tm+32>>2],tm_gmtoff:HEAP32[tm+36>>2],tm_zone:tm_zone?UTF8ToString(tm_zone):""};var pattern=UTF8ToString(format);var EXPANSION_RULES_1={"%c":"%a %b %d %H:%M:%S %Y","%D":"%m/%d/%y","%F":"%Y-%m-%d","%h":"%b","%r":"%I:%M:%S %p","%R":"%H:%M","%T":"%H:%M:%S","%x":"%m/%d/%y","%X":"%H:%M:%S","%Ec":"%c","%EC":"%C","%Ex":"%m/%d/%y","%EX":"%H:%M:%S","%Ey":"%y","%EY":"%Y","%Od":"%d","%Oe":"%e","%OH":"%H","%OI":"%I","%Om":"%m","%OM":"%M","%OS":"%S","%Ou":"%u","%OU":"%U","%OV":"%V","%Ow":"%w","%OW":"%W","%Oy":"%y"};for(var rule in EXPANSION_RULES_1){pattern=pattern.replace(new RegExp(rule,"g"),EXPANSION_RULES_1[rule])}var WEEKDAYS=["Sunday","Monday","Tuesday","Wednesday","Thursday","Friday","Saturday"];var MONTHS=["January","February","March","April","May","June","July","August","September","October","November","December"];function leadingSomething(value,digits,character){var str=typeof value=="number"?value.toString():value||"";while(str.length0?1:0}var compare;if((compare=sgn(date1.getFullYear()-date2.getFullYear()))===0){if((compare=sgn(date1.getMonth()-date2.getMonth()))===0){compare=sgn(date1.getDate()-date2.getDate())}}return compare}function getFirstWeekStartDate(janFourth){switch(janFourth.getDay()){case 0:return new Date(janFourth.getFullYear()-1,11,29);case 1:return janFourth;case 
2:return new Date(janFourth.getFullYear(),0,3);case 3:return new Date(janFourth.getFullYear(),0,2);case 4:return new Date(janFourth.getFullYear(),0,1);case 5:return new Date(janFourth.getFullYear()-1,11,31);case 6:return new Date(janFourth.getFullYear()-1,11,30)}}function getWeekBasedYear(date){var thisDate=__addDays(new Date(date.tm_year+1900,0,1),date.tm_yday);var janFourthThisYear=new Date(thisDate.getFullYear(),0,4);var janFourthNextYear=new Date(thisDate.getFullYear()+1,0,4);var firstWeekStartThisYear=getFirstWeekStartDate(janFourthThisYear);var firstWeekStartNextYear=getFirstWeekStartDate(janFourthNextYear);if(compareByDay(firstWeekStartThisYear,thisDate)<=0){if(compareByDay(firstWeekStartNextYear,thisDate)<=0){return thisDate.getFullYear()+1}else{return thisDate.getFullYear()}}else{return thisDate.getFullYear()-1}}var EXPANSION_RULES_2={"%a":function(date){return WEEKDAYS[date.tm_wday].substring(0,3)},"%A":function(date){return WEEKDAYS[date.tm_wday]},"%b":function(date){return MONTHS[date.tm_mon].substring(0,3)},"%B":function(date){return MONTHS[date.tm_mon]},"%C":function(date){var year=date.tm_year+1900;return leadingNulls(year/100|0,2)},"%d":function(date){return leadingNulls(date.tm_mday,2)},"%e":function(date){return leadingSomething(date.tm_mday,2," ")},"%g":function(date){return getWeekBasedYear(date).toString().substring(2)},"%G":function(date){return getWeekBasedYear(date)},"%H":function(date){return leadingNulls(date.tm_hour,2)},"%I":function(date){var twelveHour=date.tm_hour;if(twelveHour==0)twelveHour=12;else if(twelveHour>12)twelveHour-=12;return leadingNulls(twelveHour,2)},"%j":function(date){return leadingNulls(date.tm_mday+__arraySum(__isLeapYear(date.tm_year+1900)?__MONTH_DAYS_LEAP:__MONTH_DAYS_REGULAR,date.tm_mon-1),3)},"%m":function(date){return leadingNulls(date.tm_mon+1,2)},"%M":function(date){return leadingNulls(date.tm_min,2)},"%n":function(){return"\n"},"%p":function(date){if(date.tm_hour>=0&&date.tm_hour<12){return"AM"}else{return"PM"}},"%S":function(date){return leadingNulls(date.tm_sec,2)},"%t":function(){return"\t"},"%u":function(date){return date.tm_wday||7},"%U":function(date){var days=date.tm_yday+7-date.tm_wday;return leadingNulls(Math.floor(days/7),2)},"%V":function(date){var val=Math.floor((date.tm_yday+7-(date.tm_wday+6)%7)/7);if((date.tm_wday+371-date.tm_yday-2)%7<=2){val++}if(!val){val=52;var dec31=(date.tm_wday+7-date.tm_yday-1)%7;if(dec31==4||dec31==5&&__isLeapYear(date.tm_year%400-1)){val++}}else if(val==53){var jan1=(date.tm_wday+371-date.tm_yday)%7;if(jan1!=4&&(jan1!=3||!__isLeapYear(date.tm_year)))val=1}return leadingNulls(val,2)},"%w":function(date){return date.tm_wday},"%W":function(date){var days=date.tm_yday+7-(date.tm_wday+6)%7;return leadingNulls(Math.floor(days/7),2)},"%y":function(date){return(date.tm_year+1900).toString().substring(2)},"%Y":function(date){return date.tm_year+1900},"%z":function(date){var off=date.tm_gmtoff;var ahead=off>=0;off=Math.abs(off)/60;off=off/60*100+off%60;return(ahead?"+":"-")+String("0000"+off).slice(-4)},"%Z":function(date){return date.tm_zone},"%%":function(){return"%"}};pattern=pattern.replace(/%%/g,"\0\0");for(var rule in EXPANSION_RULES_2){if(pattern.includes(rule)){pattern=pattern.replace(new RegExp(rule,"g"),EXPANSION_RULES_2[rule](date))}}pattern=pattern.replace(/\0\0/g,"%");var bytes=intArrayFromString(pattern,false);if(bytes.length>maxsize){return 0}writeArrayToMemory(bytes,s);return bytes.length-1}function _strftime_l(s,maxsize,format,tm){return _strftime(s,maxsize,format,tm)}var 
FSNode=function(parent,name,mode,rdev){if(!parent){parent=this}this.parent=parent;this.mount=parent.mount;this.mounted=null;this.id=FS.nextInode++;this.name=name;this.mode=mode;this.node_ops={};this.stream_ops={};this.rdev=rdev};var readMode=292|73;var writeMode=146;Object.defineProperties(FSNode.prototype,{read:{get:function(){return(this.mode&readMode)===readMode},set:function(val){val?this.mode|=readMode:this.mode&=~readMode}},write:{get:function(){return(this.mode&writeMode)===writeMode},set:function(val){val?this.mode|=writeMode:this.mode&=~writeMode}},isFolder:{get:function(){return FS.isDir(this.mode)}},isDevice:{get:function(){return FS.isChrdev(this.mode)}}});FS.FSNode=FSNode;FS.staticInit();Module["requestFullscreen"]=function Module_requestFullscreen(lockPointer,resizeCanvas){Browser.requestFullscreen(lockPointer,resizeCanvas)};Module["requestAnimationFrame"]=function Module_requestAnimationFrame(func){Browser.requestAnimationFrame(func)};Module["setCanvasSize"]=function Module_setCanvasSize(width,height,noUpdates){Browser.setCanvasSize(width,height,noUpdates)};Module["pauseMainLoop"]=function Module_pauseMainLoop(){Browser.mainLoop.pause()};Module["resumeMainLoop"]=function Module_resumeMainLoop(){Browser.mainLoop.resume()};Module["getUserMedia"]=function Module_getUserMedia(){Browser.getUserMedia()};Module["createContext"]=function Module_createContext(canvas,useWebGL,setInModule,webGLContextAttributes){return Browser.createContext(canvas,useWebGL,setInModule,webGLContextAttributes)};var preloadedImages={};var preloadedAudios={};var GLctx;for(var i=0;i<32;++i)tempFixedLengthArray.push(new Array(i));var miniTempWebGLFloatBuffersStorage=new Float32Array(288);for(var i=0;i<288;++i){miniTempWebGLFloatBuffers[i]=miniTempWebGLFloatBuffersStorage.subarray(0,i+1)}var __miniTempWebGLIntBuffersStorage=new Int32Array(288);for(var 
i=0;i<288;++i){__miniTempWebGLIntBuffers[i]=__miniTempWebGLIntBuffersStorage.subarray(0,i+1)}Module["request_quit"]=function(){GodotOS.request_quit()};Module["onExit"]=GodotOS.cleanup;GodotOS._fs_sync_promise=Promise.resolve();Module["initConfig"]=GodotConfig.init_config;Module["initFS"]=GodotFS.init;Module["copyToFS"]=GodotFS.copy_to_fs;ERRNO_CODES={"EPERM":63,"ENOENT":44,"ESRCH":71,"EINTR":27,"EIO":29,"ENXIO":60,"E2BIG":1,"ENOEXEC":45,"EBADF":8,"ECHILD":12,"EAGAIN":6,"EWOULDBLOCK":6,"ENOMEM":48,"EACCES":2,"EFAULT":21,"ENOTBLK":105,"EBUSY":10,"EEXIST":20,"EXDEV":75,"ENODEV":43,"ENOTDIR":54,"EISDIR":31,"EINVAL":28,"ENFILE":41,"EMFILE":33,"ENOTTY":59,"ETXTBSY":74,"EFBIG":22,"ENOSPC":51,"ESPIPE":70,"EROFS":69,"EMLINK":34,"EPIPE":64,"EDOM":18,"ERANGE":68,"ENOMSG":49,"EIDRM":24,"ECHRNG":106,"EL2NSYNC":156,"EL3HLT":107,"EL3RST":108,"ELNRNG":109,"EUNATCH":110,"ENOCSI":111,"EL2HLT":112,"EDEADLK":16,"ENOLCK":46,"EBADE":113,"EBADR":114,"EXFULL":115,"ENOANO":104,"EBADRQC":103,"EBADSLT":102,"EDEADLOCK":16,"EBFONT":101,"ENOSTR":100,"ENODATA":116,"ETIME":117,"ENOSR":118,"ENONET":119,"ENOPKG":120,"EREMOTE":121,"ENOLINK":47,"EADV":122,"ESRMNT":123,"ECOMM":124,"EPROTO":65,"EMULTIHOP":36,"EDOTDOT":125,"EBADMSG":9,"ENOTUNIQ":126,"EBADFD":127,"EREMCHG":128,"ELIBACC":129,"ELIBBAD":130,"ELIBSCN":131,"ELIBMAX":132,"ELIBEXEC":133,"ENOSYS":52,"ENOTEMPTY":55,"ENAMETOOLONG":37,"ELOOP":32,"EOPNOTSUPP":138,"EPFNOSUPPORT":139,"ECONNRESET":15,"ENOBUFS":42,"EAFNOSUPPORT":5,"EPROTOTYPE":67,"ENOTSOCK":57,"ENOPROTOOPT":50,"ESHUTDOWN":140,"ECONNREFUSED":14,"EADDRINUSE":3,"ECONNABORTED":13,"ENETUNREACH":40,"ENETDOWN":38,"ETIMEDOUT":73,"EHOSTDOWN":142,"EHOSTUNREACH":23,"EINPROGRESS":26,"EALREADY":7,"EDESTADDRREQ":17,"EMSGSIZE":35,"EPROTONOSUPPORT":66,"ESOCKTNOSUPPORT":137,"EADDRNOTAVAIL":4,"ENETRESET":39,"EISCONN":30,"ENOTCONN":53,"ETOOMANYREFS":141,"EUSERS":136,"EDQUOT":19,"ESTALE":72,"ENOTSUP":138,"ENOMEDIUM":148,"EILSEQ":25,"EOVERFLOW":61,"ECANCELED":11,"ENOTRECOVERABLE":56,"EOWNERDEAD":62,"ESTRPIPE":135};GodotOS.atexit(function(resolve,reject){GodotDisplayCursor.clear();resolve()});GodotOS.atexit(function(resolve,reject){GodotEventListeners.clear();resolve()});GodotOS.atexit(function(resolve,reject){GodotDisplayVK.clear();resolve()});GodotJSWrapper.proxies=new Map;function intArrayFromString(stringy,dontAddNull,length){var len=length>0?length:lengthBytesUTF8(stringy)+1;var u8array=new Array(len);var numBytesWritten=stringToUTF8Array(stringy,u8array,0,u8array.length);if(dontAddNull)u8array.length=numBytesWritten;return u8array}var 
asmLibraryArg={"a":___assert_fail,"dk":___call_sighandler,"ck":___syscall__newselect,"bk":___syscall_accept4,"ak":___syscall_bind,"$j":___syscall_chdir,"_j":___syscall_chmod,"Zj":___syscall_connect,"Yj":___syscall_faccessat,"Ma":___syscall_fcntl64,"Xj":___syscall_getcwd,"Wj":___syscall_getdents64,"Vj":___syscall_getsockname,"Uj":___syscall_getsockopt,"Mb":___syscall_ioctl,"Tj":___syscall_listen,"Sj":___syscall_lstat64,"Rj":___syscall_mkdirat,"Qj":___syscall_newfstatat,"Lb":___syscall_openat,"Pj":___syscall_poll,"Oj":___syscall_readlinkat,"Nj":___syscall_recvfrom,"Mj":___syscall_renameat,"Lj":___syscall_rmdir,"Kj":___syscall_sendto,"Kb":___syscall_socket,"Jj":___syscall_stat64,"Ij":___syscall_statfs64,"Hj":___syscall_symlink,"Gj":___syscall_unlinkat,"Cj":__dlinit,"Bj":__dlopen_js,"Aj":__dlsym_js,"fb":__emscripten_date_now,"zj":__emscripten_get_now_is_monotonic,"yj":__emscripten_throw_longjmp,"xj":__gmtime_js,"wj":__localtime_js,"vj":__tzset_js,"na":_abort,"Hb":_emscripten_cancel_main_loop,"uj":_emscripten_force_exit,"eb":_emscripten_get_now,"tj":_emscripten_glActiveTexture,"sj":_emscripten_glAttachShader,"rj":_emscripten_glBeginQuery,"qj":_emscripten_glBeginQueryEXT,"pj":_emscripten_glBeginTransformFeedback,"oj":_emscripten_glBindAttribLocation,"nj":_emscripten_glBindBuffer,"mj":_emscripten_glBindBufferBase,"lj":_emscripten_glBindBufferRange,"kj":_emscripten_glBindFramebuffer,"jj":_emscripten_glBindRenderbuffer,"ij":_emscripten_glBindSampler,"hj":_emscripten_glBindTexture,"gj":_emscripten_glBindTransformFeedback,"fj":_emscripten_glBindVertexArray,"ej":_emscripten_glBindVertexArrayOES,"dj":_emscripten_glBlendColor,"cj":_emscripten_glBlendEquation,"bj":_emscripten_glBlendEquationSeparate,"aj":_emscripten_glBlendFunc,"$i":_emscripten_glBlendFuncSeparate,"_i":_emscripten_glBlitFramebuffer,"Zi":_emscripten_glBufferData,"Yi":_emscripten_glBufferSubData,"Xi":_emscripten_glCheckFramebufferStatus,"Wi":_emscripten_glClear,"Vi":_emscripten_glClearBufferfi,"Ui":_emscripten_glClearBufferfv,"Ti":_emscripten_glClearBufferiv,"Si":_emscripten_glClearBufferuiv,"Ri":_emscripten_glClearColor,"Qi":_emscripten_glClearDepthf,"Pi":_emscripten_glClearStencil,"Oi":_emscripten_glClientWaitSync,"Ni":_emscripten_glColorMask,"Mi":_emscripten_glCompileShader,"Li":_emscripten_glCompressedTexImage2D,"Ki":_emscripten_glCompressedTexImage3D,"Ji":_emscripten_glCompressedTexSubImage2D,"Ii":_emscripten_glCompressedTexSubImage3D,"Hi":_emscripten_glCopyBufferSubData,"Gi":_emscripten_glCopyTexImage2D,"Fi":_emscripten_glCopyTexSubImage2D,"Ei":_emscripten_glCopyTexSubImage3D,"Di":_emscripten_glCreateProgram,"Ci":_emscripten_glCreateShader,"Bi":_emscripten_glCullFace,"Ai":_emscripten_glDeleteBuffers,"zi":_emscripten_glDeleteFramebuffers,"yi":_emscripten_glDeleteProgram,"xi":_emscripten_glDeleteQueries,"wi":_emscripten_glDeleteQueriesEXT,"vi":_emscripten_glDeleteRenderbuffers,"ui":_emscripten_glDeleteSamplers,"ti":_emscripten_glDeleteShader,"si":_emscripten_glDeleteSync,"ri":_emscripten_glDeleteTextures,"qi":_emscripten_glDeleteTransformFeedbacks,"pi":_emscripten_glDeleteVertexArrays,"oi":_emscripten_glDeleteVertexArraysOES,"ni":_emscripten_glDepthFunc,"mi":_emscripten_glDepthMask,"li":_emscripten_glDepthRangef,"ki":_emscripten_glDetachShader,"ji":_emscripten_glDisable,"ii":_emscripten_glDisableVertexAttribArray,"hi":_emscripten_glDrawArrays,"gi":_emscripten_glDrawArraysInstanced,"fi":_emscripten_glDrawArraysInstancedANGLE,"ei":_emscripten_glDrawArraysInstancedARB,"di":_emscripten_glDrawArraysInstancedEXT,"ci":_emscripten_glDrawArraysI
nstancedNV,"bi":_emscripten_glDrawBuffers,"ai":_emscripten_glDrawBuffersEXT,"$h":_emscripten_glDrawBuffersWEBGL,"_h":_emscripten_glDrawElements,"Zh":_emscripten_glDrawElementsInstanced,"Yh":_emscripten_glDrawElementsInstancedANGLE,"Xh":_emscripten_glDrawElementsInstancedARB,"Wh":_emscripten_glDrawElementsInstancedEXT,"Vh":_emscripten_glDrawElementsInstancedNV,"Uh":_emscripten_glDrawRangeElements,"Th":_emscripten_glEnable,"Sh":_emscripten_glEnableVertexAttribArray,"Rh":_emscripten_glEndQuery,"Qh":_emscripten_glEndQueryEXT,"Ph":_emscripten_glEndTransformFeedback,"Oh":_emscripten_glFenceSync,"Nh":_emscripten_glFinish,"Mh":_emscripten_glFlush,"Lh":_emscripten_glFramebufferRenderbuffer,"Kh":_emscripten_glFramebufferTexture2D,"Jh":_emscripten_glFramebufferTextureLayer,"Ih":_emscripten_glFrontFace,"Hh":_emscripten_glGenBuffers,"Gh":_emscripten_glGenFramebuffers,"Fh":_emscripten_glGenQueries,"Eh":_emscripten_glGenQueriesEXT,"Dh":_emscripten_glGenRenderbuffers,"Ch":_emscripten_glGenSamplers,"Bh":_emscripten_glGenTextures,"Ah":_emscripten_glGenTransformFeedbacks,"zh":_emscripten_glGenVertexArrays,"yh":_emscripten_glGenVertexArraysOES,"xh":_emscripten_glGenerateMipmap,"wh":_emscripten_glGetActiveAttrib,"vh":_emscripten_glGetActiveUniform,"uh":_emscripten_glGetActiveUniformBlockName,"th":_emscripten_glGetActiveUniformBlockiv,"sh":_emscripten_glGetActiveUniformsiv,"rh":_emscripten_glGetAttachedShaders,"qh":_emscripten_glGetAttribLocation,"ph":_emscripten_glGetBooleanv,"oh":_emscripten_glGetBufferParameteri64v,"nh":_emscripten_glGetBufferParameteriv,"mh":_emscripten_glGetError,"lh":_emscripten_glGetFloatv,"kh":_emscripten_glGetFragDataLocation,"jh":_emscripten_glGetFramebufferAttachmentParameteriv,"ih":_emscripten_glGetInteger64i_v,"hh":_emscripten_glGetInteger64v,"gh":_emscripten_glGetIntegeri_v,"fh":_emscripten_glGetIntegerv,"eh":_emscripten_glGetInternalformativ,"dh":_emscripten_glGetProgramBinary,"ch":_emscripten_glGetProgramInfoLog,"bh":_emscripten_glGetProgramiv,"ah":_emscripten_glGetQueryObjecti64vEXT,"$g":_emscripten_glGetQueryObjectivEXT,"_g":_emscripten_glGetQueryObjectui64vEXT,"Zg":_emscripten_glGetQueryObjectuiv,"Yg":_emscripten_glGetQueryObjectuivEXT,"Xg":_emscripten_glGetQueryiv,"Wg":_emscripten_glGetQueryivEXT,"Vg":_emscripten_glGetRenderbufferParameteriv,"Ug":_emscripten_glGetSamplerParameterfv,"Tg":_emscripten_glGetSamplerParameteriv,"Sg":_emscripten_glGetShaderInfoLog,"Rg":_emscripten_glGetShaderPrecisionFormat,"Qg":_emscripten_glGetShaderSource,"Pg":_emscripten_glGetShaderiv,"Og":_emscripten_glGetString,"Ng":_emscripten_glGetStringi,"Mg":_emscripten_glGetSynciv,"Lg":_emscripten_glGetTexParameterfv,"Kg":_emscripten_glGetTexParameteriv,"Jg":_emscripten_glGetTransformFeedbackVarying,"Ig":_emscripten_glGetUniformBlockIndex,"Hg":_emscripten_glGetUniformIndices,"Gg":_emscripten_glGetUniformLocation,"Fg":_emscripten_glGetUniformfv,"Eg":_emscripten_glGetUniformiv,"Dg":_emscripten_glGetUniformuiv,"Cg":_emscripten_glGetVertexAttribIiv,"Bg":_emscripten_glGetVertexAttribIuiv,"Ag":_emscripten_glGetVertexAttribPointerv,"zg":_emscripten_glGetVertexAttribfv,"yg":_emscripten_glGetVertexAttribiv,"xg":_emscripten_glHint,"wg":_emscripten_glInvalidateFramebuffer,"vg":_emscripten_glInvalidateSubFramebuffer,"ug":_emscripten_glIsBuffer,"tg":_emscripten_glIsEnabled,"sg":_emscripten_glIsFramebuffer,"rg":_emscripten_glIsProgram,"qg":_emscripten_glIsQuery,"pg":_emscripten_glIsQueryEXT,"og":_emscripten_glIsRenderbuffer,"ng":_emscripten_glIsSampler,"mg":_emscripten_glIsShader,"lg":_emscripten_glIsSync,"kg":_emscrip
ten_glIsTexture,"jg":_emscripten_glIsTransformFeedback,"ig":_emscripten_glIsVertexArray,"hg":_emscripten_glIsVertexArrayOES,"gg":_emscripten_glLineWidth,"fg":_emscripten_glLinkProgram,"eg":_emscripten_glPauseTransformFeedback,"dg":_emscripten_glPixelStorei,"cg":_emscripten_glPolygonOffset,"bg":_emscripten_glProgramBinary,"ag":_emscripten_glProgramParameteri,"$f":_emscripten_glQueryCounterEXT,"_f":_emscripten_glReadBuffer,"Zf":_emscripten_glReadPixels,"Yf":_emscripten_glReleaseShaderCompiler,"Xf":_emscripten_glRenderbufferStorage,"Wf":_emscripten_glRenderbufferStorageMultisample,"Vf":_emscripten_glResumeTransformFeedback,"Uf":_emscripten_glSampleCoverage,"Tf":_emscripten_glSamplerParameterf,"Sf":_emscripten_glSamplerParameterfv,"Rf":_emscripten_glSamplerParameteri,"Qf":_emscripten_glSamplerParameteriv,"Pf":_emscripten_glScissor,"Of":_emscripten_glShaderBinary,"Nf":_emscripten_glShaderSource,"Mf":_emscripten_glStencilFunc,"Lf":_emscripten_glStencilFuncSeparate,"Kf":_emscripten_glStencilMask,"Jf":_emscripten_glStencilMaskSeparate,"If":_emscripten_glStencilOp,"Hf":_emscripten_glStencilOpSeparate,"Gf":_emscripten_glTexImage2D,"Ff":_emscripten_glTexImage3D,"Ef":_emscripten_glTexParameterf,"Df":_emscripten_glTexParameterfv,"Cf":_emscripten_glTexParameteri,"Bf":_emscripten_glTexParameteriv,"Af":_emscripten_glTexStorage2D,"zf":_emscripten_glTexStorage3D,"yf":_emscripten_glTexSubImage2D,"xf":_emscripten_glTexSubImage3D,"wf":_emscripten_glTransformFeedbackVaryings,"vf":_emscripten_glUniform1f,"uf":_emscripten_glUniform1fv,"tf":_emscripten_glUniform1i,"sf":_emscripten_glUniform1iv,"rf":_emscripten_glUniform1ui,"qf":_emscripten_glUniform1uiv,"pf":_emscripten_glUniform2f,"of":_emscripten_glUniform2fv,"nf":_emscripten_glUniform2i,"mf":_emscripten_glUniform2iv,"lf":_emscripten_glUniform2ui,"kf":_emscripten_glUniform2uiv,"jf":_emscripten_glUniform3f,"hf":_emscripten_glUniform3fv,"gf":_emscripten_glUniform3i,"ff":_emscripten_glUniform3iv,"ef":_emscripten_glUniform3ui,"df":_emscripten_glUniform3uiv,"cf":_emscripten_glUniform4f,"bf":_emscripten_glUniform4fv,"af":_emscripten_glUniform4i,"$e":_emscripten_glUniform4iv,"_e":_emscripten_glUniform4ui,"Ze":_emscripten_glUniform4uiv,"Ye":_emscripten_glUniformBlockBinding,"Xe":_emscripten_glUniformMatrix2fv,"We":_emscripten_glUniformMatrix2x3fv,"Ve":_emscripten_glUniformMatrix2x4fv,"Ue":_emscripten_glUniformMatrix3fv,"Te":_emscripten_glUniformMatrix3x2fv,"Se":_emscripten_glUniformMatrix3x4fv,"Re":_emscripten_glUniformMatrix4fv,"Qe":_emscripten_glUniformMatrix4x2fv,"Pe":_emscripten_glUniformMatrix4x3fv,"Oe":_emscripten_glUseProgram,"Ne":_emscripten_glValidateProgram,"Me":_emscripten_glVertexAttrib1f,"Le":_emscripten_glVertexAttrib1fv,"Ke":_emscripten_glVertexAttrib2f,"Je":_emscripten_glVertexAttrib2fv,"Ie":_emscripten_glVertexAttrib3f,"He":_emscripten_glVertexAttrib3fv,"Ge":_emscripten_glVertexAttrib4f,"Fe":_emscripten_glVertexAttrib4fv,"Ee":_emscripten_glVertexAttribDivisor,"De":_emscripten_glVertexAttribDivisorANGLE,"Ce":_emscripten_glVertexAttribDivisorARB,"Be":_emscripten_glVertexAttribDivisorEXT,"Ae":_emscripten_glVertexAttribDivisorNV,"ze":_emscripten_glVertexAttribI4i,"ye":_emscripten_glVertexAttribI4iv,"xe":_emscripten_glVertexAttribI4ui,"we":_emscripten_glVertexAttribI4uiv,"ve":_emscripten_glVertexAttribIPointer,"ue":_emscripten_glVertexAttribPointer,"te":_emscripten_glViewport,"se":_emscripten_glWaitSync,"re":_emscripten_memcpy_big,"db":_emscripten_resize_heap,"Gb":_emscripten_set_main_loop,"Fb":_emscripten_webgl_commit_frame,"qe":_emscripten_webgl_create_cont
ext,"pe":_emscripten_webgl_destroy_context,"oe":_emscripten_webgl_init_context_attributes,"ne":_emscripten_webgl_make_context_current,"Fj":_environ_get,"Ej":_environ_sizes_get,"Ba":_fd_close,"Dj":_fd_fdstat_get,"Jb":_fd_read,"Pb":_fd_seek,"Ib":_fd_write,"l":_getTempRet0,"cb":_getaddrinfo,"me":_getnameinfo,"d":_glActiveTexture,"Sa":_glAttachShader,"bb":_glBeginTransformFeedback,"Eb":_glBindAttribLocation,"c":_glBindBuffer,"Q":_glBindBufferBase,"f":_glBindFramebuffer,"da":_glBindRenderbuffer,"b":_glBindTexture,"n":_glBindVertexArray,"E":_glBlendEquation,"_":_glBlendFunc,"x":_glBlendFuncSeparate,"ja":_glBlitFramebuffer,"r":_glBufferData,"M":_glBufferSubData,"L":_glCheckFramebufferStatus,"K":_glClear,"va":_glClearBufferfv,"P":_glClearColor,"ca":_glClearDepthf,"O":_glColorMask,"ma":_glCompileShader,"Db":_glCompressedTexImage2D,"le":_glCompressedTexSubImage2D,"Cb":_glCompressedTexSubImage3D,"ke":_glCopyBufferSubData,"ab":_glCopyTexSubImage2D,"$a":_glCreateProgram,"Ha":_glCreateShader,"ua":_glCullFace,"N":_glDeleteBuffers,"G":_glDeleteFramebuffers,"U":_glDeleteProgram,"V":_glDeleteRenderbuffers,"J":_glDeleteShader,"B":_glDeleteTextures,"fa":_glDeleteVertexArrays,"Z":_glDepthFunc,"I":_glDepthMask,"j":_glDisable,"q":_glDisableVertexAttribArray,"D":_glDrawArrays,"Aa":_glDrawArraysInstanced,"La":_glDrawBuffers,"Y":_glDrawElements,"qa":_glDrawElementsInstanced,"t":_glEnable,"k":_glEnableVertexAttribArray,"_a":_glEndTransformFeedback,"Bb":_glFinish,"ba":_glFramebufferRenderbuffer,"y":_glFramebufferTexture2D,"je":_glFramebufferTextureLayer,"Ab":_glFrontFace,"C":_glGenBuffers,"F":_glGenFramebuffers,"ia":_glGenRenderbuffers,"w":_glGenTextures,"W":_glGenVertexArrays,"T":_glGenerateMipmap,"zb":_glGetError,"yb":_glGetFloatv,"aa":_glGetIntegerv,"ie":_glGetProgramBinary,"xb":_glGetProgramInfoLog,"Ga":_glGetProgramiv,"Ra":_glGetShaderInfoLog,"he":_glGetShaderSource,"ea":_glGetShaderiv,"za":_glGetString,"ge":_glGetStringi,"fe":_glGetUniformBlockIndex,"ya":_glGetUniformLocation,"ee":_glInvalidateFramebuffer,"wb":_glLinkProgram,"la":_glPixelStorei,"de":_glProgramBinary,"ce":_glProgramParameteri,"ha":_glReadBuffer,"Za":_glReadPixels,"ga":_glRenderbufferStorage,"Ka":_glRenderbufferStorageMultisample,"S":_glScissor,"Qa":_glShaderSource,"s":_glTexImage2D,"Ja":_glTexImage3D,"h":_glTexParameterf,"e":_glTexParameteri,"be":_glTexStorage2D,"Ia":_glTexSubImage2D,"Pa":_glTexSubImage3D,"ae":_glTransformFeedbackVaryings,"g":_glUniform1f,"v":_glUniform1i,"Ya":_glUniform1iv,"vb":_glUniform1ui,"Xa":_glUniform2f,"o":_glUniform2fv,"Fa":_glUniform2i,"ka":_glUniform2iv,"Wa":_glUniform3f,"X":_glUniform3fv,"Ea":_glUniform3i,"xa":_glUniform4f,"z":_glUniform4fv,"Da":_glUniform4i,"$d":_glUniformBlockBinding,"ub":_glUniformMatrix2fv,"tb":_glUniformMatrix3fv,"p":_glUniformMatrix4fv,"ta":_glUseProgram,"A":_glVertexAttrib4f,"R":_glVertexAttrib4fv,"H":_glVertexAttribDivisor,"_d":_glVertexAttribI4ui,"Ca":_glVertexAttribIPointer,"i":_glVertexAttribPointer,"u":_glViewport,"Zd":_godot_audio_capture_start,"Yd":_godot_audio_capture_stop,"Xd":_godot_audio_has_script_processor,"Wd":_godot_audio_has_worklet,"Vd":_godot_audio_init,"Ud":_godot_audio_is_available,"Td":_godot_audio_resume,"Sd":_godot_audio_script_create,"Rd":_godot_audio_script_start,"Qd":_godot_audio_worklet_create,"Pd":_godot_audio_worklet_start_no_threads,"Od":_godot_js_config_canvas_id_get,"Nd":_godot_js_config_locale_get,"Md":_godot_js_display_alert,"Ld":_godot_js_display_canvas_focus,"Kd":_godot_js_display_canvas_is_focused,"Jd":_godot_js_display_clipboard_get,"Id":_godot_js_display_c
lipboard_set,"Hd":_godot_js_display_cursor_is_hidden,"Gd":_godot_js_display_cursor_is_locked,"Va":_godot_js_display_cursor_lock_set,"sb":_godot_js_display_cursor_set_custom_shape,"Fd":_godot_js_display_cursor_set_shape,"Ua":_godot_js_display_cursor_set_visible,"Ed":_godot_js_display_desired_size_set,"Dd":_godot_js_display_fullscreen_cb,"Cd":_godot_js_display_fullscreen_exit,"Bd":_godot_js_display_fullscreen_request,"Ad":_godot_js_display_glGetBufferSubData,"rb":_godot_js_display_has_webgl,"zd":_godot_js_display_is_swap_ok_cancel,"yd":_godot_js_display_notification_cb,"xd":_godot_js_display_pixel_ratio_get,"wd":_godot_js_display_screen_dpi_get,"vd":_godot_js_display_screen_size_get,"ud":_godot_js_display_setup_canvas,"td":_godot_js_display_size_update,"sd":_godot_js_display_touchscreen_is_available,"rd":_godot_js_display_vk_available,"qd":_godot_js_display_vk_cb,"pd":_godot_js_display_vk_hide,"od":_godot_js_display_vk_show,"nd":_godot_js_display_window_blur_cb,"md":_godot_js_display_window_icon_set,"ld":_godot_js_display_window_size_get,"kd":_godot_js_display_window_title_set,"jd":_godot_js_eval,"id":_godot_js_fetch_body_length_get,"hd":_godot_js_fetch_create,"qb":_godot_js_fetch_free,"gd":_godot_js_fetch_http_status_get,"fd":_godot_js_fetch_is_chunked,"ed":_godot_js_fetch_read_chunk,"dd":_godot_js_fetch_read_headers,"Ta":_godot_js_fetch_state_get,"cd":_godot_js_input_drop_files_cb,"bd":_godot_js_input_gamepad_cb,"ad":_godot_js_input_gamepad_sample,"$c":_godot_js_input_gamepad_sample_count,"_c":_godot_js_input_gamepad_sample_get,"Zc":_godot_js_input_key_cb,"Yc":_godot_js_input_mouse_button_cb,"Xc":_godot_js_input_mouse_move_cb,"Wc":_godot_js_input_mouse_wheel_cb,"Vc":_godot_js_input_paste_cb,"Uc":_godot_js_input_touch_cb,"Tc":_godot_js_input_vibrate_handheld,"Sc":_godot_js_os_download_buffer,"Rc":_godot_js_os_execute,"Qc":_godot_js_os_finish_async,"Pc":_godot_js_os_fs_is_persistent,"Oc":_godot_js_os_fs_sync,"Nc":_godot_js_os_hw_concurrency_get,"Mc":_godot_js_os_request_quit_cb,"Lc":_godot_js_os_shell_open,"Kc":_godot_js_pwa_cb,"Jc":_godot_js_pwa_update,"Ic":_godot_js_rtc_datachannel_close,"Hc":_godot_js_rtc_datachannel_connect,"Gc":_godot_js_rtc_datachannel_destroy,"Fc":_godot_js_rtc_datachannel_get_buffered_amount,"Ec":_godot_js_rtc_datachannel_id_get,"Dc":_godot_js_rtc_datachannel_is_negotiated,"Cc":_godot_js_rtc_datachannel_is_ordered,"Bc":_godot_js_rtc_datachannel_label_get,"Ac":_godot_js_rtc_datachannel_max_packet_lifetime_get,"zc":_godot_js_rtc_datachannel_max_retransmits_get,"yc":_godot_js_rtc_datachannel_protocol_get,"xc":_godot_js_rtc_datachannel_ready_state_get,"wc":_godot_js_rtc_datachannel_send,"vc":_godot_js_rtc_pc_close,"uc":_godot_js_rtc_pc_create,"tc":_godot_js_rtc_pc_datachannel_create,"pb":_godot_js_rtc_pc_destroy,"sc":_godot_js_rtc_pc_ice_candidate_add,"rc":_godot_js_rtc_pc_local_description_set,"qc":_godot_js_rtc_pc_offer_create,"pc":_godot_js_rtc_pc_remote_description_set,"ob":_godot_js_websocket_buffered_amount,"oc":_godot_js_websocket_close,"nc":_godot_js_websocket_create,"nb":_godot_js_websocket_destroy,"mc":_godot_js_websocket_send,"lc":_godot_js_wrapper_create_cb,"kc":_godot_js_wrapper_create_object,"jc":_godot_js_wrapper_interface_get,"ic":_godot_js_wrapper_object_call,"hc":_godot_js_wrapper_object_get,"mb":_godot_js_wrapper_object_getvar,"gc":_godot_js_wrapper_object_set,"fc":_godot_js_wrapper_object_setvar,"ec":_godot_js_wrapper_object_unref,"dc":_godot_webxr_commit_for_eye,"cc":_godot_webxr_get_bounds_geometry,"lb":_godot_webxr_get_controller_axes,"bc":_godot_we
bxr_get_controller_buttons,"ac":_godot_webxr_get_controller_count,"Oa":_godot_webxr_get_controller_target_ray_mode,"$b":_godot_webxr_get_controller_transform,"_b":_godot_webxr_get_projection_for_eye,"Zb":_godot_webxr_get_render_targetsize,"Yb":_godot_webxr_get_transform_for_eye,"Xb":_godot_webxr_get_view_count,"Wb":_godot_webxr_get_visibility_state,"Vb":_godot_webxr_initialize,"kb":_godot_webxr_is_controller_connected,"Ub":_godot_webxr_is_session_supported,"Tb":_godot_webxr_is_supported,"jb":_godot_webxr_sample_controller_data,"Sb":_godot_webxr_uninitialize,"pa":invoke_ii,"oa":invoke_iii,"ib":invoke_iiii,"hb":invoke_iiiii,"Rb":invoke_iiiiii,"Qb":invoke_iiiiiii,"Ob":invoke_iij,"$":invoke_vi,"sa":invoke_vii,"wa":invoke_viii,"ra":invoke_viiii,"Na":invoke_viiiiiii,"m":_setTempRet0,"gb":_strftime,"Nb":_strftime_l};var asm=createWasm();var ___wasm_call_ctors=Module["___wasm_call_ctors"]=function(){return(___wasm_call_ctors=Module["___wasm_call_ctors"]=Module["asm"]["fk"]).apply(null,arguments)};var _free=Module["_free"]=function(){return(_free=Module["_free"]=Module["asm"]["gk"]).apply(null,arguments)};var __Z13godot_js_mainiPPc=Module["__Z13godot_js_mainiPPc"]=function(){return(__Z13godot_js_mainiPPc=Module["__Z13godot_js_mainiPPc"]=Module["asm"]["hk"]).apply(null,arguments)};var _main=Module["_main"]=function(){return(_main=Module["_main"]=Module["asm"]["ik"]).apply(null,arguments)};var _malloc=Module["_malloc"]=function(){return(_malloc=Module["_malloc"]=Module["asm"]["jk"]).apply(null,arguments)};var _htonl=Module["_htonl"]=function(){return(_htonl=Module["_htonl"]=Module["asm"]["kk"]).apply(null,arguments)};var _htons=Module["_htons"]=function(){return(_htons=Module["_htons"]=Module["asm"]["lk"]).apply(null,arguments)};var _ntohs=Module["_ntohs"]=function(){return(_ntohs=Module["_ntohs"]=Module["asm"]["mk"]).apply(null,arguments)};var ___errno_location=Module["___errno_location"]=function(){return(___errno_location=Module["___errno_location"]=Module["asm"]["nk"]).apply(null,arguments)};var _fflush=Module["_fflush"]=function(){return(_fflush=Module["_fflush"]=Module["asm"]["ok"]).apply(null,arguments)};var __emwebxr_on_input_event=Module["__emwebxr_on_input_event"]=function(){return(__emwebxr_on_input_event=Module["__emwebxr_on_input_event"]=Module["asm"]["pk"]).apply(null,arguments)};var __emwebxr_on_simple_event=Module["__emwebxr_on_simple_event"]=function(){return(__emwebxr_on_simple_event=Module["__emwebxr_on_simple_event"]=Module["asm"]["qk"]).apply(null,arguments)};var ___funcs_on_exit=Module["___funcs_on_exit"]=function(){return(___funcs_on_exit=Module["___funcs_on_exit"]=Module["asm"]["rk"]).apply(null,arguments)};var _setThrew=Module["_setThrew"]=function(){return(_setThrew=Module["_setThrew"]=Module["asm"]["tk"]).apply(null,arguments)};var stackSave=Module["stackSave"]=function(){return(stackSave=Module["stackSave"]=Module["asm"]["uk"]).apply(null,arguments)};var stackRestore=Module["stackRestore"]=function(){return(stackRestore=Module["stackRestore"]=Module["asm"]["vk"]).apply(null,arguments)};var stackAlloc=Module["stackAlloc"]=function(){return(stackAlloc=Module["stackAlloc"]=Module["asm"]["wk"]).apply(null,arguments)};var dynCall_iij=Module["dynCall_iij"]=function(){return(dynCall_iij=Module["dynCall_iij"]=Module["asm"]["xk"]).apply(null,arguments)};function invoke_vii(index,a1,a2){var sp=stackSave();try{getWasmTableEntry(index)(a1,a2)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_vi(index,a1){var 
sp=stackSave();try{getWasmTableEntry(index)(a1)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_viii(index,a1,a2,a3){var sp=stackSave();try{getWasmTableEntry(index)(a1,a2,a3)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_ii(index,a1){var sp=stackSave();try{return getWasmTableEntry(index)(a1)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iii(index,a1,a2){var sp=stackSave();try{return getWasmTableEntry(index)(a1,a2)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iiiii(index,a1,a2,a3,a4){var sp=stackSave();try{return getWasmTableEntry(index)(a1,a2,a3,a4)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iiiiii(index,a1,a2,a3,a4,a5){var sp=stackSave();try{return getWasmTableEntry(index)(a1,a2,a3,a4,a5)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_viiii(index,a1,a2,a3,a4){var sp=stackSave();try{getWasmTableEntry(index)(a1,a2,a3,a4)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iiii(index,a1,a2,a3){var sp=stackSave();try{return getWasmTableEntry(index)(a1,a2,a3)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_viiiiiii(index,a1,a2,a3,a4,a5,a6,a7){var sp=stackSave();try{getWasmTableEntry(index)(a1,a2,a3,a4,a5,a6,a7)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iiiiiii(index,a1,a2,a3,a4,a5,a6){var sp=stackSave();try{return getWasmTableEntry(index)(a1,a2,a3,a4,a5,a6)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iij(index,a1,a2,a3){var sp=stackSave();try{return dynCall_iij(index,a1,a2,a3)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}Module["cwrap"]=cwrap;Module["callMain"]=callMain;var calledRun;function ExitStatus(status){this.name="ExitStatus";this.message="Program terminated with exit("+status+")";this.status=status}var calledMain=false;dependenciesFulfilled=function runCaller(){if(!calledRun)run();if(!calledRun)dependenciesFulfilled=runCaller};function callMain(args){var entryFunction=Module["_main"];args=args||[];args.unshift(thisProgram);var argc=args.length;var argv=stackAlloc((argc+1)*4);var argv_ptr=argv>>2;args.forEach(arg=>{HEAP32[argv_ptr++]=allocateUTF8OnStack(arg)});HEAP32[argv_ptr]=0;try{var ret=entryFunction(argc,argv);exit(ret,true);return ret}catch(e){return handleException(e)}finally{calledMain=true}}function run(args){args=args||arguments_;if(runDependencies>0){return}preRun();if(runDependencies>0){return}function doRun(){if(calledRun)return;calledRun=true;Module["calledRun"]=true;if(ABORT)return;initRuntime();preMain();readyPromiseResolve(Module);if(Module["onRuntimeInitialized"])Module["onRuntimeInitialized"]();if(shouldRunNow)callMain(args);postRun()}if(Module["setStatus"]){Module["setStatus"]("Running...");setTimeout(function(){setTimeout(function(){Module["setStatus"]("")},1);doRun()},1)}else{doRun()}}Module["run"]=run;function exit(status,implicit){EXITSTATUS=status;if(!keepRuntimeAlive()){exitRuntime()}procExit(status)}function procExit(code){EXITSTATUS=code;if(!keepRuntimeAlive()){if(Module["onExit"])Module["onExit"](code);ABORT=true}quit_(code,new ExitStatus(code))}if(Module["preInit"]){if(typeof Module["preInit"]=="function")Module["preInit"]=[Module["preInit"]];while(Module["preInit"].length>0){Module["preInit"].pop()()}}var shouldRunNow=false;if(Module["noInitialRun"])shouldRunNow=false;run(); - - - return Godot.ready -} -); -})(); -if (typeof exports === 
'object' && typeof module === 'object') - module.exports = Godot; -else if (typeof define === 'function' && define['amd']) - define([], function() { return Godot; }); -else if (typeof exports === 'object') - exports["Godot"] = Godot; - -const Preloader = /** @constructor */ function () { // eslint-disable-line no-unused-vars - function getTrackedResponse(response, load_status) { - function onloadprogress(reader, controller) { - return reader.read().then(function (result) { - if (load_status.done) { - return Promise.resolve(); - } - if (result.value) { - controller.enqueue(result.value); - load_status.loaded += result.value.length; - } - if (!result.done) { - return onloadprogress(reader, controller); - } - load_status.done = true; - return Promise.resolve(); - }); - } - const reader = response.body.getReader(); - return new Response(new ReadableStream({ - start: function (controller) { - onloadprogress(reader, controller).then(function () { - controller.close(); - }); - }, - }), { headers: response.headers }); - } - - function loadFetch(file, tracker, fileSize, raw) { - tracker[file] = { - total: fileSize || 0, - loaded: 0, - done: false, - }; - return fetch(file).then(function (response) { - if (!response.ok) { - return Promise.reject(new Error(`Failed loading file '${file}'`)); - } - const tr = getTrackedResponse(response, tracker[file]); - if (raw) { - return Promise.resolve(tr); - } - return tr.arrayBuffer(); - }); - } - - function retry(func, attempts = 1) { - function onerror(err) { - if (attempts <= 1) { - return Promise.reject(err); - } - return new Promise(function (resolve, reject) { - setTimeout(function () { - retry(func, attempts - 1).then(resolve).catch(reject); - }, 1000); - }); - } - return func().catch(onerror); - } - - const DOWNLOAD_ATTEMPTS_MAX = 4; - const loadingFiles = {}; - const lastProgress = { loaded: 0, total: 0 }; - let progressFunc = null; - - const animateProgress = function () { - let loaded = 0; - let total = 0; - let totalIsValid = true; - let progressIsFinal = true; - - Object.keys(loadingFiles).forEach(function (file) { - const stat = loadingFiles[file]; - if (!stat.done) { - progressIsFinal = false; - } - if (!totalIsValid || stat.total === 0) { - totalIsValid = false; - total = 0; - } else { - total += stat.total; - } - loaded += stat.loaded; - }); - if (loaded !== lastProgress.loaded || total !== lastProgress.total) { - lastProgress.loaded = loaded; - lastProgress.total = total; - if (typeof progressFunc === 'function') { - progressFunc(loaded, total); - } - } - if (!progressIsFinal) { - requestAnimationFrame(animateProgress); - } - }; - - this.animateProgress = animateProgress; - - this.setProgressFunc = function (callback) { - progressFunc = callback; - }; - - this.loadPromise = function (file, fileSize, raw = false) { - return retry(loadFetch.bind(null, file, loadingFiles, fileSize, raw), DOWNLOAD_ATTEMPTS_MAX); - }; - - this.preloadedFiles = []; - this.preload = function (pathOrBuffer, destPath, fileSize) { - let buffer = null; - if (typeof pathOrBuffer === 'string') { - const me = this; - return this.loadPromise(pathOrBuffer, fileSize).then(function (buf) { - me.preloadedFiles.push({ - path: destPath || pathOrBuffer, - buffer: buf, - }); - return Promise.resolve(); - }); - } else if (pathOrBuffer instanceof ArrayBuffer) { - buffer = new Uint8Array(pathOrBuffer); - } else if (ArrayBuffer.isView(pathOrBuffer)) { - buffer = new Uint8Array(pathOrBuffer.buffer); - } - if (buffer) { - this.preloadedFiles.push({ - path: destPath, - buffer: pathOrBuffer, 
- }); - return Promise.resolve(); - } - return Promise.reject(new Error('Invalid object for preloading')); - }; -}; - -/** - * An object used to configure the Engine instance based on godot export options, and to override those in custom HTML - * templates if needed. - * - * @header Engine configuration - * @summary The Engine configuration object. This is just a typedef, create it like a regular object, e.g.: - * - * ``const MyConfig = { executable: 'godot', unloadAfterInit: false }`` - * - * @typedef {Object} EngineConfig - */ -const EngineConfig = {}; // eslint-disable-line no-unused-vars - -/** - * @struct - * @constructor - * @ignore - */ -const InternalConfig = function (initConfig) { // eslint-disable-line no-unused-vars - const cfg = /** @lends {InternalConfig.prototype} */ { - /** - * Whether the unload the engine automatically after the instance is initialized. - * - * @memberof EngineConfig - * @default - * @type {boolean} - */ - unloadAfterInit: true, - /** - * The HTML DOM Canvas object to use. - * - * By default, the first canvas element in the document will be used is none is specified. - * - * @memberof EngineConfig - * @default - * @type {?HTMLCanvasElement} - */ - canvas: null, - /** - * The name of the WASM file without the extension. (Set by Godot Editor export process). - * - * @memberof EngineConfig - * @default - * @type {string} - */ - executable: '', - /** - * An alternative name for the game pck to load. The executable name is used otherwise. - * - * @memberof EngineConfig - * @default - * @type {?string} - */ - mainPack: null, - /** - * Specify a language code to select the proper localization for the game. - * - * The browser locale will be used if none is specified. See complete list of - * :ref:`supported locales `. - * - * @memberof EngineConfig - * @type {?string} - * @default - */ - locale: null, - /** - * The canvas resize policy determines how the canvas should be resized by Godot. - * - * ``0`` means Godot won't do any resizing. This is useful if you want to control the canvas size from - * javascript code in your template. - * - * ``1`` means Godot will resize the canvas on start, and when changing window size via engine functions. - * - * ``2`` means Godot will adapt the canvas size to match the whole browser window. - * - * @memberof EngineConfig - * @type {number} - * @default - */ - canvasResizePolicy: 2, - /** - * The arguments to be passed as command line arguments on startup. - * - * See :ref:`command line tutorial `. - * - * **Note**: :js:meth:`startGame ` will always add the ``--main-pack`` argument. - * - * @memberof EngineConfig - * @type {Array} - * @default - */ - args: [], - /** - * When enabled, the game canvas will automatically grab the focus when the engine starts. - * - * @memberof EngineConfig - * @type {boolean} - * @default - */ - focusCanvas: true, - /** - * When enabled, this will turn on experimental virtual keyboard support on mobile. - * - * @memberof EngineConfig - * @type {boolean} - * @default - */ - experimentalVK: false, - /** - * The progressive web app service worker to install. - * @memberof EngineConfig - * @default - * @type {string} - */ - serviceWorker: '', - /** - * @ignore - * @type {Array.} - */ - persistentPaths: ['/userfs'], - /** - * @ignore - * @type {boolean} - */ - persistentDrops: false, - /** - * @ignore - * @type {Array.} - */ - gdnativeLibs: [], - /** - * @ignore - * @type {Array.} - */ - fileSizes: [], - /** - * A callback function for handling Godot's ``OS.execute`` calls. 
- * - * This is for example used in the Web Editor template to switch between project manager and editor, and for running the game. - * - * @callback EngineConfig.onExecute - * @param {string} path The path that Godot's wants executed. - * @param {Array.} args The arguments of the "command" to execute. - */ - /** - * @ignore - * @type {?function(string, Array.)} - */ - onExecute: null, - /** - * A callback function for being notified when the Godot instance quits. - * - * **Note**: This function will not be called if the engine crashes or become unresponsive. - * - * @callback EngineConfig.onExit - * @param {number} status_code The status code returned by Godot on exit. - */ - /** - * @ignore - * @type {?function(number)} - */ - onExit: null, - /** - * A callback function for displaying download progress. - * - * The function is called once per frame while downloading files, so the usage of ``requestAnimationFrame()`` - * is not necessary. - * - * If the callback function receives a total amount of bytes as 0, this means that it is impossible to calculate. - * Possible reasons include: - * - * - Files are delivered with server-side chunked compression - * - Files are delivered with server-side compression on Chromium - * - Not all file downloads have started yet (usually on servers without multi-threading) - * - * @callback EngineConfig.onProgress - * @param {number} current The current amount of downloaded bytes so far. - * @param {number} total The total amount of bytes to be downloaded. - */ - /** - * @ignore - * @type {?function(number, number)} - */ - onProgress: null, - /** - * A callback function for handling the standard output stream. This method should usually only be used in debug pages. - * - * By default, ``console.log()`` is used. - * - * @callback EngineConfig.onPrint - * @param {...*} [var_args] A variadic number of arguments to be printed. - */ - /** - * @ignore - * @type {?function(...*)} - */ - onPrint: function () { - console.log.apply(console, Array.from(arguments)); // eslint-disable-line no-console - }, - /** - * A callback function for handling the standard error stream. This method should usually only be used in debug pages. - * - * By default, ``console.error()`` is used. - * - * @callback EngineConfig.onPrintError - * @param {...*} [var_args] A variadic number of arguments to be printed as errors. - */ - /** - * @ignore - * @type {?function(...*)} - */ - onPrintError: function (var_args) { - console.error.apply(console, Array.from(arguments)); // eslint-disable-line no-console - }, - }; - - /** - * @ignore - * @struct - * @constructor - * @param {EngineConfig} opts - */ - function Config(opts) { - this.update(opts); - } - - Config.prototype = cfg; - - /** - * @ignore - * @param {EngineConfig} opts - */ - Config.prototype.update = function (opts) { - const config = opts || {}; - // NOTE: We must explicitly pass the default, accessing it via - // the key will fail due to closure compiler renames. 
- function parse(key, def) { - if (typeof (config[key]) === 'undefined') { - return def; - } - return config[key]; - } - // Module config - this.unloadAfterInit = parse('unloadAfterInit', this.unloadAfterInit); - this.onPrintError = parse('onPrintError', this.onPrintError); - this.onPrint = parse('onPrint', this.onPrint); - this.onProgress = parse('onProgress', this.onProgress); - - // Godot config - this.canvas = parse('canvas', this.canvas); - this.executable = parse('executable', this.executable); - this.mainPack = parse('mainPack', this.mainPack); - this.locale = parse('locale', this.locale); - this.canvasResizePolicy = parse('canvasResizePolicy', this.canvasResizePolicy); - this.persistentPaths = parse('persistentPaths', this.persistentPaths); - this.persistentDrops = parse('persistentDrops', this.persistentDrops); - this.experimentalVK = parse('experimentalVK', this.experimentalVK); - this.focusCanvas = parse('focusCanvas', this.focusCanvas); - this.serviceWorker = parse('serviceWorker', this.serviceWorker); - this.gdnativeLibs = parse('gdnativeLibs', this.gdnativeLibs); - this.fileSizes = parse('fileSizes', this.fileSizes); - this.args = parse('args', this.args); - this.onExecute = parse('onExecute', this.onExecute); - this.onExit = parse('onExit', this.onExit); - }; - - /** - * @ignore - * @param {string} loadPath - * @param {Response} response - */ - Config.prototype.getModuleConfig = function (loadPath, response) { - let r = response; - return { - 'print': this.onPrint, - 'printErr': this.onPrintError, - 'thisProgram': this.executable, - 'noExitRuntime': true, - 'dynamicLibraries': [`${loadPath}.side.wasm`], - 'instantiateWasm': function (imports, onSuccess) { - function done(result) { - onSuccess(result['instance'], result['module']); - } - if (typeof (WebAssembly.instantiateStreaming) !== 'undefined') { - WebAssembly.instantiateStreaming(Promise.resolve(r), imports).then(done); - } else { - r.arrayBuffer().then(function (buffer) { - WebAssembly.instantiate(buffer, imports).then(done); - }); - } - r = null; - return {}; - }, - 'locateFile': function (path) { - if (path.endsWith('.worker.js')) { - return `${loadPath}.worker.js`; - } else if (path.endsWith('.audio.worklet.js')) { - return `${loadPath}.audio.worklet.js`; - } else if (path.endsWith('.js')) { - return `${loadPath}.js`; - } else if (path.endsWith('.side.wasm')) { - return `${loadPath}.side.wasm`; - } else if (path.endsWith('.wasm')) { - return `${loadPath}.wasm`; - } - return path; - }, - }; - }; - - /** - * @ignore - * @param {function()} cleanup - */ - Config.prototype.getGodotConfig = function (cleanup) { - // Try to find a canvas - if (!(this.canvas instanceof HTMLCanvasElement)) { - const nodes = document.getElementsByTagName('canvas'); - if (nodes.length && nodes[0] instanceof HTMLCanvasElement) { - this.canvas = nodes[0]; - } - if (!this.canvas) { - throw new Error('No canvas found in page'); - } - } - // Canvas can grab focus on click, or key events won't work. - if (this.canvas.tabIndex < 0) { - this.canvas.tabIndex = 0; - } - - // Browser locale, or custom one if defined. - let locale = this.locale; - if (!locale) { - locale = navigator.languages ? navigator.languages[0] : navigator.language; - locale = locale.split('.')[0]; - } - locale = locale.replace('-', '_'); - const onExit = this.onExit; - - // Godot configuration. 
- return { - 'canvas': this.canvas, - 'canvasResizePolicy': this.canvasResizePolicy, - 'locale': locale, - 'persistentDrops': this.persistentDrops, - 'virtualKeyboard': this.experimentalVK, - 'focusCanvas': this.focusCanvas, - 'onExecute': this.onExecute, - 'onExit': function (p_code) { - cleanup(); // We always need to call the cleanup callback to free memory. - if (typeof (onExit) === 'function') { - onExit(p_code); - } - }, - }; - }; - return new Config(initConfig); -}; - -/** - * Projects exported for the Web expose the :js:class:`Engine` class to the JavaScript environment, that allows - * fine control over the engine's start-up process. - * - * This API is built in an asynchronous manner and requires basic understanding - * of `Promises `__. - * - * @module Engine - * @header HTML5 shell class reference - */ -const Engine = (function () { - const preloader = new Preloader(); - - let loadPromise = null; - let loadPath = ''; - let initPromise = null; - - /** - * @classdesc The ``Engine`` class provides methods for loading and starting exported projects on the Web. For default export - * settings, this is already part of the exported HTML page. To understand practical use of the ``Engine`` class, - * see :ref:`Custom HTML page for Web export `. - * - * @description Create a new Engine instance with the given configuration. - * - * @global - * @constructor - * @param {EngineConfig} initConfig The initial config for this instance. - */ - function Engine(initConfig) { // eslint-disable-line no-shadow - this.config = new InternalConfig(initConfig); - this.rtenv = null; - } - - /** - * Load the engine from the specified base path. - * - * @param {string} basePath Base path of the engine to load. - * @param {number=} [size=0] The file size if known. - * @returns {Promise} A Promise that resolves once the engine is loaded. - * - * @function Engine.load - */ - Engine.load = function (basePath, size) { - if (loadPromise == null) { - loadPath = basePath; - loadPromise = preloader.loadPromise(`${loadPath}.wasm`, size, true); - requestAnimationFrame(preloader.animateProgress); - } - return loadPromise; - }; - - /** - * Unload the engine to free memory. - * - * This method will be called automatically depending on the configuration. See :js:attr:`unloadAfterInit`. - * - * @function Engine.unload - */ - Engine.unload = function () { - loadPromise = null; - }; - - /** - * Check whether WebGL is available. Optionally, specify a particular version of WebGL to check for. - * - * @param {number=} [majorVersion=1] The major WebGL version to check for. - * @returns {boolean} If the given major version of WebGL is available. - * @function Engine.isWebGLAvailable - */ - Engine.isWebGLAvailable = function (majorVersion = 1) { - try { - return !!document.createElement('canvas').getContext(['webgl', 'webgl2'][majorVersion - 1]); - } catch (e) { /* Not available */ } - return false; - }; - - /** - * Safe Engine constructor, creates a new prototype for every new instance to avoid prototype pollution. - * @ignore - * @constructor - */ - function SafeEngine(initConfig) { - const proto = /** @lends Engine.prototype */ { - /** - * Initialize the engine instance. Optionally, pass the base path to the engine to load it, - * if it hasn't been loaded yet. See :js:meth:`Engine.load`. - * - * @param {string=} basePath Base path of the engine to load. - * @return {Promise} A ``Promise`` that resolves once the engine is loaded and initialized. 
- */ - init: function (basePath) { - if (initPromise) { - return initPromise; - } - if (loadPromise == null) { - if (!basePath) { - initPromise = Promise.reject(new Error('A base path must be provided when calling `init` and the engine is not loaded.')); - return initPromise; - } - Engine.load(basePath, this.config.fileSizes[`${basePath}.wasm`]); - } - const me = this; - function doInit(promise) { - // Care! Promise chaining is bogus with old emscripten versions. - // This caused a regression with the Mono build (which uses an older emscripten version). - // Make sure to test that when refactoring. - return new Promise(function (resolve, reject) { - promise.then(function (response) { - const cloned = new Response(response.clone().body, { 'headers': [['content-type', 'application/wasm']] }); - Godot(me.config.getModuleConfig(loadPath, cloned)).then(function (module) { - const paths = me.config.persistentPaths; - module['initFS'](paths).then(function (err) { - me.rtenv = module; - if (me.config.unloadAfterInit) { - Engine.unload(); - } - resolve(); - }); - }); - }); - }); - } - preloader.setProgressFunc(this.config.onProgress); - initPromise = doInit(loadPromise); - return initPromise; - }, - - /** - * Load a file so it is available in the instance's file system once it runs. Must be called **before** starting the - * instance. - * - * If not provided, the ``path`` is derived from the URL of the loaded file. - * - * @param {string|ArrayBuffer} file The file to preload. - * - * If a ``string`` the file will be loaded from that path. - * - * If an ``ArrayBuffer`` or a view on one, the buffer will used as the content of the file. - * - * @param {string=} path Path by which the file will be accessible. Required, if ``file`` is not a string. - * - * @returns {Promise} A Promise that resolves once the file is loaded. - */ - preloadFile: function (file, path) { - return preloader.preload(file, path, this.config.fileSizes[file]); - }, - - /** - * Start the engine instance using the given override configuration (if any). - * :js:meth:`startGame ` can be used in typical cases instead. - * - * This will initialize the instance if it is not initialized. For manual initialization, see :js:meth:`init `. - * The engine must be loaded beforehand. - * - * Fails if a canvas cannot be found on the page, or not specified in the configuration. - * - * @param {EngineConfig} override An optional configuration override. - * @return {Promise} Promise that resolves once the engine started. - */ - start: function (override) { - this.config.update(override); - const me = this; - return me.init().then(function () { - if (!me.rtenv) { - return Promise.reject(new Error('The engine must be initialized before it can be started')); - } - - let config = {}; - try { - config = me.config.getGodotConfig(function () { - me.rtenv = null; - }); - } catch (e) { - return Promise.reject(e); - } - // Godot configuration. - me.rtenv['initConfig'](config); - - // Preload GDNative libraries. 
- const libs = []; - me.config.gdnativeLibs.forEach(function (lib) { - libs.push(me.rtenv['loadDynamicLibrary'](lib, { 'loadAsync': true })); - }); - return Promise.all(libs).then(function () { - return new Promise(function (resolve, reject) { - preloader.preloadedFiles.forEach(function (file) { - me.rtenv['copyToFS'](file.path, file.buffer); - }); - preloader.preloadedFiles.length = 0; // Clear memory - me.rtenv['callMain'](me.config.args); - initPromise = null; - if (me.config.serviceWorker && 'serviceWorker' in navigator) { - navigator.serviceWorker.register(me.config.serviceWorker); - } - resolve(); - }); - }); - }); - }, - - /** - * Start the game instance using the given configuration override (if any). - * - * This will initialize the instance if it is not initialized. For manual initialization, see :js:meth:`init `. - * - * This will load the engine if it is not loaded, and preload the main pck. - * - * This method expects the initial config (or the override) to have both the :js:attr:`executable` and :js:attr:`mainPack` - * properties set (normally done by the editor during export). - * - * @param {EngineConfig} override An optional configuration override. - * @return {Promise} Promise that resolves once the game started. - */ - startGame: function (override) { - this.config.update(override); - // Add main-pack argument. - const exe = this.config.executable; - const pack = this.config.mainPack || `${exe}.pck`; - this.config.args = ['--main-pack', pack].concat(this.config.args); - // Start and init with execName as loadPath if not inited. - const me = this; - return Promise.all([ - this.init(exe), - this.preloadFile(pack, pack), - ]).then(function () { - return me.start.apply(me); - }); - }, - - /** - * Create a file at the specified ``path`` with the passed as ``buffer`` in the instance's file system. - * - * @param {string} path The location where the file will be created. - * @param {ArrayBuffer} buffer The content of the file. - */ - copyToFS: function (path, buffer) { - if (this.rtenv == null) { - throw new Error('Engine must be inited before copying files'); - } - this.rtenv['copyToFS'](path, buffer); - }, - - /** - * Request that the current instance quit. - * - * This is akin the user pressing the close button in the window manager, and will - * have no effect if the engine has crashed, or is stuck in a loop. - * - */ - requestQuit: function () { - if (this.rtenv) { - this.rtenv['request_quit'](); - } - }, - }; - - Engine.prototype = proto; - // Closure compiler exported instance methods. - Engine.prototype['init'] = Engine.prototype.init; - Engine.prototype['preloadFile'] = Engine.prototype.preloadFile; - Engine.prototype['start'] = Engine.prototype.start; - Engine.prototype['startGame'] = Engine.prototype.startGame; - Engine.prototype['copyToFS'] = Engine.prototype.copyToFS; - Engine.prototype['requestQuit'] = Engine.prototype.requestQuit; - // Also expose static methods as instance methods - Engine.prototype['load'] = Engine.load; - Engine.prototype['unload'] = Engine.unload; - Engine.prototype['isWebGLAvailable'] = Engine.isWebGLAvailable; - return new Engine(initConfig); - } - - // Closure compiler exported static methods. 
- SafeEngine['load'] = Engine.load; - SafeEngine['unload'] = Engine.unload; - SafeEngine['isWebGLAvailable'] = Engine.isWebGLAvailable; - - return SafeEngine; -}()); -if (typeof window !== 'undefined') { - window['Engine'] = Engine; -} diff --git a/spaces/ICML2022/OFA/fairseq/examples/wav2vec/unsupervised/scripts/wav2vec_extract_features.py b/spaces/ICML2022/OFA/fairseq/examples/wav2vec/unsupervised/scripts/wav2vec_extract_features.py deleted file mode 100644 index b07e274d202414ce40d00aa64a27cf97bb49c1c3..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/wav2vec/unsupervised/scripts/wav2vec_extract_features.py +++ /dev/null @@ -1,119 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import os -import os.path as osp -import tqdm -import torch -import torch.nn.functional as F -from shutil import copyfile - -from npy_append_array import NpyAppendArray - -import fairseq -import soundfile as sf - - -def get_parser(): - parser = argparse.ArgumentParser( - description="compute kmeans codebook from kaldi-computed feats" - ) - # fmt: off - parser.add_argument('data', help='location of tsv files') - parser.add_argument('--split', help='which split to read', required=True) - parser.add_argument('--save-dir', help='where to save the output', required=True) - parser.add_argument('--checkpoint', type=str, help='checkpoint for wav2vec ctc model', required=True) - parser.add_argument('--layer', type=int, default=14, help='which layer to use') - # fmt: on - - return parser - - -class Wav2VecFeatureReader(object): - def __init__(self, cp_file, layer): - model, cfg, task = fairseq.checkpoint_utils.load_model_ensemble_and_task( - [cp_file] - ) - model = model[0] - model.eval() - model.cuda() - self.model = model - self.task = task - self.layer = layer - - def read_audio(self, fname): - """Load an audio file and return PCM along with the sample rate""" - wav, sr = sf.read(fname) - assert sr == 16e3 - - return wav - - def get_feats(self, loc): - x = self.read_audio(loc) - with torch.no_grad(): - source = torch.from_numpy(x).float().cuda() - if self.task.cfg.normalize: - assert source.dim() == 1, source.dim() - with torch.no_grad(): - source = F.layer_norm(source, source.shape) - source = source.view(1, -1) - - m_res = self.model(source=source, mask=False, features_only=True, layer=self.layer) - return m_res["x"].squeeze(0).cpu() - - -def get_iterator(args): - with open(osp.join(args.data, args.split) + ".tsv", "r") as fp: - lines = fp.read().split("\n") - root = lines.pop(0).strip() - files = [osp.join(root, line.split("\t")[0]) for line in lines if len(line) > 0] - - num = len(files) - reader = Wav2VecFeatureReader(args.checkpoint, args.layer) - - def iterate(): - for fname in files: - w2v_feats = reader.get_feats(fname) - yield w2v_feats - - return iterate, num - - -def main(): - parser = get_parser() - args = parser.parse_args() - - os.makedirs(args.save_dir, exist_ok=True) - - def create_files(dest): - copyfile(osp.join(args.data, args.split) + ".tsv", dest + ".tsv") - if osp.exists(osp.join(args.data, args.split) + ".wrd"): - copyfile(osp.join(args.data, args.split) + ".wrd", dest + ".wrd") - if osp.exists(osp.join(args.data, args.split) + ".phn"): - copyfile(osp.join(args.data, args.split) + ".phn", dest + ".phn") - - if osp.exists(dest + ".npy"): - os.remove(dest + ".npy") - npaa = 
NpyAppendArray(dest + ".npy") - return npaa - - save_path = osp.join(args.save_dir, args.split) - npaa = create_files(save_path) - - generator, num = get_iterator(args) - iterator = generator() - - with open(save_path + ".lengths", "w") as l_f: - for w2v_feats in tqdm.tqdm(iterator, total=num): - print(len(w2v_feats), file=l_f) - - if len(w2v_feats) > 0: - npaa.append(w2v_feats.numpy()) - - -if __name__ == "__main__": - main() diff --git a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/fetch_data/celebahq_dataset_prepare.sh b/spaces/InpaintAI/Inpaint-Anything/third_party/lama/fetch_data/celebahq_dataset_prepare.sh deleted file mode 100644 index 6d2ba9a6265c0d5fa580035952a1f568dd8d9e44..0000000000000000000000000000000000000000 --- a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/fetch_data/celebahq_dataset_prepare.sh +++ /dev/null @@ -1,37 +0,0 @@ -mkdir celeba-hq-dataset - -unzip data256x256.zip -d celeba-hq-dataset/ - -# Reindex -for i in `echo {00001..30000}` -do - mv 'celeba-hq-dataset/data256x256/'$i'.jpg' 'celeba-hq-dataset/data256x256/'$[10#$i - 1]'.jpg' -done - - -# Split: split train -> train & val -cat fetch_data/train_shuffled.flist | shuf > celeba-hq-dataset/temp_train_shuffled.flist -cat celeba-hq-dataset/temp_train_shuffled.flist | head -n 2000 > celeba-hq-dataset/val_shuffled.flist -cat celeba-hq-dataset/temp_train_shuffled.flist | tail -n +2001 > celeba-hq-dataset/train_shuffled.flist -cat fetch_data/val_shuffled.flist > celeba-hq-dataset/visual_test_shuffled.flist - -mkdir celeba-hq-dataset/train_256/ -mkdir celeba-hq-dataset/val_source_256/ -mkdir celeba-hq-dataset/visual_test_source_256/ - -cat celeba-hq-dataset/train_shuffled.flist | xargs -I {} mv celeba-hq-dataset/data256x256/{} celeba-hq-dataset/train_256/ -cat celeba-hq-dataset/val_shuffled.flist | xargs -I {} mv celeba-hq-dataset/data256x256/{} celeba-hq-dataset/val_source_256/ -cat celeba-hq-dataset/visual_test_shuffled.flist | xargs -I {} mv celeba-hq-dataset/data256x256/{} celeba-hq-dataset/visual_test_source_256/ - - -# create location config celeba.yaml -PWD=$(pwd) -DATASET=${PWD}/celeba-hq-dataset -CELEBA=${PWD}/configs/training/location/celeba.yaml - -touch $CELEBA -echo "# @package _group_" >> $CELEBA -echo "data_root_dir: ${DATASET}/" >> $CELEBA -echo "out_root_dir: ${PWD}/experiments/" >> $CELEBA -echo "tb_dir: ${PWD}/tb_logs/" >> $CELEBA -echo "pretrained_models: ${PWD}/" >> $CELEBA diff --git a/spaces/Ivanrs/harris-corner-detector/README.md b/spaces/Ivanrs/harris-corner-detector/README.md deleted file mode 100644 index af1c461e4a54199a67558e77f4d896bcf6b60619..0000000000000000000000000000000000000000 --- a/spaces/Ivanrs/harris-corner-detector/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Harris Corner Detector -emoji: 🐠 -colorFrom: indigo -colorTo: green -sdk: gradio -sdk_version: 3.4.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/JFoz/Dog-Pose-Editor-Controlnet/fileservice.py b/spaces/JFoz/Dog-Pose-Editor-Controlnet/fileservice.py deleted file mode 100644 index f90a304d1b7fa9d84b5f20c2ca9a822234e22443..0000000000000000000000000000000000000000 --- a/spaces/JFoz/Dog-Pose-Editor-Controlnet/fileservice.py +++ /dev/null @@ -1,37 +0,0 @@ -from fastapi import FastAPI, Request, Response - -filenames = ["js/sketch.js"] -contents = '\n'.join([f"" for x in filenames]) - -app = FastAPI() - -@app.middleware("http") -async def insert_js(request: Request, call_next): - path = 
request.scope['path'] # get the request route - response = await call_next(request) - - if path == "/": - response_body = "" - async for chunk in response.body_iterator: - response_body += chunk.decode() - - some_javascript = """ - - - """ - - response_body = response_body.replace("", some_javascript + "") - response_body = response_body.replace("", contents + "") - - del response.headers["content-length"] - - return Response( - content=response_body, - status_code=response.status_code, - headers=dict(response.headers), - media_type=response.media_type - ) - - return response \ No newline at end of file diff --git a/spaces/JUNGU/VToonify/vtoonify/train_vtoonify_t.py b/spaces/JUNGU/VToonify/vtoonify/train_vtoonify_t.py deleted file mode 100644 index 147d5f38a5b25822ab05f089173cd96c6aa22c12..0000000000000000000000000000000000000000 --- a/spaces/JUNGU/VToonify/vtoonify/train_vtoonify_t.py +++ /dev/null @@ -1,432 +0,0 @@ -import os -#os.environ['CUDA_VISIBLE_DEVICES'] = "0" -import argparse -import math -import random - -import numpy as np -import torch -from torch import nn, optim -from torch.nn import functional as F -from torch.utils import data -import torch.distributed as dist -from torchvision import transforms, utils -from tqdm import tqdm -from PIL import Image -from util import * -from model.stylegan import lpips -from model.stylegan.model import Generator, Downsample -from model.vtoonify import VToonify, ConditionalDiscriminator -from model.bisenet.model import BiSeNet -from model.simple_augment import random_apply_affine -from model.stylegan.distributed import ( - get_rank, - synchronize, - reduce_loss_dict, - reduce_sum, - get_world_size, -) - -# In the paper, --weight for each style is set as follows, -# cartoon: default -# caricature: default -# pixar: 1 1 1 1 1 1 1 1 1 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 -# comic: 0.5 0.5 0.5 0.5 0.5 0.5 0.5 1 1 1 1 1 1 1 1 1 1 1 -# arcane: 0.5 0.5 0.5 0.5 0.5 0.5 0.5 1 1 1 1 1 1 1 1 1 1 1 - -class TrainOptions(): - def __init__(self): - - self.parser = argparse.ArgumentParser(description="Train VToonify-T") - self.parser.add_argument("--iter", type=int, default=2000, help="total training iterations") - self.parser.add_argument("--batch", type=int, default=8, help="batch sizes for each gpus") - self.parser.add_argument("--lr", type=float, default=0.0001, help="learning rate") - self.parser.add_argument("--local_rank", type=int, default=0, help="local rank for distributed training") - self.parser.add_argument("--start_iter", type=int, default=0, help="start iteration") - self.parser.add_argument("--save_every", type=int, default=30000, help="interval of saving a checkpoint") - self.parser.add_argument("--save_begin", type=int, default=30000, help="when to start saving a checkpoint") - self.parser.add_argument("--log_every", type=int, default=200, help="interval of saving an intermediate image result") - - self.parser.add_argument("--adv_loss", type=float, default=0.01, help="the weight of adv loss") - self.parser.add_argument("--grec_loss", type=float, default=0.1, help="the weight of mse recontruction loss") - self.parser.add_argument("--perc_loss", type=float, default=0.01, help="the weight of perceptual loss") - self.parser.add_argument("--tmp_loss", type=float, default=1.0, help="the weight of temporal consistency loss") - - self.parser.add_argument("--encoder_path", type=str, default=None, help="path to the pretrained encoder model") - self.parser.add_argument("--direction_path", type=str, default='./checkpoint/directions.npy', help="path to 
the editing direction latents") - self.parser.add_argument("--stylegan_path", type=str, default='./checkpoint/stylegan2-ffhq-config-f.pt', help="path to the stylegan model") - self.parser.add_argument("--finetunegan_path", type=str, default='./checkpoint/cartoon/finetune-000600.pt', help="path to the finetuned stylegan model") - self.parser.add_argument("--weight", type=float, nargs=18, default=[1]*9+[0]*9, help="the weight for blending two models") - self.parser.add_argument("--faceparsing_path", type=str, default='./checkpoint/faceparsing.pth', help="path of the face parsing model") - self.parser.add_argument("--style_encoder_path", type=str, default='./checkpoint/encoder.pt', help="path of the style encoder") - - self.parser.add_argument("--name", type=str, default='vtoonify_t_cartoon', help="saved model name") - self.parser.add_argument("--pretrain", action="store_true", help="if true, only pretrain the encoder") - - def parse(self): - self.opt = self.parser.parse_args() - if self.opt.encoder_path is None: - self.opt.encoder_path = os.path.join('./checkpoint/', self.opt.name, 'pretrain.pt') - args = vars(self.opt) - if self.opt.local_rank == 0: - print('Load options') - for name, value in sorted(args.items()): - print('%s: %s' % (str(name), str(value))) - return self.opt - - -# pretrain E of vtoonify. -# We train E so that its the last-layer feature matches the original 8-th-layer input feature of G1 -# See Model initialization in Sec. 4.1.2 for the detail -def pretrain(args, generator, g_optim, g_ema, parsingpredictor, down, directions, basemodel, device): - pbar = range(args.iter) - - if get_rank() == 0: - pbar = tqdm(pbar, initial=args.start_iter, dynamic_ncols=True, smoothing=0.01) - - recon_loss = torch.tensor(0.0, device=device) - loss_dict = {} - - if args.distributed: - g_module = generator.module - else: - g_module = generator - - accum = 0.5 ** (32 / (10 * 1000)) - - requires_grad(g_module.encoder, True) - - for idx in pbar: - i = idx + args.start_iter - - if i > args.iter: - print("Done!") - break - - with torch.no_grad(): - # during pretraining, no geometric transformations are applied. 
- noise_sample = torch.randn(args.batch, 512).cuda() - ws_ = basemodel.style(noise_sample).unsqueeze(1).repeat(1,18,1) # random w - ws_[:, 3:7] += directions[torch.randint(0, directions.shape[0], (args.batch,)), 3:7] # w''=w'=w+n - img_gen, _ = basemodel([ws_], input_is_latent=True, truncation=0.5, truncation_latent=0) # image part of x' - img_gen = torch.clamp(img_gen, -1, 1).detach() - img_gen512 = down(img_gen.detach()) - img_gen256 = down(img_gen512.detach()) # image part of x'_down - mask512 = parsingpredictor(2*torch.clamp(img_gen512, -1, 1))[0] - real_input = torch.cat((img_gen256, down(mask512)/16.0), dim=1).detach() # x'_down - # f_G1^(8)(w'') - real_feat, real_skip = g_ema.generator([ws_], input_is_latent=True, return_feature_ind = 6, truncation=0.5, truncation_latent=0) - real_feat = real_feat.detach() - real_skip = real_skip.detach() - - # f_E^(last)(x'_down) - fake_feat, fake_skip = generator(real_input, style=None, return_feat=True) - - # L_E in Eq.(1) - recon_loss = F.mse_loss(fake_feat, real_feat) + F.mse_loss(fake_skip, real_skip) - - loss_dict["emse"] = recon_loss - - generator.zero_grad() - recon_loss.backward() - g_optim.step() - - accumulate(g_ema.encoder, g_module.encoder, accum) - - loss_reduced = reduce_loss_dict(loss_dict) - - emse_loss_val = loss_reduced["emse"].mean().item() - - if get_rank() == 0: - pbar.set_description( - ( - f"iter: {i:d}; emse: {emse_loss_val:.3f}" - ) - ) - - if ((i+1) >= args.save_begin and (i+1) % args.save_every == 0) or (i+1) == args.iter: - if (i+1) == args.iter: - savename = f"checkpoint/%s/pretrain.pt"%(args.name) - else: - savename = f"checkpoint/%s/pretrain-%05d.pt"%(args.name, i+1) - torch.save( - { - #"g": g_module.encoder.state_dict(), - "g_ema": g_ema.encoder.state_dict(), - }, - savename, - ) - - -# generate paired data and train vtoonify, see Sec. 4.1.2 for the detail -def train(args, generator, discriminator, g_optim, d_optim, g_ema, percept, parsingpredictor, down, pspencoder, directions, basemodel, device): - pbar = range(args.iter) - - if get_rank() == 0: - pbar = tqdm(pbar, initial=args.start_iter, smoothing=0.01, ncols=120, dynamic_ncols=False) - - d_loss = torch.tensor(0.0, device=device) - g_loss = torch.tensor(0.0, device=device) - grec_loss = torch.tensor(0.0, device=device) - gfeat_loss = torch.tensor(0.0, device=device) - temporal_loss = torch.tensor(0.0, device=device) - loss_dict = {} - - if args.distributed: - g_module = generator.module - d_module = discriminator.module - - else: - g_module = generator - d_module = discriminator - - accum = 0.5 ** (32 / (10 * 1000)) - - for idx in pbar: - i = idx + args.start_iter - - if i > args.iter: - print("Done!") - break - - ###### This part is for data generation. Generate pair (x, y, w'') as in Fig. 
5 of the paper - with torch.no_grad(): - noise_sample = torch.randn(args.batch, 512).cuda() - wc = basemodel.style(noise_sample).unsqueeze(1).repeat(1,18,1) # random w - wc[:, 3:7] += directions[torch.randint(0, directions.shape[0], (args.batch,)), 3:7] # w'=w+n - wc = wc.detach() - xc, _ = basemodel([wc], input_is_latent=True, truncation=0.5, truncation_latent=0) - xc = torch.clamp(xc, -1, 1).detach() # x' - xl = pspencoder(F.adaptive_avg_pool2d(xc, 256)) - xl = basemodel.style(xl.reshape(xl.shape[0]*xl.shape[1], xl.shape[2])).reshape(xl.shape) # E_s(x'_down) - xl = torch.cat((wc[:,0:7]*0.5, xl[:,7:18]), dim=1).detach() # w'' = concatenate w' and E_s(x'_down) - xs, _ = g_ema.generator([xl], input_is_latent=True) - xs = torch.clamp(xs, -1, 1).detach() # y' - # during training, random geometric transformations are applied. - imgs, _ = random_apply_affine(torch.cat((xc.detach(),xs), dim=1), 0.2, None) - real_input1024 = imgs[:,0:3].detach() # image part of x - real_input512 = down(real_input1024).detach() - real_input256 = down(real_input512).detach() - mask512 = parsingpredictor(2*real_input512)[0] - mask256 = down(mask512).detach() - mask = F.adaptive_avg_pool2d(mask512, 1024).detach() # parsing part of x - real_output = imgs[:,3:].detach() # y - real_input = torch.cat((real_input256, mask256/16.0), dim=1) # x_down - # for log, sample a fixed input-output pair (x_down, y, w'') - if idx == 0 or i == 0: - samplein = real_input.clone().detach() - sampleout = real_output.clone().detach() - samplexl = xl.clone().detach() - - ###### This part is for training discriminator - - requires_grad(g_module.encoder, False) - requires_grad(g_module.fusion_out, False) - requires_grad(g_module.fusion_skip, False) - requires_grad(discriminator, True) - - fake_output = generator(real_input, xl) - fake_pred = discriminator(F.adaptive_avg_pool2d(fake_output, 256)) - real_pred = discriminator(F.adaptive_avg_pool2d(real_output, 256)) - - # L_adv in Eq.(3) - d_loss = d_logistic_loss(real_pred, fake_pred) * args.adv_loss - loss_dict["d"] = d_loss - - discriminator.zero_grad() - d_loss.backward() - d_optim.step() - - ###### This part is for training generator (encoder and fusion modules) - - requires_grad(g_module.encoder, True) - requires_grad(g_module.fusion_out, True) - requires_grad(g_module.fusion_skip, True) - requires_grad(discriminator, False) - - fake_output = generator(real_input, xl) - fake_pred = discriminator(F.adaptive_avg_pool2d(fake_output, 256)) - # L_adv in Eq.(3) - g_loss = g_nonsaturating_loss(fake_pred) * args.adv_loss - # L_rec in Eq.(2) - grec_loss = F.mse_loss(fake_output, real_output) * args.grec_loss - gfeat_loss = percept(F.adaptive_avg_pool2d(fake_output, 512), # 1024 will out of memory - F.adaptive_avg_pool2d(real_output, 512)).sum() * args.perc_loss # 256 will get blurry output - - loss_dict["g"] = g_loss - loss_dict["gr"] = grec_loss - loss_dict["gf"] = gfeat_loss - - w = random.randint(0,1024-896) - h = random.randint(0,1024-896) - crop_input = torch.cat((real_input1024[:,:,w:w+896,h:h+896], mask[:,:,w:w+896,h:h+896]/16.0), dim=1).detach() - crop_input = down(down(crop_input)) - crop_fake_output = fake_output[:,:,w:w+896,h:h+896] - fake_crop_output = generator(crop_input, xl) - # L_tmp in Eq.(4), gradually increase the weight of L_tmp - temporal_loss = ((fake_crop_output-crop_fake_output)**2).mean() * max(idx/(args.iter/2.0)-1, 0) * args.tmp_loss - loss_dict["tp"] = temporal_loss - - generator.zero_grad() - (g_loss + grec_loss + gfeat_loss + temporal_loss).backward() - g_optim.step() 
- - accumulate(g_ema.encoder, g_module.encoder, accum) - accumulate(g_ema.fusion_out, g_module.fusion_out, accum) - accumulate(g_ema.fusion_skip, g_module.fusion_skip, accum) - - loss_reduced = reduce_loss_dict(loss_dict) - - d_loss_val = loss_reduced["d"].mean().item() - g_loss_val = loss_reduced["g"].mean().item() - gr_loss_val = loss_reduced["gr"].mean().item() - gf_loss_val = loss_reduced["gf"].mean().item() - tmp_loss_val = loss_reduced["tp"].mean().item() - - if get_rank() == 0: - pbar.set_description( - ( - f"iter: {i:d}; advd: {d_loss_val:.3f}; advg: {g_loss_val:.3f}; mse: {gr_loss_val:.3f}; " - f"perc: {gf_loss_val:.3f}; tmp: {tmp_loss_val:.3f}" - ) - ) - - if i % args.log_every == 0 or (i+1) == args.iter: - with torch.no_grad(): - g_ema.eval() - sample = g_ema(samplein, samplexl) - sample = F.interpolate(torch.cat((sampleout, sample), dim=0), 256) - utils.save_image( - sample, - f"log/%s/%05d.jpg"%(args.name, i), - nrow=int(args.batch), - normalize=True, - range=(-1, 1), - ) - - if ((i+1) >= args.save_begin and (i+1) % args.save_every == 0) or (i+1) == args.iter: - if (i+1) == args.iter: - savename = f"checkpoint/%s/vtoonify.pt"%(args.name) - else: - savename = f"checkpoint/%s/vtoonify_%05d.pt"%(args.name, i+1) - torch.save( - { - #"g": g_module.state_dict(), - #"d": d_module.state_dict(), - "g_ema": g_ema.state_dict(), - }, - savename, - ) - - - -if __name__ == "__main__": - - device = "cuda" - parser = TrainOptions() - args = parser.parse() - if args.local_rank == 0: - print('*'*98) - if not os.path.exists("log/%s/"%(args.name)): - os.makedirs("log/%s/"%(args.name)) - if not os.path.exists("checkpoint/%s/"%(args.name)): - os.makedirs("checkpoint/%s/"%(args.name)) - - n_gpu = int(os.environ["WORLD_SIZE"]) if "WORLD_SIZE" in os.environ else 1 - args.distributed = n_gpu > 1 - - if args.distributed: - torch.cuda.set_device(args.local_rank) - torch.distributed.init_process_group(backend="nccl", init_method="env://") - synchronize() - - generator = VToonify(backbone = 'toonify').to(device) - generator.apply(weights_init) - g_ema = VToonify(backbone = 'toonify').to(device) - g_ema.eval() - - basemodel = Generator(1024, 512, 8, 2).to(device) # G0 - finetunemodel = Generator(1024, 512, 8, 2).to(device) - basemodel.load_state_dict(torch.load(args.stylegan_path, map_location=lambda storage, loc: storage)['g_ema']) - finetunemodel.load_state_dict(torch.load(args.finetunegan_path, map_location=lambda storage, loc: storage)['g_ema']) - fused_state_dict = blend_models(finetunemodel, basemodel, args.weight) # G1 - generator.generator.load_state_dict(fused_state_dict) # load G1 - g_ema.generator.load_state_dict(fused_state_dict) - requires_grad(basemodel, False) - requires_grad(generator.generator, False) - requires_grad(g_ema.generator, False) - - if not args.pretrain: - generator.encoder.load_state_dict(torch.load(args.encoder_path, map_location=lambda storage, loc: storage)["g_ema"]) - # we initialize the fusion modules to map f_G \otimes f_E to f_G. 
- for k in generator.fusion_out: - k.weight.data *= 0.01 - k.weight[:,0:k.weight.shape[0],1,1].data += torch.eye(k.weight.shape[0]).cuda() - for k in generator.fusion_skip: - k.weight.data *= 0.01 - k.weight[:,0:k.weight.shape[0],1,1].data += torch.eye(k.weight.shape[0]).cuda() - - accumulate(g_ema.encoder, generator.encoder, 0) - accumulate(g_ema.fusion_out, generator.fusion_out, 0) - accumulate(g_ema.fusion_skip, generator.fusion_skip, 0) - - g_parameters = list(generator.encoder.parameters()) - if not args.pretrain: - g_parameters = g_parameters + list(generator.fusion_out.parameters()) + list(generator.fusion_skip.parameters()) - - g_optim = optim.Adam( - g_parameters, - lr=args.lr, - betas=(0.9, 0.99), - ) - - if args.distributed: - generator = nn.parallel.DistributedDataParallel( - generator, - device_ids=[args.local_rank], - output_device=args.local_rank, - broadcast_buffers=False, - find_unused_parameters=True, - ) - - parsingpredictor = BiSeNet(n_classes=19) - parsingpredictor.load_state_dict(torch.load(args.faceparsing_path, map_location=lambda storage, loc: storage)) - parsingpredictor.to(device).eval() - requires_grad(parsingpredictor, False) - - # we apply gaussian blur to the images to avoid flickers caused during downsampling - down = Downsample(kernel=[1, 3, 3, 1], factor=2).to(device) - requires_grad(down, False) - - directions = torch.tensor(np.load(args.direction_path)).to(device) - - if not args.pretrain: - discriminator = ConditionalDiscriminator(256).to(device) - - d_optim = optim.Adam( - discriminator.parameters(), - lr=args.lr, - betas=(0.9, 0.99), - ) - - if args.distributed: - discriminator = nn.parallel.DistributedDataParallel( - discriminator, - device_ids=[args.local_rank], - output_device=args.local_rank, - broadcast_buffers=False, - find_unused_parameters=True, - ) - - percept = lpips.PerceptualLoss(model="net-lin", net="vgg", use_gpu=device.startswith("cuda"), gpu_ids=[args.local_rank]) - requires_grad(percept.model.net, False) - - pspencoder = load_psp_standalone(args.style_encoder_path, device) - - if args.local_rank == 0: - print('Load models and data successfully loaded!') - - if args.pretrain: - pretrain(args, generator, g_optim, g_ema, parsingpredictor, down, directions, basemodel, device) - else: - train(args, generator, discriminator, g_optim, d_optim, g_ema, percept, parsingpredictor, down, pspencoder, directions, basemodel, device) diff --git a/spaces/Jarvis2301/Aku/text/cleaners.py b/spaces/Jarvis2301/Aku/text/cleaners.py deleted file mode 100644 index d26581deb399609163518054718ad80ecca5d934..0000000000000000000000000000000000000000 --- a/spaces/Jarvis2301/Aku/text/cleaners.py +++ /dev/null @@ -1,475 +0,0 @@ -""" from https://github.com/keithito/tacotron """ - -''' -Cleaners are transformations that run over the input text at both training and eval time. - -Cleaners can be selected by passing a comma-delimited list of cleaner names as the "cleaners" -hyperparameter. Some cleaners are English-specific. You'll typically want to use: - 1. "english_cleaners" for English text - 2. "transliteration_cleaners" for non-English text that can be transliterated to ASCII using - the Unidecode library (https://pypi.python.org/pypi/Unidecode) - 3. "basic_cleaners" if you do not want to transliterate (in this case, you should also update - the symbols in symbols.py to match your data). 
-''' - -import re -from unidecode import unidecode -import pyopenjtalk -from jamo import h2j, j2hcj -from pypinyin import lazy_pinyin, BOPOMOFO -import jieba, cn2an - - -# This is a list of Korean classifiers preceded by pure Korean numerals. -_korean_classifiers = '군데 권 개 그루 닢 대 두 마리 모 모금 뭇 발 발짝 방 번 벌 보루 살 수 술 시 쌈 움큼 정 짝 채 척 첩 축 켤레 톨 통' - -# Regular expression matching whitespace: -_whitespace_re = re.compile(r'\s+') - -# Regular expression matching Japanese without punctuation marks: -_japanese_characters = re.compile(r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# Regular expression matching non-Japanese characters or punctuation marks: -_japanese_marks = re.compile(r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# List of (regular expression, replacement) pairs for abbreviations: -_abbreviations = [(re.compile('\\b%s\\.' % x[0], re.IGNORECASE), x[1]) for x in [ - ('mrs', 'misess'), - ('mr', 'mister'), - ('dr', 'doctor'), - ('st', 'saint'), - ('co', 'company'), - ('jr', 'junior'), - ('maj', 'major'), - ('gen', 'general'), - ('drs', 'doctors'), - ('rev', 'reverend'), - ('lt', 'lieutenant'), - ('hon', 'honorable'), - ('sgt', 'sergeant'), - ('capt', 'captain'), - ('esq', 'esquire'), - ('ltd', 'limited'), - ('col', 'colonel'), - ('ft', 'fort'), -]] - -# List of (hangul, hangul divided) pairs: -_hangul_divided = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ㄳ', 'ㄱㅅ'), - ('ㄵ', 'ㄴㅈ'), - ('ㄶ', 'ㄴㅎ'), - ('ㄺ', 'ㄹㄱ'), - ('ㄻ', 'ㄹㅁ'), - ('ㄼ', 'ㄹㅂ'), - ('ㄽ', 'ㄹㅅ'), - ('ㄾ', 'ㄹㅌ'), - ('ㄿ', 'ㄹㅍ'), - ('ㅀ', 'ㄹㅎ'), - ('ㅄ', 'ㅂㅅ'), - ('ㅘ', 'ㅗㅏ'), - ('ㅙ', 'ㅗㅐ'), - ('ㅚ', 'ㅗㅣ'), - ('ㅝ', 'ㅜㅓ'), - ('ㅞ', 'ㅜㅔ'), - ('ㅟ', 'ㅜㅣ'), - ('ㅢ', 'ㅡㅣ'), - ('ㅑ', 'ㅣㅏ'), - ('ㅒ', 'ㅣㅐ'), - ('ㅕ', 'ㅣㅓ'), - ('ㅖ', 'ㅣㅔ'), - ('ㅛ', 'ㅣㅗ'), - ('ㅠ', 'ㅣㅜ') -]] - -# List of (Latin alphabet, hangul) pairs: -_latin_to_hangul = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('a', '에이'), - ('b', '비'), - ('c', '시'), - ('d', '디'), - ('e', '이'), - ('f', '에프'), - ('g', '지'), - ('h', '에이치'), - ('i', '아이'), - ('j', '제이'), - ('k', '케이'), - ('l', '엘'), - ('m', '엠'), - ('n', '엔'), - ('o', '오'), - ('p', '피'), - ('q', '큐'), - ('r', '아르'), - ('s', '에스'), - ('t', '티'), - ('u', '유'), - ('v', '브이'), - ('w', '더블유'), - ('x', '엑스'), - ('y', '와이'), - ('z', '제트') -]] - -# List of (Latin alphabet, bopomofo) pairs: -_latin_to_bopomofo = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('a', 'ㄟˉ'), - ('b', 'ㄅㄧˋ'), - ('c', 'ㄙㄧˉ'), - ('d', 'ㄉㄧˋ'), - ('e', 'ㄧˋ'), - ('f', 'ㄝˊㄈㄨˋ'), - ('g', 'ㄐㄧˋ'), - ('h', 'ㄝˇㄑㄩˋ'), - ('i', 'ㄞˋ'), - ('j', 'ㄐㄟˋ'), - ('k', 'ㄎㄟˋ'), - ('l', 'ㄝˊㄛˋ'), - ('m', 'ㄝˊㄇㄨˋ'), - ('n', 'ㄣˉ'), - ('o', 'ㄡˉ'), - ('p', 'ㄆㄧˉ'), - ('q', 'ㄎㄧㄡˉ'), - ('r', 'ㄚˋ'), - ('s', 'ㄝˊㄙˋ'), - ('t', 'ㄊㄧˋ'), - ('u', 'ㄧㄡˉ'), - ('v', 'ㄨㄧˉ'), - ('w', 'ㄉㄚˋㄅㄨˋㄌㄧㄡˋ'), - ('x', 'ㄝˉㄎㄨˋㄙˋ'), - ('y', 'ㄨㄞˋ'), - ('z', 'ㄗㄟˋ') -]] - - -# List of (bopomofo, romaji) pairs: -_bopomofo_to_romaji = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('ㄅㄛ', 'p⁼wo'), - ('ㄆㄛ', 'pʰwo'), - ('ㄇㄛ', 'mwo'), - ('ㄈㄛ', 'fwo'), - ('ㄅ', 'p⁼'), - ('ㄆ', 'pʰ'), - ('ㄇ', 'm'), - ('ㄈ', 'f'), - ('ㄉ', 't⁼'), - ('ㄊ', 'tʰ'), - ('ㄋ', 'n'), - ('ㄌ', 'l'), - ('ㄍ', 'k⁼'), - ('ㄎ', 'kʰ'), - ('ㄏ', 'h'), - ('ㄐ', 'ʧ⁼'), - ('ㄑ', 'ʧʰ'), - ('ㄒ', 'ʃ'), - ('ㄓ', 'ʦ`⁼'), - ('ㄔ', 'ʦ`ʰ'), - ('ㄕ', 's`'), - ('ㄖ', 'ɹ`'), - ('ㄗ', 'ʦ⁼'), - ('ㄘ', 'ʦʰ'), - ('ㄙ', 's'), - ('ㄚ', 'a'), - ('ㄛ', 'o'), - ('ㄜ', 'ə'), - ('ㄝ', 'e'), - ('ㄞ', 'ai'), - ('ㄟ', 'ei'), - ('ㄠ', 'au'), - ('ㄡ', 'ou'), - ('ㄧㄢ', 'yeNN'), 
- ('ㄢ', 'aNN'), - ('ㄧㄣ', 'iNN'), - ('ㄣ', 'əNN'), - ('ㄤ', 'aNg'), - ('ㄧㄥ', 'iNg'), - ('ㄨㄥ', 'uNg'), - ('ㄩㄥ', 'yuNg'), - ('ㄥ', 'əNg'), - ('ㄦ', 'əɻ'), - ('ㄧ', 'i'), - ('ㄨ', 'u'), - ('ㄩ', 'ɥ'), - ('ˉ', '→'), - ('ˊ', '↑'), - ('ˇ', '↓↑'), - ('ˋ', '↓'), - ('˙', ''), - (',', ','), - ('。', '.'), - ('!', '!'), - ('?', '?'), - ('—', '-') -]] - - -def expand_abbreviations(text): - for regex, replacement in _abbreviations: - text = re.sub(regex, replacement, text) - return text - - -def lowercase(text): - return text.lower() - - -def collapse_whitespace(text): - return re.sub(_whitespace_re, ' ', text) - - -def convert_to_ascii(text): - return unidecode(text) - - -def japanese_to_romaji_with_accent(text): - '''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html''' - sentences = re.split(_japanese_marks, text) - marks = re.findall(_japanese_marks, text) - text = '' - for i, sentence in enumerate(sentences): - if re.match(_japanese_characters, sentence): - if text!='': - text+=' ' - labels = pyopenjtalk.extract_fullcontext(sentence) - for n, label in enumerate(labels): - phoneme = re.search(r'\-([^\+]*)\+', label).group(1) - if phoneme not in ['sil','pau']: - text += phoneme.replace('ch','ʧ').replace('sh','ʃ').replace('cl','Q') - else: - continue - n_moras = int(re.search(r'/F:(\d+)_', label).group(1)) - a1 = int(re.search(r"/A:(\-?[0-9]+)\+", label).group(1)) - a2 = int(re.search(r"\+(\d+)\+", label).group(1)) - a3 = int(re.search(r"\+(\d+)/", label).group(1)) - if re.search(r'\-([^\+]*)\+', labels[n + 1]).group(1) in ['sil','pau']: - a2_next=-1 - else: - a2_next = int(re.search(r"\+(\d+)\+", labels[n + 1]).group(1)) - # Accent phrase boundary - if a3 == 1 and a2_next == 1: - text += ' ' - # Falling - elif a1 == 0 and a2_next == a2 + 1 and a2 != n_moras: - text += '↓' - # Rising - elif a2 == 1 and a2_next == 2: - text += '↑' - if i BaseModel: - obj["base64string"] = obj.get("data") - return super().parse_obj(obj) diff --git a/spaces/JohnnyPittt/audio-styling/deepafx_st/__init__.py b/spaces/JohnnyPittt/audio-styling/deepafx_st/__init__.py deleted file mode 100644 index cdaf0a55dc0d0f78b93249728c30b13a4d870205..0000000000000000000000000000000000000000 --- a/spaces/JohnnyPittt/audio-styling/deepafx_st/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -#!/usr/bin/env python -"""Top-level module for deepafx_st""" - -from .version import version as __version__ diff --git a/spaces/KenjieDec/GPEN/retinaface/data/__init__.py b/spaces/KenjieDec/GPEN/retinaface/data/__init__.py deleted file mode 100644 index ea50ebaf88d64e75f4960bc99b14f138a343e575..0000000000000000000000000000000000000000 --- a/spaces/KenjieDec/GPEN/retinaface/data/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -from .wider_face import WiderFaceDetection, detection_collate -from .data_augment import * -from .config import * diff --git a/spaces/Kevin676/AutoGPT/tests/test_config.py b/spaces/Kevin676/AutoGPT/tests/test_config.py deleted file mode 100644 index b472a24c78edd1f931a76c68e08ed544bbe61d98..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/AutoGPT/tests/test_config.py +++ /dev/null @@ -1,84 +0,0 @@ -from unittest import TestCase - -from autogpt.config import Config - - -class TestConfig(TestCase): - """ - Test cases for the Config class, which handles the configuration settings - for the AI and ensures it behaves as a singleton. - """ - - def setUp(self): - """ - Set up the test environment by creating an instance of the Config class. 
- """ - self.config = Config() - - def test_singleton(self): - """ - Test if the Config class behaves as a singleton by ensuring that two instances are the same. - """ - config2 = Config() - self.assertIs(self.config, config2) - - def test_initial_values(self): - """ - Test if the initial values of the Config class attributes are set correctly. - """ - self.assertFalse(self.config.debug_mode) - self.assertFalse(self.config.continuous_mode) - self.assertFalse(self.config.speak_mode) - self.assertEqual(self.config.fast_llm_model, "gpt-3.5-turbo") - self.assertEqual(self.config.smart_llm_model, "gpt-4") - self.assertEqual(self.config.fast_token_limit, 4000) - self.assertEqual(self.config.smart_token_limit, 8000) - - def test_set_continuous_mode(self): - """ - Test if the set_continuous_mode() method updates the continuous_mode attribute. - """ - self.config.set_continuous_mode(True) - self.assertTrue(self.config.continuous_mode) - - def test_set_speak_mode(self): - """ - Test if the set_speak_mode() method updates the speak_mode attribute. - """ - self.config.set_speak_mode(True) - self.assertTrue(self.config.speak_mode) - - def test_set_fast_llm_model(self): - """ - Test if the set_fast_llm_model() method updates the fast_llm_model attribute. - """ - self.config.set_fast_llm_model("gpt-3.5-turbo-test") - self.assertEqual(self.config.fast_llm_model, "gpt-3.5-turbo-test") - - def test_set_smart_llm_model(self): - """ - Test if the set_smart_llm_model() method updates the smart_llm_model attribute. - """ - self.config.set_smart_llm_model("gpt-4-test") - self.assertEqual(self.config.smart_llm_model, "gpt-4-test") - - def test_set_fast_token_limit(self): - """ - Test if the set_fast_token_limit() method updates the fast_token_limit attribute. - """ - self.config.set_fast_token_limit(5000) - self.assertEqual(self.config.fast_token_limit, 5000) - - def test_set_smart_token_limit(self): - """ - Test if the set_smart_token_limit() method updates the smart_token_limit attribute. - """ - self.config.set_smart_token_limit(9000) - self.assertEqual(self.config.smart_token_limit, 9000) - - def test_set_debug_mode(self): - """ - Test if the set_debug_mode() method updates the debug_mode attribute. 
- """ - self.config.set_debug_mode(True) - self.assertTrue(self.config.debug_mode) diff --git a/spaces/Kororinpa/Amadeus_Project/modules.py b/spaces/Kororinpa/Amadeus_Project/modules.py deleted file mode 100644 index 9c7fd9cd6eb8b7e0ec0e08957e970744a374a924..0000000000000000000000000000000000000000 --- a/spaces/Kororinpa/Amadeus_Project/modules.py +++ /dev/null @@ -1,390 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -import commons -from commons import init_weights, get_padding -from transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." - - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential( - nn.ReLU(), - nn.Dropout(p_dropout)) - for _ in range(n_layers-1): - self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size ** i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size, - groups=channels, dilation=dilation, padding=padding - )) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = 
self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0): - super(WN, self).__init__() - assert(kernel_size % 2 == 1) - self.hidden_channels =hidden_channels - self.kernel_size = kernel_size, - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight') - - for i in range(n_layers): - dilation = dilation_rate ** i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size, - dilation=dilation, padding=padding) - in_layer = torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply( - x_in, - g_l, - n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:,:self.hidden_channels,:] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:,self.hidden_channels:,:] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for 
c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels,1)) - self.logs = nn.Parameter(torch.zeros(channels,1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1,2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = 
torch.cat([x0, x1], 1) - return x - - -class ConvFlow(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.) - self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] - - unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_derivatives = h[..., 2 * self.num_bins:] - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/KunalKharalkar/imagetostory/README.md b/spaces/KunalKharalkar/imagetostory/README.md deleted file mode 100644 index 3a246576ef59c81688ccc8fd9d2fddd856255d93..0000000000000000000000000000000000000000 --- a/spaces/KunalKharalkar/imagetostory/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Imagetotext -emoji: 🌍 -colorFrom: green -colorTo: pink -sdk: streamlit -sdk_version: 1.25.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/KyanChen/RSPrompter/mmpretrain/datasets/coco_vqa.py b/spaces/KyanChen/RSPrompter/mmpretrain/datasets/coco_vqa.py deleted file mode 100644 index 85f4bdcf39ef82ec47a2072dc198e6b8792d8768..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmpretrain/datasets/coco_vqa.py +++ /dev/null @@ -1,114 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp -import re -from collections import Counter -from typing import List - -import mmengine -from mmengine.dataset import BaseDataset - -from mmpretrain.registry import DATASETS - - -@DATASETS.register_module() -class COCOVQA(BaseDataset): - """VQAv2 dataset. - - Args: - data_root (str): The root directory for ``data_prefix``, ``ann_file`` - and ``question_file``. - data_prefix (str): The directory of images. - question_file (str): Question file path. - ann_file (str, optional): Annotation file path for training and - validation. Defaults to an empty string. - **kwargs: Other keyword arguments in :class:`BaseDataset`. 
- """ - - def __init__(self, - data_root: str, - data_prefix: str, - question_file: str, - ann_file: str = '', - **kwarg): - self.question_file = question_file - super().__init__( - data_root=data_root, - data_prefix=dict(img_path=data_prefix), - ann_file=ann_file, - **kwarg, - ) - - def _join_prefix(self): - if not mmengine.is_abs(self.question_file) and self.question_file: - self.question_file = osp.join(self.data_root, self.question_file) - - return super()._join_prefix() - - def _create_image_index(self): - img_prefix = self.data_prefix['img_path'] - - files = mmengine.list_dir_or_file(img_prefix, list_dir=False) - image_index = {} - for file in files: - image_id = re.findall(r'\d{12}', file) - if len(image_id) > 0: - image_id = int(image_id[-1]) - image_index[image_id] = mmengine.join_path(img_prefix, file) - - return image_index - - def load_data_list(self) -> List[dict]: - """Load data list.""" - questions = mmengine.load(self.question_file)['questions'] - if self.ann_file: - annotations = mmengine.load(self.ann_file)['annotations'] - assert len(questions) == len(annotations) - else: - annotations = [None] * len(questions) - - # The original VQAv2 annotation file and question file includes - # only image id but no image file paths. - self.image_index = self._create_image_index() - - data_list = [] - for question, ann in zip(questions, annotations): - # question example - # { - # 'image_id': 262144, - # 'question': "Is the ball flying towards the batter?", - # 'question_id': 262144000 - # } - # - # ann example - # { - # 'question_type': "what are the", - # 'answer_type': "other", - # 'answers': [ - # {'answer': 'watching', - # 'answer_id': 1, - # 'answer_confidence': 'yes'}, - # ... - # ], - # 'image_id': 262148, - # 'question_id': 262148000, - # 'multiple_choice_answer': 'watching', - # 'answer_type': 'other', - # } - - data_info = question - data_info['img_path'] = self.image_index[question['image_id']] - - if ann is not None: - assert ann['question_id'] == question['question_id'] - - # add answer_weight & answer_count, delete duplicate answer - answers = [item['answer'] for item in ann.pop('answers')] - count = Counter(answers) - answer_weight = [i / len(answers) for i in count.values()] - data_info['gt_answer'] = list(count.keys()) - data_info['gt_answer_weight'] = answer_weight - data_info.update(ann) - - data_list.append(data_info) - - return data_list diff --git a/spaces/L0SG/BigVGAN/alias_free_torch/filter.py b/spaces/L0SG/BigVGAN/alias_free_torch/filter.py deleted file mode 100644 index 7ad6ea87c1f10ddd94c544037791d7a4634d5ae1..0000000000000000000000000000000000000000 --- a/spaces/L0SG/BigVGAN/alias_free_torch/filter.py +++ /dev/null @@ -1,95 +0,0 @@ -# Adapted from https://github.com/junjun3518/alias-free-torch under the Apache License 2.0 -# LICENSE is in incl_licenses directory. - -import torch -import torch.nn as nn -import torch.nn.functional as F -import math - -if 'sinc' in dir(torch): - sinc = torch.sinc -else: - # This code is adopted from adefossez's julius.core.sinc under the MIT License - # https://adefossez.github.io/julius/julius/core.html - # LICENSE is in incl_licenses directory. - def sinc(x: torch.Tensor): - """ - Implementation of sinc, i.e. sin(pi * x) / (pi * x) - __Warning__: Different to julius.sinc, the input is multiplied by `pi`! 
- """ - return torch.where(x == 0, - torch.tensor(1., device=x.device, dtype=x.dtype), - torch.sin(math.pi * x) / math.pi / x) - - -# This code is adopted from adefossez's julius.lowpass.LowPassFilters under the MIT License -# https://adefossez.github.io/julius/julius/lowpass.html -# LICENSE is in incl_licenses directory. -def kaiser_sinc_filter1d(cutoff, half_width, kernel_size): # return filter [1,1,kernel_size] - even = (kernel_size % 2 == 0) - half_size = kernel_size // 2 - - #For kaiser window - delta_f = 4 * half_width - A = 2.285 * (half_size - 1) * math.pi * delta_f + 7.95 - if A > 50.: - beta = 0.1102 * (A - 8.7) - elif A >= 21.: - beta = 0.5842 * (A - 21)**0.4 + 0.07886 * (A - 21.) - else: - beta = 0. - window = torch.kaiser_window(kernel_size, beta=beta, periodic=False) - - # ratio = 0.5/cutoff -> 2 * cutoff = 1 / ratio - if even: - time = (torch.arange(-half_size, half_size) + 0.5) - else: - time = torch.arange(kernel_size) - half_size - if cutoff == 0: - filter_ = torch.zeros_like(time) - else: - filter_ = 2 * cutoff * window * sinc(2 * cutoff * time) - # Normalize filter to have sum = 1, otherwise we will have a small leakage - # of the constant component in the input signal. - filter_ /= filter_.sum() - filter = filter_.view(1, 1, kernel_size) - - return filter - - -class LowPassFilter1d(nn.Module): - def __init__(self, - cutoff=0.5, - half_width=0.6, - stride: int = 1, - padding: bool = True, - padding_mode: str = 'replicate', - kernel_size: int = 12): - # kernel_size should be even number for stylegan3 setup, - # in this implementation, odd number is also possible. - super().__init__() - if cutoff < -0.: - raise ValueError("Minimum cutoff must be larger than zero.") - if cutoff > 0.5: - raise ValueError("A cutoff above 0.5 does not make sense.") - self.kernel_size = kernel_size - self.even = (kernel_size % 2 == 0) - self.pad_left = kernel_size // 2 - int(self.even) - self.pad_right = kernel_size // 2 - self.stride = stride - self.padding = padding - self.padding_mode = padding_mode - filter = kaiser_sinc_filter1d(cutoff, half_width, kernel_size) - self.register_buffer("filter", filter) - - #input [B, C, T] - def forward(self, x): - _, C, _ = x.shape - - if self.padding: - x = F.pad(x, (self.pad_left, self.pad_right), - mode=self.padding_mode) - out = F.conv1d(x, self.filter.expand(C, -1, -1), - stride=self.stride, groups=C) - - return out \ No newline at end of file diff --git a/spaces/LangChainHub-Prompts/langchain_submission/README.md b/spaces/LangChainHub-Prompts/langchain_submission/README.md deleted file mode 100644 index f5268f3cb33274bf176668cdf8304c46ee5c3a5c..0000000000000000000000000000000000000000 --- a/spaces/LangChainHub-Prompts/langchain_submission/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Langchain Form Submission -emoji: 🏢 -colorFrom: blue -colorTo: blue -sdk: gradio -sdk_version: 3.17.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/LaynzKunz/Advanced-RVC-Inference/vc_infer_pipeline.py b/spaces/LaynzKunz/Advanced-RVC-Inference/vc_infer_pipeline.py deleted file mode 100644 index 82c15f59a8072e1b317fa1d750ccc1b814a6989d..0000000000000000000000000000000000000000 --- a/spaces/LaynzKunz/Advanced-RVC-Inference/vc_infer_pipeline.py +++ /dev/null @@ -1,443 +0,0 @@ -import numpy as np, parselmouth, torch, pdb, sys, os -from time import time as ttime -import torch.nn.functional as F -import scipy.signal as signal -import pyworld, os, traceback, 
faiss, librosa, torchcrepe -from scipy import signal -from functools import lru_cache - -now_dir = os.getcwd() -sys.path.append(now_dir) - -bh, ah = signal.butter(N=5, Wn=48, btype="high", fs=16000) - -input_audio_path2wav = {} - - -@lru_cache -def cache_harvest_f0(input_audio_path, fs, f0max, f0min, frame_period): - audio = input_audio_path2wav[input_audio_path] - f0, t = pyworld.harvest( - audio, - fs=fs, - f0_ceil=f0max, - f0_floor=f0min, - frame_period=frame_period, - ) - f0 = pyworld.stonemask(audio, f0, t, fs) - return f0 - - -def change_rms(data1, sr1, data2, sr2, rate): # 1是输入音频,2是输出音频,rate是2的占比 - # print(data1.max(),data2.max()) - rms1 = librosa.feature.rms( - y=data1, frame_length=sr1 // 2 * 2, hop_length=sr1 // 2 - ) # 每半秒一个点 - rms2 = librosa.feature.rms(y=data2, frame_length=sr2 // 2 * 2, hop_length=sr2 // 2) - rms1 = torch.from_numpy(rms1) - rms1 = F.interpolate( - rms1.unsqueeze(0), size=data2.shape[0], mode="linear" - ).squeeze() - rms2 = torch.from_numpy(rms2) - rms2 = F.interpolate( - rms2.unsqueeze(0), size=data2.shape[0], mode="linear" - ).squeeze() - rms2 = torch.max(rms2, torch.zeros_like(rms2) + 1e-6) - data2 *= ( - torch.pow(rms1, torch.tensor(1 - rate)) - * torch.pow(rms2, torch.tensor(rate - 1)) - ).numpy() - return data2 - - -class VC(object): - def __init__(self, tgt_sr, config): - self.x_pad, self.x_query, self.x_center, self.x_max, self.is_half = ( - config.x_pad, - config.x_query, - config.x_center, - config.x_max, - config.is_half, - ) - self.sr = 16000 # hubert输入采样率 - self.window = 160 # 每帧点数 - self.t_pad = self.sr * self.x_pad # 每条前后pad时间 - self.t_pad_tgt = tgt_sr * self.x_pad - self.t_pad2 = self.t_pad * 2 - self.t_query = self.sr * self.x_query # 查询切点前后查询时间 - self.t_center = self.sr * self.x_center # 查询切点位置 - self.t_max = self.sr * self.x_max # 免查询时长阈值 - self.device = config.device - - def get_f0( - self, - input_audio_path, - x, - p_len, - f0_up_key, - f0_method, - filter_radius, - inp_f0=None, - ): - global input_audio_path2wav - time_step = self.window / self.sr * 1000 - f0_min = 50 - f0_max = 1100 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - if f0_method == "pm": - f0 = ( - parselmouth.Sound(x, self.sr) - .to_pitch_ac( - time_step=time_step / 1000, - voicing_threshold=0.6, - pitch_floor=f0_min, - pitch_ceiling=f0_max, - ) - .selected_array["frequency"] - ) - pad_size = (p_len - len(f0) + 1) // 2 - if pad_size > 0 or p_len - len(f0) - pad_size > 0: - f0 = np.pad( - f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant" - ) - elif f0_method == "harvest": - input_audio_path2wav[input_audio_path] = x.astype(np.double) - f0 = cache_harvest_f0(input_audio_path, self.sr, f0_max, f0_min, 10) - if filter_radius > 2: - f0 = signal.medfilt(f0, 3) - elif f0_method == "crepe": - model = "full" - # Pick a batch size that doesn't cause memory errors on your gpu - batch_size = 512 - # Compute pitch using first gpu - audio = torch.tensor(np.copy(x))[None].float() - f0, pd = torchcrepe.predict( - audio, - self.sr, - self.window, - f0_min, - f0_max, - model, - batch_size=batch_size, - device=self.device, - return_periodicity=True, - ) - pd = torchcrepe.filter.median(pd, 3) - f0 = torchcrepe.filter.mean(f0, 3) - f0[pd < 0.1] = 0 - f0 = f0[0].cpu().numpy() - elif f0_method == "rmvpe": - if hasattr(self, "model_rmvpe") == False: - from rmvpe import RMVPE - - print("loading rmvpe model") - self.model_rmvpe = RMVPE( - "rmvpe.pt", is_half=self.is_half, device=self.device - ) - f0 = self.model_rmvpe.infer_from_audio(x, 
thred=0.03) - f0 *= pow(2, f0_up_key / 12) - # with open("test.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - tf0 = self.sr // self.window # 每秒f0点数 - if inp_f0 is not None: - delta_t = np.round( - (inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1 - ).astype("int16") - replace_f0 = np.interp( - list(range(delta_t)), inp_f0[:, 0] * 100, inp_f0[:, 1] - ) - shape = f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)].shape[0] - f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)] = replace_f0[ - :shape - ] - # with open("test_opt.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - f0bak = f0.copy() - f0_mel = 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / ( - f0_mel_max - f0_mel_min - ) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - f0_coarse = np.rint(f0_mel).astype(np.int) - return f0_coarse, f0bak # 1-0 - - def vc( - self, - model, - net_g, - sid, - audio0, - pitch, - pitchf, - times, - index, - big_npy, - index_rate, - version, - protect, - ): # ,file_index,file_big_npy - feats = torch.from_numpy(audio0) - if self.is_half: - feats = feats.half() - else: - feats = feats.float() - if feats.dim() == 2: # double channels - feats = feats.mean(-1) - assert feats.dim() == 1, feats.dim() - feats = feats.view(1, -1) - padding_mask = torch.BoolTensor(feats.shape).to(self.device).fill_(False) - - inputs = { - "source": feats.to(self.device), - "padding_mask": padding_mask, - "output_layer": 9 if version == "v1" else 12, - } - t0 = ttime() - with torch.no_grad(): - logits = model.extract_features(**inputs) - feats = model.final_proj(logits[0]) if version == "v1" else logits[0] - if protect < 0.5 and pitch != None and pitchf != None: - feats0 = feats.clone() - if ( - isinstance(index, type(None)) == False - and isinstance(big_npy, type(None)) == False - and index_rate != 0 - ): - npy = feats[0].cpu().numpy() - if self.is_half: - npy = npy.astype("float32") - - # _, I = index.search(npy, 1) - # npy = big_npy[I.squeeze()] - - score, ix = index.search(npy, k=8) - weight = np.square(1 / score) - weight /= weight.sum(axis=1, keepdims=True) - npy = np.sum(big_npy[ix] * np.expand_dims(weight, axis=2), axis=1) - - if self.is_half: - npy = npy.astype("float16") - feats = ( - torch.from_numpy(npy).unsqueeze(0).to(self.device) * index_rate - + (1 - index_rate) * feats - ) - - feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1) - if protect < 0.5 and pitch != None and pitchf != None: - feats0 = F.interpolate(feats0.permute(0, 2, 1), scale_factor=2).permute( - 0, 2, 1 - ) - t1 = ttime() - p_len = audio0.shape[0] // self.window - if feats.shape[1] < p_len: - p_len = feats.shape[1] - if pitch != None and pitchf != None: - pitch = pitch[:, :p_len] - pitchf = pitchf[:, :p_len] - - if protect < 0.5 and pitch != None and pitchf != None: - pitchff = pitchf.clone() - pitchff[pitchf > 0] = 1 - pitchff[pitchf < 1] = protect - pitchff = pitchff.unsqueeze(-1) - feats = feats * pitchff + feats0 * (1 - pitchff) - feats = feats.to(feats0.dtype) - p_len = torch.tensor([p_len], device=self.device).long() - with torch.no_grad(): - if pitch != None and pitchf != None: - audio1 = ( - (net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0]) - .data.cpu() - .float() - .numpy() - ) - else: - audio1 = ( - (net_g.infer(feats, p_len, sid)[0][0, 0]).data.cpu().float().numpy() - ) - del feats, p_len, padding_mask - if torch.cuda.is_available(): - torch.cuda.empty_cache() - t2 = ttime() - times[0] += t1 - t0 - 
times[2] += t2 - t1 - return audio1 - - def pipeline( - self, - model, - net_g, - sid, - audio, - input_audio_path, - times, - f0_up_key, - f0_method, - file_index, - # file_big_npy, - index_rate, - if_f0, - filter_radius, - tgt_sr, - resample_sr, - rms_mix_rate, - version, - protect, - f0_file=None, - ): - if ( - file_index != "" - # and file_big_npy != "" - # and os.path.exists(file_big_npy) == True - and os.path.exists(file_index) == True - and index_rate != 0 - ): - try: - index = faiss.read_index(file_index) - # big_npy = np.load(file_big_npy) - big_npy = index.reconstruct_n(0, index.ntotal) - except: - traceback.print_exc() - index = big_npy = None - else: - index = big_npy = None - audio = signal.filtfilt(bh, ah, audio) - audio_pad = np.pad(audio, (self.window // 2, self.window // 2), mode="reflect") - opt_ts = [] - if audio_pad.shape[0] > self.t_max: - audio_sum = np.zeros_like(audio) - for i in range(self.window): - audio_sum += audio_pad[i : i - self.window] - for t in range(self.t_center, audio.shape[0], self.t_center): - opt_ts.append( - t - - self.t_query - + np.where( - np.abs(audio_sum[t - self.t_query : t + self.t_query]) - == np.abs(audio_sum[t - self.t_query : t + self.t_query]).min() - )[0][0] - ) - s = 0 - audio_opt = [] - t = None - t1 = ttime() - audio_pad = np.pad(audio, (self.t_pad, self.t_pad), mode="reflect") - p_len = audio_pad.shape[0] // self.window - inp_f0 = None - if hasattr(f0_file, "name") == True: - try: - with open(f0_file.name, "r") as f: - lines = f.read().strip("\n").split("\n") - inp_f0 = [] - for line in lines: - inp_f0.append([float(i) for i in line.split(",")]) - inp_f0 = np.array(inp_f0, dtype="float32") - except: - traceback.print_exc() - sid = torch.tensor(sid, device=self.device).unsqueeze(0).long() - pitch, pitchf = None, None - if if_f0 == 1: - pitch, pitchf = self.get_f0( - input_audio_path, - audio_pad, - p_len, - f0_up_key, - f0_method, - filter_radius, - inp_f0, - ) - pitch = pitch[:p_len] - pitchf = pitchf[:p_len] - if self.device == "mps": - pitchf = pitchf.astype(np.float32) - pitch = torch.tensor(pitch, device=self.device).unsqueeze(0).long() - pitchf = torch.tensor(pitchf, device=self.device).unsqueeze(0).float() - t2 = ttime() - times[1] += t2 - t1 - for t in opt_ts: - t = t // self.window * self.window - if if_f0 == 1: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[s : t + self.t_pad2 + self.window], - pitch[:, s // self.window : (t + self.t_pad2) // self.window], - pitchf[:, s // self.window : (t + self.t_pad2) // self.window], - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - else: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[s : t + self.t_pad2 + self.window], - None, - None, - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - s = t - if if_f0 == 1: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[t:], - pitch[:, t // self.window :] if t is not None else pitch, - pitchf[:, t // self.window :] if t is not None else pitchf, - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - else: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[t:], - None, - None, - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - audio_opt = np.concatenate(audio_opt) - if rms_mix_rate != 1: - audio_opt = change_rms(audio, 16000, audio_opt, tgt_sr, 
rms_mix_rate) - if resample_sr >= 16000 and tgt_sr != resample_sr: - audio_opt = librosa.resample( - audio_opt, orig_sr=tgt_sr, target_sr=resample_sr - ) - audio_max = np.abs(audio_opt).max() / 0.99 - max_int16 = 32768 - if audio_max > 1: - max_int16 /= audio_max - audio_opt = (audio_opt * max_int16).astype(np.int16) - del pitch, pitchf, sid - if torch.cuda.is_available(): - torch.cuda.empty_cache() - return audio_opt diff --git a/spaces/LaynzKunz/RVC-Inference-webui-grado-colab-huggingafce/lib/infer_pack/transforms.py b/spaces/LaynzKunz/RVC-Inference-webui-grado-colab-huggingafce/lib/infer_pack/transforms.py deleted file mode 100644 index a11f799e023864ff7082c1f49c0cc18351a13b47..0000000000000000000000000000000000000000 --- a/spaces/LaynzKunz/RVC-Inference-webui-grado-colab-huggingafce/lib/infer_pack/transforms.py +++ /dev/null @@ -1,209 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = {"tails": tails, "tail_bound": tail_bound} - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum(inputs[..., None] >= bin_locations, dim=-1) - 1 - - -def unconstrained_rational_quadratic_spline( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails="linear", - tail_bound=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == "linear": - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError("{} tails are not implemented.".format(tails)) - - ( - outputs[inside_interval_mask], - logabsdet[inside_interval_mask], - ) = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, - right=tail_bound, - bottom=-tail_bound, - top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - 
) - - return outputs, logabsdet - - -def rational_quadratic_spline( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0.0, - right=1.0, - bottom=0.0, - top=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError("Input to a transform is not within its domain") - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError("Minimal bin width too large for the number of bins") - if min_bin_height * num_bins > 1.0: - raise ValueError("Minimal bin height too large for the number of bins") - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode="constant", value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode="constant", value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (inputs - input_cumheights) * ( - input_derivatives + input_derivatives_plus_one - 2 * input_delta - ) + input_heights * (input_delta - input_derivatives) - b = input_heights * input_derivatives - (inputs - input_cumheights) * ( - input_derivatives + input_derivatives_plus_one - 2 * input_delta - ) - c = -input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ( - (input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta - ) - derivative_numerator = input_delta.pow(2) * ( - input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2) - ) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * ( - input_delta * theta.pow(2) + input_derivatives * theta_one_minus_theta - ) - denominator = input_delta + ( - (input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * 
theta_one_minus_theta - ) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * ( - input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2) - ) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/LeeroyVonJenkins/cat-dog-classifier/README.md b/spaces/LeeroyVonJenkins/cat-dog-classifier/README.md deleted file mode 100644 index b6c8c8d677d08be9a9b7e6f6f44befdcfb001086..0000000000000000000000000000000000000000 --- a/spaces/LeeroyVonJenkins/cat-dog-classifier/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Cat Dog Classifier -emoji: 💩 -colorFrom: green -colorTo: red -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Liu-LAB/GPT-academic/theme.py b/spaces/Liu-LAB/GPT-academic/theme.py deleted file mode 100644 index 5ef7e9605896dbdddcaea09e7d804baf3f5696cf..0000000000000000000000000000000000000000 --- a/spaces/Liu-LAB/GPT-academic/theme.py +++ /dev/null @@ -1,353 +0,0 @@ -import gradio as gr -from toolbox import get_conf -CODE_HIGHLIGHT, ADD_WAIFU = get_conf('CODE_HIGHLIGHT', 'ADD_WAIFU') -# gradio可用颜色列表 -# gr.themes.utils.colors.slate (石板色) -# gr.themes.utils.colors.gray (灰色) -# gr.themes.utils.colors.zinc (锌色) -# gr.themes.utils.colors.neutral (中性色) -# gr.themes.utils.colors.stone (石头色) -# gr.themes.utils.colors.red (红色) -# gr.themes.utils.colors.orange (橙色) -# gr.themes.utils.colors.amber (琥珀色) -# gr.themes.utils.colors.yellow (黄色) -# gr.themes.utils.colors.lime (酸橙色) -# gr.themes.utils.colors.green (绿色) -# gr.themes.utils.colors.emerald (祖母绿) -# gr.themes.utils.colors.teal (青蓝色) -# gr.themes.utils.colors.cyan (青色) -# gr.themes.utils.colors.sky (天蓝色) -# gr.themes.utils.colors.blue (蓝色) -# gr.themes.utils.colors.indigo (靛蓝色) -# gr.themes.utils.colors.violet (紫罗兰色) -# gr.themes.utils.colors.purple (紫色) -# gr.themes.utils.colors.fuchsia (洋红色) -# gr.themes.utils.colors.pink (粉红色) -# gr.themes.utils.colors.rose (玫瑰色) - - -def adjust_theme(): - - try: - color_er = gr.themes.utils.colors.fuchsia - set_theme = gr.themes.Default( - primary_hue=gr.themes.utils.colors.orange, - neutral_hue=gr.themes.utils.colors.gray, - font=["sans-serif", "Microsoft YaHei", "ui-sans-serif", "system-ui", - "sans-serif", gr.themes.utils.fonts.GoogleFont("Source Sans Pro")], - font_mono=["ui-monospace", "Consolas", "monospace", gr.themes.utils.fonts.GoogleFont("IBM Plex Mono")]) - set_theme.set( - # Colors - input_background_fill_dark="*neutral_800", - # Transition - button_transition="none", - # Shadows - button_shadow="*shadow_drop", - button_shadow_hover="*shadow_drop_lg", - button_shadow_active="*shadow_inset", - input_shadow="0 0 0 *shadow_spread transparent, *shadow_inset", - input_shadow_focus="0 0 0 *shadow_spread *secondary_50, *shadow_inset", - input_shadow_focus_dark="0 0 0 *shadow_spread *neutral_700, *shadow_inset", - checkbox_label_shadow="*shadow_drop", - block_shadow="*shadow_drop", - form_gap_width="1px", - # Button borders - input_border_width="1px", - input_background_fill="white", - # Gradients - stat_background_fill="linear-gradient(to right, *primary_400, *primary_200)", - stat_background_fill_dark="linear-gradient(to right, *primary_400, *primary_600)", - error_background_fill=f"linear-gradient(to right, {color_er.c100}, 
*background_fill_secondary)", - error_background_fill_dark="*background_fill_primary", - checkbox_label_background_fill="linear-gradient(to top, *neutral_50, white)", - checkbox_label_background_fill_dark="linear-gradient(to top, *neutral_900, *neutral_800)", - checkbox_label_background_fill_hover="linear-gradient(to top, *neutral_100, white)", - checkbox_label_background_fill_hover_dark="linear-gradient(to top, *neutral_900, *neutral_800)", - button_primary_background_fill="linear-gradient(to bottom right, *primary_100, *primary_300)", - button_primary_background_fill_dark="linear-gradient(to bottom right, *primary_500, *primary_600)", - button_primary_background_fill_hover="linear-gradient(to bottom right, *primary_100, *primary_200)", - button_primary_background_fill_hover_dark="linear-gradient(to bottom right, *primary_500, *primary_500)", - button_primary_border_color_dark="*primary_500", - button_secondary_background_fill="linear-gradient(to bottom right, *neutral_100, *neutral_200)", - button_secondary_background_fill_dark="linear-gradient(to bottom right, *neutral_600, *neutral_700)", - button_secondary_background_fill_hover="linear-gradient(to bottom right, *neutral_100, *neutral_100)", - button_secondary_background_fill_hover_dark="linear-gradient(to bottom right, *neutral_600, *neutral_600)", - button_cancel_background_fill=f"linear-gradient(to bottom right, {color_er.c100}, {color_er.c200})", - button_cancel_background_fill_dark=f"linear-gradient(to bottom right, {color_er.c600}, {color_er.c700})", - button_cancel_background_fill_hover=f"linear-gradient(to bottom right, {color_er.c100}, {color_er.c100})", - button_cancel_background_fill_hover_dark=f"linear-gradient(to bottom right, {color_er.c600}, {color_er.c600})", - button_cancel_border_color=color_er.c200, - button_cancel_border_color_dark=color_er.c600, - button_cancel_text_color=color_er.c600, - button_cancel_text_color_dark="white", - ) - - # 添加一个萌萌的看板娘 - if ADD_WAIFU: - js = """ - - - - """ - gradio_original_template_fn = gr.routes.templates.TemplateResponse - def gradio_new_template_fn(*args, **kwargs): - res = gradio_original_template_fn(*args, **kwargs) - res.body = res.body.replace(b'', f'{js}'.encode("utf8")) - res.init_headers() - return res - gr.routes.templates.TemplateResponse = gradio_new_template_fn # override gradio template - except: - set_theme = None - print('gradio版本较旧, 不能自定义字体和颜色') - return set_theme - - -advanced_css = """ -.markdown-body table { - margin: 1em 0; - border-collapse: collapse; - empty-cells: show; -} - -.markdown-body th, .markdown-body td { - border: 1.2px solid var(--border-color-primary); - padding: 5px; -} - -.markdown-body thead { - background-color: rgba(175,184,193,0.2); -} - -.markdown-body thead th { - padding: .5em .2em; -} - -.markdown-body ol, .markdown-body ul { - padding-inline-start: 2em !important; -} - -/* chat box. */ -[class *= "message"] { - border-radius: var(--radius-xl) !important; - /* padding: var(--spacing-xl) !important; */ - /* font-size: var(--text-md) !important; */ - /* line-height: var(--line-md) !important; */ - /* min-height: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); */ - /* min-width: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); */ -} -[data-testid = "bot"] { - max-width: 95%; - /* width: auto !important; */ - border-bottom-left-radius: 0 !important; -} -[data-testid = "user"] { - max-width: 100%; - /* width: auto !important; */ - border-bottom-right-radius: 0 !important; -} - -/* linein code block. 
*/ -.markdown-body code { - display: inline; - white-space: break-spaces; - border-radius: 6px; - margin: 0 2px 0 2px; - padding: .2em .4em .1em .4em; - background-color: rgba(13, 17, 23, 0.95); - color: #c9d1d9; -} - -.dark .markdown-body code { - display: inline; - white-space: break-spaces; - border-radius: 6px; - margin: 0 2px 0 2px; - padding: .2em .4em .1em .4em; - background-color: rgba(175,184,193,0.2); -} - -/* code block css */ -.markdown-body pre code { - display: block; - overflow: auto; - white-space: pre; - background-color: rgba(13, 17, 23, 0.95); - border-radius: 10px; - padding: 1em; - margin: 1em 2em 1em 0.5em; -} - -.dark .markdown-body pre code { - display: block; - overflow: auto; - white-space: pre; - background-color: rgba(175,184,193,0.2); - border-radius: 10px; - padding: 1em; - margin: 1em 2em 1em 0.5em; -} - -""" - -if CODE_HIGHLIGHT: - advanced_css += """ - -.codehilite .hll { background-color: #6e7681 } -.codehilite .c { color: #8b949e; font-style: italic } /* Comment */ -.codehilite .err { color: #f85149 } /* Error */ -.codehilite .esc { color: #c9d1d9 } /* Escape */ -.codehilite .g { color: #c9d1d9 } /* Generic */ -.codehilite .k { color: #ff7b72 } /* Keyword */ -.codehilite .l { color: #a5d6ff } /* Literal */ -.codehilite .n { color: #c9d1d9 } /* Name */ -.codehilite .o { color: #ff7b72; font-weight: bold } /* Operator */ -.codehilite .x { color: #c9d1d9 } /* Other */ -.codehilite .p { color: #c9d1d9 } /* Punctuation */ -.codehilite .ch { color: #8b949e; font-style: italic } /* Comment.Hashbang */ -.codehilite .cm { color: #8b949e; font-style: italic } /* Comment.Multiline */ -.codehilite .cp { color: #8b949e; font-weight: bold; font-style: italic } /* Comment.Preproc */ -.codehilite .cpf { color: #8b949e; font-style: italic } /* Comment.PreprocFile */ -.codehilite .c1 { color: #8b949e; font-style: italic } /* Comment.Single */ -.codehilite .cs { color: #8b949e; font-weight: bold; font-style: italic } /* Comment.Special */ -.codehilite .gd { color: #ffa198; background-color: #490202 } /* Generic.Deleted */ -.codehilite .ge { color: #c9d1d9; font-style: italic } /* Generic.Emph */ -.codehilite .gr { color: #ffa198 } /* Generic.Error */ -.codehilite .gh { color: #79c0ff; font-weight: bold } /* Generic.Heading */ -.codehilite .gi { color: #56d364; background-color: #0f5323 } /* Generic.Inserted */ -.codehilite .go { color: #8b949e } /* Generic.Output */ -.codehilite .gp { color: #8b949e } /* Generic.Prompt */ -.codehilite .gs { color: #c9d1d9; font-weight: bold } /* Generic.Strong */ -.codehilite .gu { color: #79c0ff } /* Generic.Subheading */ -.codehilite .gt { color: #ff7b72 } /* Generic.Traceback */ -.codehilite .g-Underline { color: #c9d1d9; text-decoration: underline } /* Generic.Underline */ -.codehilite .kc { color: #79c0ff } /* Keyword.Constant */ -.codehilite .kd { color: #ff7b72 } /* Keyword.Declaration */ -.codehilite .kn { color: #ff7b72 } /* Keyword.Namespace */ -.codehilite .kp { color: #79c0ff } /* Keyword.Pseudo */ -.codehilite .kr { color: #ff7b72 } /* Keyword.Reserved */ -.codehilite .kt { color: #ff7b72 } /* Keyword.Type */ -.codehilite .ld { color: #79c0ff } /* Literal.Date */ -.codehilite .m { color: #a5d6ff } /* Literal.Number */ -.codehilite .s { color: #a5d6ff } /* Literal.String */ -.codehilite .na { color: #c9d1d9 } /* Name.Attribute */ -.codehilite .nb { color: #c9d1d9 } /* Name.Builtin */ -.codehilite .nc { color: #f0883e; font-weight: bold } /* Name.Class */ -.codehilite .no { color: #79c0ff; font-weight: bold } /* Name.Constant */ 
-.codehilite .nd { color: #d2a8ff; font-weight: bold } /* Name.Decorator */ -.codehilite .ni { color: #ffa657 } /* Name.Entity */ -.codehilite .ne { color: #f0883e; font-weight: bold } /* Name.Exception */ -.codehilite .nf { color: #d2a8ff; font-weight: bold } /* Name.Function */ -.codehilite .nl { color: #79c0ff; font-weight: bold } /* Name.Label */ -.codehilite .nn { color: #ff7b72 } /* Name.Namespace */ -.codehilite .nx { color: #c9d1d9 } /* Name.Other */ -.codehilite .py { color: #79c0ff } /* Name.Property */ -.codehilite .nt { color: #7ee787 } /* Name.Tag */ -.codehilite .nv { color: #79c0ff } /* Name.Variable */ -.codehilite .ow { color: #ff7b72; font-weight: bold } /* Operator.Word */ -.codehilite .pm { color: #c9d1d9 } /* Punctuation.Marker */ -.codehilite .w { color: #6e7681 } /* Text.Whitespace */ -.codehilite .mb { color: #a5d6ff } /* Literal.Number.Bin */ -.codehilite .mf { color: #a5d6ff } /* Literal.Number.Float */ -.codehilite .mh { color: #a5d6ff } /* Literal.Number.Hex */ -.codehilite .mi { color: #a5d6ff } /* Literal.Number.Integer */ -.codehilite .mo { color: #a5d6ff } /* Literal.Number.Oct */ -.codehilite .sa { color: #79c0ff } /* Literal.String.Affix */ -.codehilite .sb { color: #a5d6ff } /* Literal.String.Backtick */ -.codehilite .sc { color: #a5d6ff } /* Literal.String.Char */ -.codehilite .dl { color: #79c0ff } /* Literal.String.Delimiter */ -.codehilite .sd { color: #a5d6ff } /* Literal.String.Doc */ -.codehilite .s2 { color: #a5d6ff } /* Literal.String.Double */ -.codehilite .se { color: #79c0ff } /* Literal.String.Escape */ -.codehilite .sh { color: #79c0ff } /* Literal.String.Heredoc */ -.codehilite .si { color: #a5d6ff } /* Literal.String.Interpol */ -.codehilite .sx { color: #a5d6ff } /* Literal.String.Other */ -.codehilite .sr { color: #79c0ff } /* Literal.String.Regex */ -.codehilite .s1 { color: #a5d6ff } /* Literal.String.Single */ -.codehilite .ss { color: #a5d6ff } /* Literal.String.Symbol */ -.codehilite .bp { color: #c9d1d9 } /* Name.Builtin.Pseudo */ -.codehilite .fm { color: #d2a8ff; font-weight: bold } /* Name.Function.Magic */ -.codehilite .vc { color: #79c0ff } /* Name.Variable.Class */ -.codehilite .vg { color: #79c0ff } /* Name.Variable.Global */ -.codehilite .vi { color: #79c0ff } /* Name.Variable.Instance */ -.codehilite .vm { color: #79c0ff } /* Name.Variable.Magic */ -.codehilite .il { color: #a5d6ff } /* Literal.Number.Integer.Long */ - -.dark .codehilite .hll { background-color: #2C3B41 } -.dark .codehilite .c { color: #79d618; font-style: italic } /* Comment */ -.dark .codehilite .err { color: #FF5370 } /* Error */ -.dark .codehilite .esc { color: #89DDFF } /* Escape */ -.dark .codehilite .g { color: #EEFFFF } /* Generic */ -.dark .codehilite .k { color: #BB80B3 } /* Keyword */ -.dark .codehilite .l { color: #C3E88D } /* Literal */ -.dark .codehilite .n { color: #EEFFFF } /* Name */ -.dark .codehilite .o { color: #89DDFF } /* Operator */ -.dark .codehilite .p { color: #89DDFF } /* Punctuation */ -.dark .codehilite .ch { color: #79d618; font-style: italic } /* Comment.Hashbang */ -.dark .codehilite .cm { color: #79d618; font-style: italic } /* Comment.Multiline */ -.dark .codehilite .cp { color: #79d618; font-style: italic } /* Comment.Preproc */ -.dark .codehilite .cpf { color: #79d618; font-style: italic } /* Comment.PreprocFile */ -.dark .codehilite .c1 { color: #79d618; font-style: italic } /* Comment.Single */ -.dark .codehilite .cs { color: #79d618; font-style: italic } /* Comment.Special */ -.dark .codehilite .gd { color: #FF5370 } 
/* Generic.Deleted */ -.dark .codehilite .ge { color: #89DDFF } /* Generic.Emph */ -.dark .codehilite .gr { color: #FF5370 } /* Generic.Error */ -.dark .codehilite .gh { color: #C3E88D } /* Generic.Heading */ -.dark .codehilite .gi { color: #C3E88D } /* Generic.Inserted */ -.dark .codehilite .go { color: #79d618 } /* Generic.Output */ -.dark .codehilite .gp { color: #FFCB6B } /* Generic.Prompt */ -.dark .codehilite .gs { color: #FF5370 } /* Generic.Strong */ -.dark .codehilite .gu { color: #89DDFF } /* Generic.Subheading */ -.dark .codehilite .gt { color: #FF5370 } /* Generic.Traceback */ -.dark .codehilite .kc { color: #89DDFF } /* Keyword.Constant */ -.dark .codehilite .kd { color: #BB80B3 } /* Keyword.Declaration */ -.dark .codehilite .kn { color: #89DDFF; font-style: italic } /* Keyword.Namespace */ -.dark .codehilite .kp { color: #89DDFF } /* Keyword.Pseudo */ -.dark .codehilite .kr { color: #BB80B3 } /* Keyword.Reserved */ -.dark .codehilite .kt { color: #BB80B3 } /* Keyword.Type */ -.dark .codehilite .ld { color: #C3E88D } /* Literal.Date */ -.dark .codehilite .m { color: #F78C6C } /* Literal.Number */ -.dark .codehilite .s { color: #C3E88D } /* Literal.String */ -.dark .codehilite .na { color: #BB80B3 } /* Name.Attribute */ -.dark .codehilite .nb { color: #82AAFF } /* Name.Builtin */ -.dark .codehilite .nc { color: #FFCB6B } /* Name.Class */ -.dark .codehilite .no { color: #EEFFFF } /* Name.Constant */ -.dark .codehilite .nd { color: #82AAFF } /* Name.Decorator */ -.dark .codehilite .ni { color: #89DDFF } /* Name.Entity */ -.dark .codehilite .ne { color: #FFCB6B } /* Name.Exception */ -.dark .codehilite .nf { color: #82AAFF } /* Name.Function */ -.dark .codehilite .nl { color: #82AAFF } /* Name.Label */ -.dark .codehilite .nn { color: #FFCB6B } /* Name.Namespace */ -.dark .codehilite .nx { color: #EEFFFF } /* Name.Other */ -.dark .codehilite .py { color: #FFCB6B } /* Name.Property */ -.dark .codehilite .nt { color: #FF5370 } /* Name.Tag */ -.dark .codehilite .nv { color: #89DDFF } /* Name.Variable */ -.dark .codehilite .ow { color: #89DDFF; font-style: italic } /* Operator.Word */ -.dark .codehilite .pm { color: #89DDFF } /* Punctuation.Marker */ -.dark .codehilite .w { color: #EEFFFF } /* Text.Whitespace */ -.dark .codehilite .mb { color: #F78C6C } /* Literal.Number.Bin */ -.dark .codehilite .mf { color: #F78C6C } /* Literal.Number.Float */ -.dark .codehilite .mh { color: #F78C6C } /* Literal.Number.Hex */ -.dark .codehilite .mi { color: #F78C6C } /* Literal.Number.Integer */ -.dark .codehilite .mo { color: #F78C6C } /* Literal.Number.Oct */ -.dark .codehilite .sa { color: #BB80B3 } /* Literal.String.Affix */ -.dark .codehilite .sb { color: #C3E88D } /* Literal.String.Backtick */ -.dark .codehilite .sc { color: #C3E88D } /* Literal.String.Char */ -.dark .codehilite .dl { color: #EEFFFF } /* Literal.String.Delimiter */ -.dark .codehilite .sd { color: #79d618; font-style: italic } /* Literal.String.Doc */ -.dark .codehilite .s2 { color: #C3E88D } /* Literal.String.Double */ -.dark .codehilite .se { color: #EEFFFF } /* Literal.String.Escape */ -.dark .codehilite .sh { color: #C3E88D } /* Literal.String.Heredoc */ -.dark .codehilite .si { color: #89DDFF } /* Literal.String.Interpol */ -.dark .codehilite .sx { color: #C3E88D } /* Literal.String.Other */ -.dark .codehilite .sr { color: #89DDFF } /* Literal.String.Regex */ -.dark .codehilite .s1 { color: #C3E88D } /* Literal.String.Single */ -.dark .codehilite .ss { color: #89DDFF } /* Literal.String.Symbol */ -.dark .codehilite .bp { 
color: #89DDFF } /* Name.Builtin.Pseudo */ -.dark .codehilite .fm { color: #82AAFF } /* Name.Function.Magic */ -.dark .codehilite .vc { color: #89DDFF } /* Name.Variable.Class */ -.dark .codehilite .vg { color: #89DDFF } /* Name.Variable.Global */ -.dark .codehilite .vi { color: #89DDFF } /* Name.Variable.Instance */ -.dark .codehilite .vm { color: #82AAFF } /* Name.Variable.Magic */ -.dark .codehilite .il { color: #F78C6C } /* Literal.Number.Integer.Long */ - -""" diff --git a/spaces/LuxOAI/ChatGpt-Web/app/api/config/route.ts b/spaces/LuxOAI/ChatGpt-Web/app/api/config/route.ts deleted file mode 100644 index 2b3bcbf203e9cfbf671b3143dd51160cb3e1f812..0000000000000000000000000000000000000000 --- a/spaces/LuxOAI/ChatGpt-Web/app/api/config/route.ts +++ /dev/null @@ -1,26 +0,0 @@ -import { NextResponse } from "next/server"; - -import { getServerSideConfig } from "../../config/server"; - -const serverConfig = getServerSideConfig(); - -// Danger! Don not write any secret value here! -// 警告!不要在这里写入任何敏感信息! -const DANGER_CONFIG = { - needCode: serverConfig.needCode, - hideUserApiKey: serverConfig.hideUserApiKey, - enableGPT4: serverConfig.enableGPT4, -}; - -declare global { - type DangerConfig = typeof DANGER_CONFIG; -} - -async function handle() { - return NextResponse.json(DANGER_CONFIG); -} - -export const GET = handle; -export const POST = handle; - -export const runtime = "edge"; diff --git a/spaces/LuxOAI/ChatGpt-Web/app/command.ts b/spaces/LuxOAI/ChatGpt-Web/app/command.ts deleted file mode 100644 index 919e94e53ee823d0c52fd28ac99779f134efee8c..0000000000000000000000000000000000000000 --- a/spaces/LuxOAI/ChatGpt-Web/app/command.ts +++ /dev/null @@ -1,28 +0,0 @@ -import { useSearchParams } from "react-router-dom"; - -type Command = (param: string) => void; -interface Commands { - fill?: Command; - submit?: Command; - mask?: Command; -} - -export function useCommand(commands: Commands = {}) { - const [searchParams, setSearchParams] = useSearchParams(); - - if (commands === undefined) return; - - let shouldUpdate = false; - searchParams.forEach((param, name) => { - const commandName = name as keyof Commands; - if (typeof commands[commandName] === "function") { - commands[commandName]!(param); - searchParams.delete(name); - shouldUpdate = true; - } - }); - - if (shouldUpdate) { - setSearchParams(searchParams); - } -} \ No newline at end of file diff --git a/spaces/M52395239m/Image_Face_Upscale_Restoration-GFPGAN/app.py b/spaces/M52395239m/Image_Face_Upscale_Restoration-GFPGAN/app.py deleted file mode 100644 index 67fcac0171bbb77d2b1d3b23b7293635b6297e28..0000000000000000000000000000000000000000 --- a/spaces/M52395239m/Image_Face_Upscale_Restoration-GFPGAN/app.py +++ /dev/null @@ -1,142 +0,0 @@ -import os - -import cv2 -import gradio as gr -import torch -from basicsr.archs.srvgg_arch import SRVGGNetCompact -from gfpgan.utils import GFPGANer -from realesrgan.utils import RealESRGANer - -os.system("pip freeze") -# download weights -if not os.path.exists('realesr-general-x4v3.pth'): - os.system("wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-general-x4v3.pth -P .") -if not os.path.exists('GFPGANv1.2.pth'): - os.system("wget https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.2.pth -P .") -if not os.path.exists('GFPGANv1.3.pth'): - os.system("wget https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth -P .") -if not os.path.exists('GFPGANv1.4.pth'): - os.system("wget 
https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.4.pth -P .") -if not os.path.exists('RestoreFormer.pth'): - os.system("wget https://github.com/TencentARC/GFPGAN/releases/download/v1.3.4/RestoreFormer.pth -P .") -if not os.path.exists('CodeFormer.pth'): - os.system("wget https://github.com/TencentARC/GFPGAN/releases/download/v1.3.4/CodeFormer.pth -P .") - -torch.hub.download_url_to_file( - 'https://thumbs.dreamstime.com/b/tower-bridge-traditional-red-bus-black-white-colors-view-to-tower-bridge-london-black-white-colors-108478942.jpg', - 'a1.jpg') -torch.hub.download_url_to_file( - 'https://media.istockphoto.com/id/523514029/photo/london-skyline-b-w.jpg?s=612x612&w=0&k=20&c=kJS1BAtfqYeUDaORupj0sBPc1hpzJhBUUqEFfRnHzZ0=', - 'a2.jpg') -torch.hub.download_url_to_file( - 'https://i.guim.co.uk/img/media/06f614065ed82ca0e917b149a32493c791619854/0_0_3648_2789/master/3648.jpg?width=700&quality=85&auto=format&fit=max&s=05764b507c18a38590090d987c8b6202', - 'a3.jpg') -torch.hub.download_url_to_file( - 'https://i.pinimg.com/736x/46/96/9e/46969eb94aec2437323464804d27706d--victorian-london-victorian-era.jpg', - 'a4.jpg') - -# background enhancer with RealESRGAN -model = SRVGGNetCompact(num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=32, upscale=4, act_type='prelu') -model_path = 'realesr-general-x4v3.pth' -half = True if torch.cuda.is_available() else False -upsampler = RealESRGANer(scale=4, model_path=model_path, model=model, tile=0, tile_pad=10, pre_pad=0, half=half) - -os.makedirs('output', exist_ok=True) - - -# def inference(img, version, scale, weight): -def inference(img, version, scale): - # weight /= 100 - print(img, version, scale) - try: - extension = os.path.splitext(os.path.basename(str(img)))[1] - img = cv2.imread(img, cv2.IMREAD_UNCHANGED) - if len(img.shape) == 3 and img.shape[2] == 4: - img_mode = 'RGBA' - elif len(img.shape) == 2: # for gray inputs - img_mode = None - img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) - else: - img_mode = None - - h, w = img.shape[0:2] - if h < 300: - img = cv2.resize(img, (w * 2, h * 2), interpolation=cv2.INTER_LANCZOS4) - - if version == 'v1.2': - face_enhancer = GFPGANer( - model_path='GFPGANv1.2.pth', upscale=2, arch='clean', channel_multiplier=2, bg_upsampler=upsampler) - elif version == 'v1.3': - face_enhancer = GFPGANer( - model_path='GFPGANv1.3.pth', upscale=2, arch='clean', channel_multiplier=2, bg_upsampler=upsampler) - elif version == 'v1.4': - face_enhancer = GFPGANer( - model_path='GFPGANv1.4.pth', upscale=2, arch='clean', channel_multiplier=2, bg_upsampler=upsampler) - elif version == 'RestoreFormer': - face_enhancer = GFPGANer( - model_path='RestoreFormer.pth', upscale=2, arch='RestoreFormer', channel_multiplier=2, bg_upsampler=upsampler) - elif version == 'CodeFormer': - face_enhancer = GFPGANer( - model_path='CodeFormer.pth', upscale=2, arch='CodeFormer', channel_multiplier=2, bg_upsampler=upsampler) - elif version == 'RealESR-General-x4v3': - face_enhancer = GFPGANer( - model_path='realesr-general-x4v3.pth', upscale=2, arch='realesr-general', channel_multiplier=2, bg_upsampler=upsampler) - - try: - # _, _, output = face_enhancer.enhance(img, has_aligned=False, only_center_face=False, paste_back=True, weight=weight) - _, _, output = face_enhancer.enhance(img, has_aligned=False, only_center_face=False, paste_back=True) - except RuntimeError as error: - print('Error', error) - - try: - if scale != 2: - interpolation = cv2.INTER_AREA if scale < 2 else cv2.INTER_LANCZOS4 - h, w = img.shape[0:2] - output = cv2.resize(output, 
(int(w * scale / 2), int(h * scale / 2)), interpolation=interpolation) - except Exception as error: - print('wrong scale input.', error) - if img_mode == 'RGBA': # RGBA images should be saved in png format - extension = 'png' - else: - extension = 'jpg' - save_path = f'output/out.{extension}' - cv2.imwrite(save_path, output) - - output = cv2.cvtColor(output, cv2.COLOR_BGR2RGB) - return output, save_path - except Exception as error: - print('global exception', error) - return None, None - - -title = "Image Upscaling & Restoration(esp. Face) using GFPGAN Algorithm" -description = r"""Gradio demo for GFPGAN: Towards Real-World Blind Face Restoration and Upscalling of the image with a Generative Facial Prior.
      -In practice, the algorithm is used to restore your **old photos** or improve **AI-generated faces**.<br>
      -To use it, simply upload the image you want to restore.<br>
      -""" -article = r""" -[![download](https://img.shields.io/github/downloads/TencentARC/GFPGAN/total.svg)](https://github.com/TencentARC/GFPGAN/releases) -[![GitHub Stars](https://img.shields.io/github/stars/TencentARC/GFPGAN?style=social)](https://github.com/TencentARC/GFPGAN) -[![arXiv](https://img.shields.io/badge/arXiv-Paper-.svg)](https://arxiv.org/abs/2101.04061) -
      visitor badge
      -""" -demo = gr.Interface( - inference, [ - gr.inputs.Image(type="filepath", label="Input"), - # gr.inputs.Radio(['v1.2', 'v1.3', 'v1.4', 'RestoreFormer', 'CodeFormer'], type="value", default='v1.4', label='version'), - gr.inputs.Radio(['v1.2', 'v1.3', 'v1.4', 'RestoreFormer','CodeFormer','RealESR-General-x4v3'], type="value", default='v1.4', label='version'), - gr.inputs.Number(label="Rescaling factor", default=2), - # gr.Slider(0, 100, label='Weight, only for CodeFormer. 0 for better quality, 100 for better identity', default=50) - ], [ - gr.outputs.Image(type="numpy", label="Output (The whole image)"), - gr.outputs.File(label="Download the output image") - ], - title=title, - description=description, - article=article, - # examples=[['AI-generate.jpg', 'v1.4', 2, 50], ['lincoln.jpg', 'v1.4', 2, 50], ['Blake_Lively.jpg', 'v1.4', 2, 50], - # ['10045.png', 'v1.4', 2, 50]]).launch() - examples=[['a1.jpg', 'v1.4', 2], ['a2.jpg', 'v1.4', 2], ['a3.jpg', 'v1.4', 2],['a4.jpg', 'v1.4', 2]]) - -demo.queue(concurrency_count=4) -demo.launch() \ No newline at end of file diff --git a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/s2m/s2m_network.py b/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/s2m/s2m_network.py deleted file mode 100644 index e4f9a3fc4fcc9cc4210485fe24e4d740464d3f8a..0000000000000000000000000000000000000000 --- a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/s2m/s2m_network.py +++ /dev/null @@ -1,65 +0,0 @@ -# Credit: https://github.com/VainF/DeepLabV3Plus-Pytorch - -from .utils import IntermediateLayerGetter -from ._deeplab import DeepLabHead, DeepLabHeadV3Plus, DeepLabV3 -from . import s2m_resnet - -def _segm_resnet(name, backbone_name, num_classes, output_stride, pretrained_backbone): - - if output_stride==8: - replace_stride_with_dilation=[False, True, True] - aspp_dilate = [12, 24, 36] - else: - replace_stride_with_dilation=[False, False, True] - aspp_dilate = [6, 12, 18] - - backbone = s2m_resnet.__dict__[backbone_name]( - pretrained=pretrained_backbone, - replace_stride_with_dilation=replace_stride_with_dilation) - - inplanes = 2048 - low_level_planes = 256 - - if name=='deeplabv3plus': - return_layers = {'layer4': 'out', 'layer1': 'low_level'} - classifier = DeepLabHeadV3Plus(inplanes, low_level_planes, num_classes, aspp_dilate) - elif name=='deeplabv3': - return_layers = {'layer4': 'out'} - classifier = DeepLabHead(inplanes , num_classes, aspp_dilate) - backbone = IntermediateLayerGetter(backbone, return_layers=return_layers) - - model = DeepLabV3(backbone, classifier) - return model - -def _load_model(arch_type, backbone, num_classes, output_stride, pretrained_backbone): - - if backbone.startswith('resnet'): - model = _segm_resnet(arch_type, backbone, num_classes, output_stride=output_stride, pretrained_backbone=pretrained_backbone) - else: - raise NotImplementedError - return model - - -# Deeplab v3 -def deeplabv3_resnet50(num_classes=1, output_stride=16, pretrained_backbone=False): - """Constructs a DeepLabV3 model with a ResNet-50 backbone. - - Args: - num_classes (int): number of classes. - output_stride (int): output stride for deeplab. - pretrained_backbone (bool): If True, use the pretrained backbone. 
- """ - return _load_model('deeplabv3', 'resnet50', num_classes, output_stride=output_stride, pretrained_backbone=pretrained_backbone) - - -# Deeplab v3+ -def deeplabv3plus_resnet50(num_classes=1, output_stride=16, pretrained_backbone=False): - """Constructs a DeepLabV3 model with a ResNet-50 backbone. - - Args: - num_classes (int): number of classes. - output_stride (int): output stride for deeplab. - pretrained_backbone (bool): If True, use the pretrained backbone. - """ - return _load_model('deeplabv3plus', 'resnet50', num_classes, output_stride=output_stride, pretrained_backbone=pretrained_backbone) - diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/datasets/cityscapes_769x769.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/datasets/cityscapes_769x769.py deleted file mode 100644 index 336c7b254fe392b4703039fec86a83acdbd2e1a5..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/datasets/cityscapes_769x769.py +++ /dev/null @@ -1,35 +0,0 @@ -_base_ = './cityscapes.py' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -crop_size = (769, 769) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations'), - dict(type='Resize', img_scale=(2049, 1025), ratio_range=(0.5, 2.0)), - dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75), - dict(type='RandomFlip', prob=0.5), - dict(type='PhotoMetricDistortion'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_semantic_seg']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(2049, 1025), - # img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75], - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - train=dict(pipeline=train_pipeline), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/models/backbones/mobilenet_v3.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/models/backbones/mobilenet_v3.py deleted file mode 100644 index 16817400b4102899794fe64c9644713a4e54e2f9..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/models/backbones/mobilenet_v3.py +++ /dev/null @@ -1,255 +0,0 @@ -import logging - -import annotator.uniformer.mmcv as mmcv -import torch.nn as nn -from annotator.uniformer.mmcv.cnn import ConvModule, constant_init, kaiming_init -from annotator.uniformer.mmcv.cnn.bricks import Conv2dAdaptivePadding -from annotator.uniformer.mmcv.runner import load_checkpoint -from torch.nn.modules.batchnorm import _BatchNorm - -from ..builder import BACKBONES -from ..utils import InvertedResidualV3 as InvertedResidual - - -@BACKBONES.register_module() -class MobileNetV3(nn.Module): - """MobileNetV3 backbone. - - This backbone is the improved implementation of `Searching for MobileNetV3 - `_. - - Args: - arch (str): Architecture of mobilnetv3, from {'small', 'large'}. - Default: 'small'. - conv_cfg (dict): Config dict for convolution layer. - Default: None, which means using conv2d. - norm_cfg (dict): Config dict for normalization layer. 
- Default: dict(type='BN'). - out_indices (tuple[int]): Output from which layer. - Default: (0, 1, 12). - frozen_stages (int): Stages to be frozen (all param fixed). - Default: -1, which means not freezing any parameters. - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. Default: False. - with_cp (bool): Use checkpoint or not. Using checkpoint will save - some memory while slowing down the training speed. - Default: False. - """ - # Parameters to build each block: - # [kernel size, mid channels, out channels, with_se, act type, stride] - arch_settings = { - 'small': [[3, 16, 16, True, 'ReLU', 2], # block0 layer1 os=4 - [3, 72, 24, False, 'ReLU', 2], # block1 layer2 os=8 - [3, 88, 24, False, 'ReLU', 1], - [5, 96, 40, True, 'HSwish', 2], # block2 layer4 os=16 - [5, 240, 40, True, 'HSwish', 1], - [5, 240, 40, True, 'HSwish', 1], - [5, 120, 48, True, 'HSwish', 1], # block3 layer7 os=16 - [5, 144, 48, True, 'HSwish', 1], - [5, 288, 96, True, 'HSwish', 2], # block4 layer9 os=32 - [5, 576, 96, True, 'HSwish', 1], - [5, 576, 96, True, 'HSwish', 1]], - 'large': [[3, 16, 16, False, 'ReLU', 1], # block0 layer1 os=2 - [3, 64, 24, False, 'ReLU', 2], # block1 layer2 os=4 - [3, 72, 24, False, 'ReLU', 1], - [5, 72, 40, True, 'ReLU', 2], # block2 layer4 os=8 - [5, 120, 40, True, 'ReLU', 1], - [5, 120, 40, True, 'ReLU', 1], - [3, 240, 80, False, 'HSwish', 2], # block3 layer7 os=16 - [3, 200, 80, False, 'HSwish', 1], - [3, 184, 80, False, 'HSwish', 1], - [3, 184, 80, False, 'HSwish', 1], - [3, 480, 112, True, 'HSwish', 1], # block4 layer11 os=16 - [3, 672, 112, True, 'HSwish', 1], - [5, 672, 160, True, 'HSwish', 2], # block5 layer13 os=32 - [5, 960, 160, True, 'HSwish', 1], - [5, 960, 160, True, 'HSwish', 1]] - } # yapf: disable - - def __init__(self, - arch='small', - conv_cfg=None, - norm_cfg=dict(type='BN'), - out_indices=(0, 1, 12), - frozen_stages=-1, - reduction_factor=1, - norm_eval=False, - with_cp=False): - super(MobileNetV3, self).__init__() - assert arch in self.arch_settings - assert isinstance(reduction_factor, int) and reduction_factor > 0 - assert mmcv.is_tuple_of(out_indices, int) - for index in out_indices: - if index not in range(0, len(self.arch_settings[arch]) + 2): - raise ValueError( - 'the item in out_indices must in ' - f'range(0, {len(self.arch_settings[arch])+2}). ' - f'But received {index}') - - if frozen_stages not in range(-1, len(self.arch_settings[arch]) + 2): - raise ValueError('frozen_stages must be in range(-1, ' - f'{len(self.arch_settings[arch])+2}). 
' - f'But received {frozen_stages}') - self.arch = arch - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.out_indices = out_indices - self.frozen_stages = frozen_stages - self.reduction_factor = reduction_factor - self.norm_eval = norm_eval - self.with_cp = with_cp - self.layers = self._make_layer() - - def _make_layer(self): - layers = [] - - # build the first layer (layer0) - in_channels = 16 - layer = ConvModule( - in_channels=3, - out_channels=in_channels, - kernel_size=3, - stride=2, - padding=1, - conv_cfg=dict(type='Conv2dAdaptivePadding'), - norm_cfg=self.norm_cfg, - act_cfg=dict(type='HSwish')) - self.add_module('layer0', layer) - layers.append('layer0') - - layer_setting = self.arch_settings[self.arch] - for i, params in enumerate(layer_setting): - (kernel_size, mid_channels, out_channels, with_se, act, - stride) = params - - if self.arch == 'large' and i >= 12 or self.arch == 'small' and \ - i >= 8: - mid_channels = mid_channels // self.reduction_factor - out_channels = out_channels // self.reduction_factor - - if with_se: - se_cfg = dict( - channels=mid_channels, - ratio=4, - act_cfg=(dict(type='ReLU'), - dict(type='HSigmoid', bias=3.0, divisor=6.0))) - else: - se_cfg = None - - layer = InvertedResidual( - in_channels=in_channels, - out_channels=out_channels, - mid_channels=mid_channels, - kernel_size=kernel_size, - stride=stride, - se_cfg=se_cfg, - with_expand_conv=(in_channels != mid_channels), - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=dict(type=act), - with_cp=self.with_cp) - in_channels = out_channels - layer_name = 'layer{}'.format(i + 1) - self.add_module(layer_name, layer) - layers.append(layer_name) - - # build the last layer - # block5 layer12 os=32 for small model - # block6 layer16 os=32 for large model - layer = ConvModule( - in_channels=in_channels, - out_channels=576 if self.arch == 'small' else 960, - kernel_size=1, - stride=1, - dilation=4, - padding=0, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=dict(type='HSwish')) - layer_name = 'layer{}'.format(len(layer_setting) + 1) - self.add_module(layer_name, layer) - layers.append(layer_name) - - # next, convert backbone MobileNetV3 to a semantic segmentation version - if self.arch == 'small': - self.layer4.depthwise_conv.conv.stride = (1, 1) - self.layer9.depthwise_conv.conv.stride = (1, 1) - for i in range(4, len(layers)): - layer = getattr(self, layers[i]) - if isinstance(layer, InvertedResidual): - modified_module = layer.depthwise_conv.conv - else: - modified_module = layer.conv - - if i < 9: - modified_module.dilation = (2, 2) - pad = 2 - else: - modified_module.dilation = (4, 4) - pad = 4 - - if not isinstance(modified_module, Conv2dAdaptivePadding): - # Adjust padding - pad *= (modified_module.kernel_size[0] - 1) // 2 - modified_module.padding = (pad, pad) - else: - self.layer7.depthwise_conv.conv.stride = (1, 1) - self.layer13.depthwise_conv.conv.stride = (1, 1) - for i in range(7, len(layers)): - layer = getattr(self, layers[i]) - if isinstance(layer, InvertedResidual): - modified_module = layer.depthwise_conv.conv - else: - modified_module = layer.conv - - if i < 13: - modified_module.dilation = (2, 2) - pad = 2 - else: - modified_module.dilation = (4, 4) - pad = 4 - - if not isinstance(modified_module, Conv2dAdaptivePadding): - # Adjust padding - pad *= (modified_module.kernel_size[0] - 1) // 2 - modified_module.padding = (pad, pad) - - return layers - - def init_weights(self, pretrained=None): - if isinstance(pretrained, str): - logger = logging.getLogger() 
- load_checkpoint(self, pretrained, strict=False, logger=logger) - elif pretrained is None: - for m in self.modules(): - if isinstance(m, nn.Conv2d): - kaiming_init(m) - elif isinstance(m, nn.BatchNorm2d): - constant_init(m, 1) - else: - raise TypeError('pretrained must be a str or None') - - def forward(self, x): - outs = [] - for i, layer_name in enumerate(self.layers): - layer = getattr(self, layer_name) - x = layer(x) - if i in self.out_indices: - outs.append(x) - return outs - - def _freeze_stages(self): - for i in range(self.frozen_stages + 1): - layer = getattr(self, f'layer{i}') - layer.eval() - for param in layer.parameters(): - param.requires_grad = False - - def train(self, mode=True): - super(MobileNetV3, self).train(mode) - self._freeze_stages() - if mode and self.norm_eval: - for m in self.modules(): - if isinstance(m, _BatchNorm): - m.eval() diff --git a/spaces/MercuryLeafer/img-to-music/constants.py b/spaces/MercuryLeafer/img-to-music/constants.py deleted file mode 100644 index 86863d1b778d4c66f0d8e1e0b699f1bb937c1d50..0000000000000000000000000000000000000000 --- a/spaces/MercuryLeafer/img-to-music/constants.py +++ /dev/null @@ -1,9 +0,0 @@ -import numpy as np -import os - -MUBERT_LICENSE = os.environ.get('MUBERT_LICENSE') -MUBERT_TOKEN = os.environ.get('MUBERT_TOKEN') - -MUBERT_MODE = "loop" -MUBERT_TAGS_STRING = 'tribal,action,kids,neo-classic,run 130,pumped,jazz / funk,ethnic,dubtechno,reggae,acid jazz,liquidfunk,funk,witch house,tech house,underground,artists,mystical,disco,sensorium,r&b,agender,psychedelic trance / psytrance,peaceful,run 140,piano,run 160,setting,meditation,christmas,ambient,horror,cinematic,electro house,idm,bass,minimal,underscore,drums,glitchy,beautiful,technology,tribal house,country pop,jazz & funk,documentary,space,classical,valentines,chillstep,experimental,trap,new jack swing,drama,post-rock,tense,corporate,neutral,happy,analog,funky,spiritual,sberzvuk special,chill hop,dramatic,catchy,holidays,fitness 90,optimistic,orchestra,acid techno,energizing,romantic,minimal house,breaks,hyper pop,warm up,dreamy,dark,urban,microfunk,dub,nu disco,vogue,keys,hardcore,aggressive,indie,electro funk,beauty,relaxing,trance,pop,hiphop,soft,acoustic,chillrave / ethno-house,deep techno,angry,dance,fun,dubstep,tropical,latin pop,heroic,world music,inspirational,uplifting,atmosphere,art,epic,advertising,chillout,scary,spooky,slow ballad,saxophone,summer,erotic,jazzy,energy 100,kara mar,xmas,atmospheric,indie pop,hip-hop,yoga,reggaeton,lounge,travel,running,folk,chillrave & ethno-house,detective,darkambient,chill,fantasy,minimal techno,special,night,tropical house,downtempo,lullaby,meditative,upbeat,glitch hop,fitness,neurofunk,sexual,indie rock,future pop,jazz,cyberpunk,melancholic,happy hardcore,family / kids,synths,electric guitar,comedy,psychedelic trance & psytrance,edm,psychedelic rock,calm,zen,bells,podcast,melodic house,ethnic percussion,nature,heavy,bassline,indie dance,techno,drumnbass,synth pop,vaporwave,sad,8-bit,chillgressive,deep,orchestral,futuristic,hardtechno,nostalgic,big room,sci-fi,tutorial,joyful,pads,minimal 170,drill,ethnic 108,amusing,sleepy ambient,psychill,italo disco,lofi,house,acoustic guitar,bassline house,rock,k-pop,synthwave,deep house,electronica,gabber,nightlife,sport & fitness,road trip,celebration,electro,disco house,electronic' -MUBERT_TAGS = np.array(MUBERT_TAGS_STRING.split(',')) \ No newline at end of file diff --git a/spaces/Miuzarte/SUI-svc-4.0/cluster/__init__.py b/spaces/Miuzarte/SUI-svc-4.0/cluster/__init__.py deleted file 
mode 100644 index f1b9bde04e73e9218a5d534227caa4c25332f424..0000000000000000000000000000000000000000 --- a/spaces/Miuzarte/SUI-svc-4.0/cluster/__init__.py +++ /dev/null @@ -1,29 +0,0 @@ -import numpy as np -import torch -from sklearn.cluster import KMeans - -def get_cluster_model(ckpt_path): - checkpoint = torch.load(ckpt_path) - kmeans_dict = {} - for spk, ckpt in checkpoint.items(): - km = KMeans(ckpt["n_features_in_"]) - km.__dict__["n_features_in_"] = ckpt["n_features_in_"] - km.__dict__["_n_threads"] = ckpt["_n_threads"] - km.__dict__["cluster_centers_"] = ckpt["cluster_centers_"] - kmeans_dict[spk] = km - return kmeans_dict - -def get_cluster_result(model, x, speaker): - """ - x: np.array [t, 256] - return cluster class result - """ - return model[speaker].predict(x) - -def get_cluster_center_result(model, x,speaker): - """x: np.array [t, 256]""" - predict = model[speaker].predict(x) - return model[speaker].cluster_centers_[predict] - -def get_center(model, x,speaker): - return model[speaker].cluster_centers_[x] diff --git a/spaces/Mountchicken/MAERec-Gradio/tools/dataset_converters/textrecog/ic11_converter.py b/spaces/Mountchicken/MAERec-Gradio/tools/dataset_converters/textrecog/ic11_converter.py deleted file mode 100644 index 3de125d39bd87c137b2ed1d470fa6bcfd19836ba..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/tools/dataset_converters/textrecog/ic11_converter.py +++ /dev/null @@ -1,66 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import argparse -import os.path as osp - -from mmocr.utils import dump_ocr_data - - -def convert_annotations(root_path, split): - """Convert original annotations to mmocr format. - - The annotation format of this dataset is as the following: - word_1.png, "flying" - word_2.png, "today" - word_3.png, "means" - See the format of converted annotation in mmocr.utils.dump_ocr_data. - - Args: - root_path (str): The root path of the dataset - split (str): The split of dataset. Namely: Train or Test - """ - assert isinstance(root_path, str) - assert isinstance(split, str) - - img_info = [] - with open( - osp.join(root_path, 'annotations', - f'Challenge1_{split}_Task3_GT.txt'), - encoding='"utf-8-sig') as f: - annos = f.readlines() - for anno in annos: - # text may contain comma ',' - dst_img_name, word = anno.split(', "') - word = word.replace('"\n', '') - - img_info.append({ - 'file_name': dst_img_name, - 'anno_info': [{ - 'text': word - }] - }) - - return img_info - - -def parse_args(): - parser = argparse.ArgumentParser( - description='Generate training and test set of IC11') - parser.add_argument('root_path', help='Root dir path of IC11') - args = parser.parse_args() - return args - - -def main(): - args = parse_args() - root_path = args.root_path - - for split in ['Train', 'Test']: - img_info = convert_annotations(root_path, split) - dump_ocr_data(img_info, - osp.join(root_path, f'{split.lower()}_label.json'), - 'textrecog') - print(f'{split} split converted.') - - -if __name__ == '__main__': - main() diff --git a/spaces/NCTCMumbai/NCTC/models/official/nlp/modeling/layers/position_embedding.py b/spaces/NCTCMumbai/NCTC/models/official/nlp/modeling/layers/position_embedding.py deleted file mode 100644 index 169e54de112d9a3ce65e9fa68f066a107d35c7a4..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/nlp/modeling/layers/position_embedding.py +++ /dev/null @@ -1,205 +0,0 @@ -# Copyright 2019 The TensorFlow Authors. All Rights Reserved. 
-# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== -"""Keras-based positional embedding layer.""" -# pylint: disable=g-classes-have-attributes -from __future__ import absolute_import -from __future__ import division -# from __future__ import google_type_annotations -from __future__ import print_function - -import math - -import tensorflow as tf - -from official.modeling import tf_utils - - -@tf.keras.utils.register_keras_serializable(package="Text") -class PositionEmbedding(tf.keras.layers.Layer): - """Creates a positional embedding. - - This layer creates a positional embedding as described in "BERT: Pre-training - of Deep Bidirectional Transformers for Language Understanding" - (https://arxiv.org/abs/1810.04805). - - This layer can be set up to either create a statically shaped slice or a - dynamically shaped slice. If `use_dynamic_slicing` is True, the input tensor - can have a dynamic 1st dimension, while if `use_dynamic_slicing` is False the - input size must be fixed. - - Arguments: - use_dynamic_slicing: Whether to use the dynamic slicing path. - max_sequence_length: The maximum size of the dynamic sequence. Only - applicable if `use_dynamic_slicing` is True. - initializer: The initializer to use for the embedding weights. Defaults to - "glorot_uniform". - """ - - def __init__(self, - initializer="glorot_uniform", - use_dynamic_slicing=False, - max_sequence_length=None, - **kwargs): - # We need to have a default dtype of float32, since the inputs (which Keras - # usually uses to infer the dtype) will always be int32. - if "dtype" not in kwargs: - kwargs["dtype"] = "float32" - - super(PositionEmbedding, self).__init__(**kwargs) - if use_dynamic_slicing and max_sequence_length is None: - raise ValueError( - "If `use_dynamic_slicing` is True, `max_sequence_length` must be set." - ) - self._max_sequence_length = max_sequence_length - self._initializer = tf.keras.initializers.get(initializer) - self._use_dynamic_slicing = use_dynamic_slicing - - def get_config(self): - config = { - "max_sequence_length": self._max_sequence_length, - "initializer": tf.keras.initializers.serialize(self._initializer), - "use_dynamic_slicing": self._use_dynamic_slicing, - } - base_config = super(PositionEmbedding, self).get_config() - return dict(list(base_config.items()) + list(config.items())) - - def build(self, input_shape): - """Implements build() for the layer.""" - dimension_list = input_shape.as_list() - - if len(dimension_list) != 3: - raise ValueError("PositionEmbedding expects a 3-dimensional input tensor " - "of shape [batch, sequence, width]") - seq_length = dimension_list[1] - width = dimension_list[2] - - # If we are not using dynamic slicing, we must assume that the sequence - # length is fixed and max_sequence_length should not be specified. 
- if not self._use_dynamic_slicing: - if seq_length is None: - raise ValueError( - "PositionEmbedding must have `use_dynamic_slicing` set " - "to True (and max_sequence_length set) when the " - "sequence (1st) dimension of the input is None.") - if self._max_sequence_length is not None: - raise ValueError( - "When `use_dynamic_slicing` is False, max_sequence_length should " - "not be specified and we ought to use seq_length to get the " - "variable shape.") - - if self._max_sequence_length is not None: - weight_sequence_length = self._max_sequence_length - else: - weight_sequence_length = seq_length - - self._position_embeddings = self.add_weight( - "embeddings", - shape=[weight_sequence_length, width], - initializer=self._initializer) - - super(PositionEmbedding, self).build(input_shape) - - def call(self, inputs): - """Implements call() for the layer.""" - input_shape = tf_utils.get_shape_list(inputs, expected_rank=3) - if self._use_dynamic_slicing: - position_embeddings = self._position_embeddings[:input_shape[1], :] - else: - position_embeddings = self._position_embeddings - - return tf.broadcast_to(position_embeddings, input_shape) - - -@tf.keras.utils.register_keras_serializable(package="Text") -class RelativePositionEmbedding(tf.keras.layers.Layer): - """Creates a positional embedding. - - This layer calculates the position encoding as a mix of sine and cosine - functions with geometrically increasing wavelengths. Defined and formulized in - "Attention is All You Need", section 3.5. - (https://arxiv.org/abs/1706.03762). - - Arguments: - hidden_size: Size of the hidden layer. - min_timescale: Minimum scale that will be applied at each position - max_timescale: Maximum scale that will be applied at each position. - """ - - def __init__(self, - hidden_size, - min_timescale=1.0, - max_timescale=1.0e4, - **kwargs): - # We need to have a default dtype of float32, since the inputs (which Keras - # usually uses to infer the dtype) will always be int32. - # We compute the positional encoding in float32 even if the model uses - # float16, as many of the ops used, like log and exp, are numerically - # unstable in float16. - if "dtype" not in kwargs: - kwargs["dtype"] = "float32" - - super(RelativePositionEmbedding, self).__init__(**kwargs) - self._hidden_size = hidden_size - self._min_timescale = min_timescale - self._max_timescale = max_timescale - - def get_config(self): - config = { - "hidden_size": self._hidden_size, - "min_timescale": self._min_timescale, - "max_timescale": self._max_timescale, - "length": self._length, - } - base_config = super(RelativePositionEmbedding, self).get_config() - return dict(list(base_config.items()) + list(config.items())) - - def call(self, inputs, length=None): - """Implements call() for the layer. - - Args: - inputs: An tensor whose second dimension will be used as `length`. If - `None`, the other `length` argument must be specified. - length: An optional integer specifying the number of positions. If both - `inputs` and `length` are spcified, `length` must be equal to the - second dimension of `inputs`. - - Returns: - A tensor in shape of [length, hidden_size]. - """ - if inputs is None and length is None: - raise ValueError( - "If inputs is None, `length` must be set in " - "RelativePositionEmbedding().") - if inputs is not None: - input_shape = tf_utils.get_shape_list(inputs) - if length is not None and length != input_shape[1]: - raise ValueError( - "If inputs is not None, `length` must equal to input_shape[1]." 
- ) - length = input_shape[1] - position = tf.cast(tf.range(length), tf.float32) - num_timescales = self._hidden_size // 2 - min_timescale, max_timescale = self._min_timescale, self._max_timescale - log_timescale_increment = ( - math.log(float(max_timescale) / float(min_timescale)) / - (tf.cast(num_timescales, tf.float32) - 1)) - inv_timescales = min_timescale * tf.exp( - tf.cast(tf.range(num_timescales), tf.float32) * - -log_timescale_increment) - scaled_time = tf.expand_dims(position, 1) * tf.expand_dims(inv_timescales, - 0) - position_embeddings = tf.concat([tf.sin(scaled_time), tf.cos(scaled_time)], - axis=1) - return position_embeddings diff --git a/spaces/NCTCMumbai/NCTC/models/research/autoaugment/data_utils.py b/spaces/NCTCMumbai/NCTC/models/research/autoaugment/data_utils.py deleted file mode 100644 index 9bf911560d10065a7b2acebb417828d47c176ea5..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/research/autoaugment/data_utils.py +++ /dev/null @@ -1,184 +0,0 @@ -# Copyright 2018 The TensorFlow Authors All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== - -"""Data utils for CIFAR-10 and CIFAR-100.""" - -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -import copy -import cPickle -import os -import augmentation_transforms -import numpy as np -import policies as found_policies -import tensorflow as tf - - -# pylint:disable=logging-format-interpolation - - -class DataSet(object): - """Dataset object that produces augmented training and eval data.""" - - def __init__(self, hparams): - self.hparams = hparams - self.epochs = 0 - self.curr_train_index = 0 - - all_labels = [] - - self.good_policies = found_policies.good_policies() - - # Determine how many databatched to load - num_data_batches_to_load = 5 - total_batches_to_load = num_data_batches_to_load - train_batches_to_load = total_batches_to_load - assert hparams.train_size + hparams.validation_size <= 50000 - if hparams.eval_test: - total_batches_to_load += 1 - # Determine how many images we have loaded - total_dataset_size = 10000 * num_data_batches_to_load - train_dataset_size = total_dataset_size - if hparams.eval_test: - total_dataset_size += 10000 - - if hparams.dataset == 'cifar10': - all_data = np.empty((total_batches_to_load, 10000, 3072), dtype=np.uint8) - elif hparams.dataset == 'cifar100': - assert num_data_batches_to_load == 5 - all_data = np.empty((1, 50000, 3072), dtype=np.uint8) - if hparams.eval_test: - test_data = np.empty((1, 10000, 3072), dtype=np.uint8) - if hparams.dataset == 'cifar10': - tf.logging.info('Cifar10') - datafiles = [ - 'data_batch_1', 'data_batch_2', 'data_batch_3', 'data_batch_4', - 'data_batch_5'] - - datafiles = datafiles[:train_batches_to_load] - if hparams.eval_test: - datafiles.append('test_batch') - num_classes = 10 - elif hparams.dataset == 'cifar100': - datafiles = ['train'] - if hparams.eval_test: - 
datafiles.append('test') - num_classes = 100 - else: - raise NotImplementedError('Unimplemented dataset: ', hparams.dataset) - if hparams.dataset != 'test': - for file_num, f in enumerate(datafiles): - d = unpickle(os.path.join(hparams.data_path, f)) - if f == 'test': - test_data[0] = copy.deepcopy(d['data']) - all_data = np.concatenate([all_data, test_data], axis=1) - else: - all_data[file_num] = copy.deepcopy(d['data']) - if hparams.dataset == 'cifar10': - labels = np.array(d['labels']) - else: - labels = np.array(d['fine_labels']) - nsamples = len(labels) - for idx in range(nsamples): - all_labels.append(labels[idx]) - - all_data = all_data.reshape(total_dataset_size, 3072) - all_data = all_data.reshape(-1, 3, 32, 32) - all_data = all_data.transpose(0, 2, 3, 1).copy() - all_data = all_data / 255.0 - mean = augmentation_transforms.MEANS - std = augmentation_transforms.STDS - tf.logging.info('mean:{} std: {}'.format(mean, std)) - - all_data = (all_data - mean) / std - all_labels = np.eye(num_classes)[np.array(all_labels, dtype=np.int32)] - assert len(all_data) == len(all_labels) - tf.logging.info( - 'In CIFAR10 loader, number of images: {}'.format(len(all_data))) - - # Break off test data - if hparams.eval_test: - self.test_images = all_data[train_dataset_size:] - self.test_labels = all_labels[train_dataset_size:] - - # Shuffle the rest of the data - all_data = all_data[:train_dataset_size] - all_labels = all_labels[:train_dataset_size] - np.random.seed(0) - perm = np.arange(len(all_data)) - np.random.shuffle(perm) - all_data = all_data[perm] - all_labels = all_labels[perm] - - # Break into train and val - train_size, val_size = hparams.train_size, hparams.validation_size - assert 50000 >= train_size + val_size - self.train_images = all_data[:train_size] - self.train_labels = all_labels[:train_size] - self.val_images = all_data[train_size:train_size + val_size] - self.val_labels = all_labels[train_size:train_size + val_size] - self.num_train = self.train_images.shape[0] - - def next_batch(self): - """Return the next minibatch of augmented data.""" - next_train_index = self.curr_train_index + self.hparams.batch_size - if next_train_index > self.num_train: - # Increase epoch number - epoch = self.epochs + 1 - self.reset() - self.epochs = epoch - batched_data = ( - self.train_images[self.curr_train_index: - self.curr_train_index + self.hparams.batch_size], - self.train_labels[self.curr_train_index: - self.curr_train_index + self.hparams.batch_size]) - final_imgs = [] - - images, labels = batched_data - for data in images: - epoch_policy = self.good_policies[np.random.choice( - len(self.good_policies))] - final_img = augmentation_transforms.apply_policy( - epoch_policy, data) - final_img = augmentation_transforms.random_flip( - augmentation_transforms.zero_pad_and_crop(final_img, 4)) - # Apply cutout - final_img = augmentation_transforms.cutout_numpy(final_img) - final_imgs.append(final_img) - batched_data = (np.array(final_imgs, np.float32), labels) - self.curr_train_index += self.hparams.batch_size - return batched_data - - def reset(self): - """Reset training data and index into the training data.""" - self.epochs = 0 - # Shuffle the training data - perm = np.arange(self.num_train) - np.random.shuffle(perm) - assert self.num_train == self.train_images.shape[ - 0], 'Error incorrect shuffling mask' - self.train_images = self.train_images[perm] - self.train_labels = self.train_labels[perm] - self.curr_train_index = 0 - - -def unpickle(f): - tf.logging.info('loading file: {}'.format(f)) - fo = 
tf.gfile.Open(f, 'r') - d = cPickle.load(fo) - fo.close() - return d diff --git a/spaces/NCTCMumbai/NCTC/models/research/cognitive_mapping_and_planning/tfcode/__init__.py b/spaces/NCTCMumbai/NCTC/models/research/cognitive_mapping_and_planning/tfcode/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/NarendraC/MyAIChatBot/README.md b/spaces/NarendraC/MyAIChatBot/README.md deleted file mode 100644 index 2ee4c73d5c1ad83edda2e4b5c0b6b191040c03e7..0000000000000000000000000000000000000000 --- a/spaces/NarendraC/MyAIChatBot/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: MyAIChatBot -emoji: 👀 -colorFrom: purple -colorTo: gray -sdk: gradio -sdk_version: 3.43.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Nyari/Super-Resolution-Anime-Diffusion/RealESRGANv030/realesrgan/data/realesrgan_dataset.py b/spaces/Nyari/Super-Resolution-Anime-Diffusion/RealESRGANv030/realesrgan/data/realesrgan_dataset.py deleted file mode 100644 index 5d2a2fbd7b19d1eb7e320a170531fbf676ce7cec..0000000000000000000000000000000000000000 --- a/spaces/Nyari/Super-Resolution-Anime-Diffusion/RealESRGANv030/realesrgan/data/realesrgan_dataset.py +++ /dev/null @@ -1,216 +0,0 @@ -import cv2 -import math -import numpy as np -import os -import os.path as osp -import random -import time -import torch -from basicsr.data.degradations import circular_lowpass_kernel, random_mixed_kernels -from basicsr.data.transforms import augment -from basicsr.utils import FileClient, get_root_logger, imfrombytes, img2tensor -from basicsr.utils.registry import DATASET_REGISTRY -from torch.utils import data as data - - -@DATASET_REGISTRY.register() -class RealESRGANDataset(data.Dataset): - """Dataset used for Real-ESRGAN model: - Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data. - - It loads gt (Ground-Truth) images, and augments them. - It also generates blur kernels and sinc kernels for generating low-quality images. - Note that the low-quality images are processed in tensors on GPUS for faster processing. - - Args: - opt (dict): Config for train datasets. It contains the following keys: - dataroot_gt (str): Data root path for gt. - meta_info (str): Path for meta information file. - io_backend (dict): IO backend type and other kwarg. - use_hflip (bool): Use horizontal flips. - use_rot (bool): Use rotation (use vertical flip and transposing h and w for implementation). - Please see more options in the codes. 
- """ - - def __init__(self, opt): - super(RealESRGANDataset, self).__init__() - self.opt = opt - self.file_client = None - self.io_backend_opt = opt["io_backend"] - self.gt_folder = opt["dataroot_gt"] - - # file client (lmdb io backend) - if self.io_backend_opt["type"] == "lmdb": - self.io_backend_opt["db_paths"] = [self.gt_folder] - self.io_backend_opt["client_keys"] = ["gt"] - if not self.gt_folder.endswith(".lmdb"): - raise ValueError( - f"'dataroot_gt' should end with '.lmdb', but received {self.gt_folder}" - ) - with open(osp.join(self.gt_folder, "meta_info.txt")) as fin: - self.paths = [line.split(".")[0] for line in fin] - else: - # disk backend with meta_info - # Each line in the meta_info describes the relative path to an image - with open(self.opt["meta_info"]) as fin: - paths = [line.strip().split(" ")[0] for line in fin] - self.paths = [os.path.join(self.gt_folder, v) for v in paths] - - # blur settings for the first degradation - self.blur_kernel_size = opt["blur_kernel_size"] - self.kernel_list = opt["kernel_list"] - self.kernel_prob = opt["kernel_prob"] # a list for each kernel probability - self.blur_sigma = opt["blur_sigma"] - self.betag_range = opt[ - "betag_range" - ] # betag used in generalized Gaussian blur kernels - self.betap_range = opt["betap_range"] # betap used in plateau blur kernels - self.sinc_prob = opt["sinc_prob"] # the probability for sinc filters - - # blur settings for the second degradation - self.blur_kernel_size2 = opt["blur_kernel_size2"] - self.kernel_list2 = opt["kernel_list2"] - self.kernel_prob2 = opt["kernel_prob2"] - self.blur_sigma2 = opt["blur_sigma2"] - self.betag_range2 = opt["betag_range2"] - self.betap_range2 = opt["betap_range2"] - self.sinc_prob2 = opt["sinc_prob2"] - - # a final sinc filter - self.final_sinc_prob = opt["final_sinc_prob"] - - self.kernel_range = [ - 2 * v + 1 for v in range(3, 11) - ] # kernel size ranges from 7 to 21 - # TODO: kernel range is now hard-coded, should be in the configure file - self.pulse_tensor = torch.zeros( - 21, 21 - ).float() # convolving with pulse tensor brings no blurry effect - self.pulse_tensor[10, 10] = 1 - - def __getitem__(self, index): - if self.file_client is None: - self.file_client = FileClient( - self.io_backend_opt.pop("type"), **self.io_backend_opt - ) - - # -------------------------------- Load gt images -------------------------------- # - # Shape: (h, w, c); channel order: BGR; image range: [0, 1], float32. - gt_path = self.paths[index] - # avoid errors caused by high latency in reading files - retry = 3 - while retry > 0: - try: - img_bytes = self.file_client.get(gt_path, "gt") - except (IOError, OSError) as e: - logger = get_root_logger() - logger.warn( - f"File client error: {e}, remaining retry times: {retry - 1}" - ) - # change another file to read - index = random.randint(0, self.__len__()) - gt_path = self.paths[index] - time.sleep(1) # sleep 1s for occasional server congestion - else: - break - finally: - retry -= 1 - img_gt = imfrombytes(img_bytes, float32=True) - - # -------------------- Do augmentation for training: flip, rotation -------------------- # - img_gt = augment(img_gt, self.opt["use_hflip"], self.opt["use_rot"]) - - # crop or pad to 400 - # TODO: 400 is hard-coded. 
You may change it accordingly - h, w = img_gt.shape[0:2] - crop_pad_size = 400 - # pad - if h < crop_pad_size or w < crop_pad_size: - pad_h = max(0, crop_pad_size - h) - pad_w = max(0, crop_pad_size - w) - img_gt = cv2.copyMakeBorder( - img_gt, 0, pad_h, 0, pad_w, cv2.BORDER_REFLECT_101 - ) - # crop - if img_gt.shape[0] > crop_pad_size or img_gt.shape[1] > crop_pad_size: - h, w = img_gt.shape[0:2] - # randomly choose top and left coordinates - top = random.randint(0, h - crop_pad_size) - left = random.randint(0, w - crop_pad_size) - img_gt = img_gt[top : top + crop_pad_size, left : left + crop_pad_size, ...] - - # ------------------------ Generate kernels (used in the first degradation) ------------------------ # - kernel_size = random.choice(self.kernel_range) - if np.random.uniform() < self.opt["sinc_prob"]: - # this sinc filter setting is for kernels ranging from [7, 21] - if kernel_size < 13: - omega_c = np.random.uniform(np.pi / 3, np.pi) - else: - omega_c = np.random.uniform(np.pi / 5, np.pi) - kernel = circular_lowpass_kernel(omega_c, kernel_size, pad_to=False) - else: - kernel = random_mixed_kernels( - self.kernel_list, - self.kernel_prob, - kernel_size, - self.blur_sigma, - self.blur_sigma, - [-math.pi, math.pi], - self.betag_range, - self.betap_range, - noise_range=None, - ) - # pad kernel - pad_size = (21 - kernel_size) // 2 - kernel = np.pad(kernel, ((pad_size, pad_size), (pad_size, pad_size))) - - # ------------------------ Generate kernels (used in the second degradation) ------------------------ # - kernel_size = random.choice(self.kernel_range) - if np.random.uniform() < self.opt["sinc_prob2"]: - if kernel_size < 13: - omega_c = np.random.uniform(np.pi / 3, np.pi) - else: - omega_c = np.random.uniform(np.pi / 5, np.pi) - kernel2 = circular_lowpass_kernel(omega_c, kernel_size, pad_to=False) - else: - kernel2 = random_mixed_kernels( - self.kernel_list2, - self.kernel_prob2, - kernel_size, - self.blur_sigma2, - self.blur_sigma2, - [-math.pi, math.pi], - self.betag_range2, - self.betap_range2, - noise_range=None, - ) - - # pad kernel - pad_size = (21 - kernel_size) // 2 - kernel2 = np.pad(kernel2, ((pad_size, pad_size), (pad_size, pad_size))) - - # ------------------------------------- the final sinc kernel ------------------------------------- # - if np.random.uniform() < self.opt["final_sinc_prob"]: - kernel_size = random.choice(self.kernel_range) - omega_c = np.random.uniform(np.pi / 3, np.pi) - sinc_kernel = circular_lowpass_kernel(omega_c, kernel_size, pad_to=21) - sinc_kernel = torch.FloatTensor(sinc_kernel) - else: - sinc_kernel = self.pulse_tensor - - # BGR to RGB, HWC to CHW, numpy to tensor - img_gt = img2tensor([img_gt], bgr2rgb=True, float32=True)[0] - kernel = torch.FloatTensor(kernel) - kernel2 = torch.FloatTensor(kernel2) - - return_d = { - "gt": img_gt, - "kernel1": kernel, - "kernel2": kernel2, - "sinc_kernel": sinc_kernel, - "gt_path": gt_path, - } - return return_d - - def __len__(self): - return len(self.paths) diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/joint_alignment_translation/README.md b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/joint_alignment_translation/README.md deleted file mode 100644 index cd9c0ea65f5292198296a8f427b42e01b584e2d9..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/joint_alignment_translation/README.md +++ /dev/null @@ -1,89 +0,0 @@ -# Jointly Learning to Align and Translate with Transformer Models (Garg et al., 2019) - -This page includes 
instructions for training models described in [Jointly Learning to Align and Translate with Transformer Models (Garg et al., 2019)](https://arxiv.org/abs/1909.02074). - -## Training a joint alignment-translation model on WMT'18 En-De - -##### 1. Extract and preprocess the WMT'18 En-De data -```bash -./prepare-wmt18en2de_no_norm_no_escape_no_agressive.sh -``` - -##### 2. Generate alignments from statistical alignment toolkits e.g. Giza++/FastAlign. -In this example, we use FastAlign. -```bash -git clone git@github.com:clab/fast_align.git -pushd fast_align -mkdir build -cd build -cmake .. -make -popd -ALIGN=fast_align/build/fast_align -paste bpe.32k/train.en bpe.32k/train.de | awk -F '\t' '{print $1 " ||| " $2}' > bpe.32k/train.en-de -$ALIGN -i bpe.32k/train.en-de -d -o -v > bpe.32k/train.align -``` - -##### 3. Preprocess the dataset with the above generated alignments. -```bash -fairseq-preprocess \ - --source-lang en --target-lang de \ - --trainpref bpe.32k/train \ - --validpref bpe.32k/valid \ - --testpref bpe.32k/test \ - --align-suffix align \ - --destdir binarized/ \ - --joined-dictionary \ - --workers 32 -``` - -##### 4. Train a model -```bash -fairseq-train \ - binarized \ - --arch transformer_wmt_en_de_big_align --share-all-embeddings \ - --optimizer adam --adam-betas '(0.9, 0.98)' --clip-norm 0.0 --activation-fn relu\ - --lr 0.0002 --lr-scheduler inverse_sqrt --warmup-updates 4000 --warmup-init-lr 1e-07 \ - --dropout 0.3 --attention-dropout 0.1 --weight-decay 0.0 \ - --max-tokens 3500 --label-smoothing 0.1 \ - --save-dir ./checkpoints --log-interval 1000 --max-update 60000 \ - --keep-interval-updates -1 --save-interval-updates 0 \ - --load-alignments --criterion label_smoothed_cross_entropy_with_alignment \ - --fp16 -``` - -Note that the `--fp16` flag requires you have CUDA 9.1 or greater and a Volta GPU or newer. - -If you want to train the above model with big batches (assuming your machine has 8 GPUs): -- add `--update-freq 8` to simulate training on 8x8=64 GPUs -- increase the learning rate; 0.0007 works well for big batches - -##### 5. Evaluate and generate the alignments (BPE level) -```bash -fairseq-generate \ - binarized --gen-subset test --print-alignment \ - --source-lang en --target-lang de \ - --path checkpoints/checkpoint_best.pt --beam 5 --nbest 1 -``` - -##### 6. Other resources. -The code for: -1. preparing alignment test sets -2. converting BPE level alignments to token level alignments -3. symmetrizing bidirectional alignments -4. 
evaluating alignments using AER metric -can be found [here](https://github.com/lilt/alignment-scripts) - -## Citation - -```bibtex -@inproceedings{garg2019jointly, - title = {Jointly Learning to Align and Translate with Transformer Models}, - author = {Garg, Sarthak and Peitz, Stephan and Nallasamy, Udhyakumar and Paulik, Matthias}, - booktitle = {Conference on Empirical Methods in Natural Language Processing (EMNLP)}, - address = {Hong Kong}, - month = {November}, - url = {https://arxiv.org/abs/1909.02074}, - year = {2019}, -} -``` diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/model_parallel/models/pipeline_parallel_transformer/layers.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/model_parallel/models/pipeline_parallel_transformer/layers.py deleted file mode 100644 index eb81ded341257ba0a43c4d0867e8f3c83f276bc7..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/model_parallel/models/pipeline_parallel_transformer/layers.py +++ /dev/null @@ -1,600 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math -from collections import namedtuple - -import torch -import torch.nn as nn -import torch.nn.functional as F -from fairseq import options, utils -from fairseq.modules import ( - AdaptiveSoftmax, - LayerNorm, - MultiheadAttention, - PositionalEmbedding, -) - - -EncoderOut = namedtuple( - "TransformerEncoderOut", - [ - "encoder_out", # T x B x C - "encoder_padding_mask", # B x T - "encoder_embedding", # B x T x C - "encoder_states", # List[T x B x C] - ], -) - - -class TransformerEncoderEmbedding(nn.Module): - """ Encoder Embedding + Positional Embedding """ - - def __init__(self, args, embed_tokens): - super().__init__() - self.dropout = args.dropout - self.max_source_positions = args.max_source_positions - self.embed_tokens = embed_tokens - if isinstance(embed_tokens, nn.ModuleList): - self.padding_idx = embed_tokens[0].padding_idx - embed_dim = sum(e.embedding_dim for e in embed_tokens) - else: - self.padding_idx = embed_tokens.padding_idx - embed_dim = embed_tokens.embedding_dim - self.embed_scale = math.sqrt(embed_dim) - self.embed_positions = ( - PositionalEmbedding( - args.max_source_positions, - embed_dim, - self.padding_idx, - learned=args.encoder_learned_pos, - ) - if not args.no_token_positional_embeddings - else None - ) - if getattr(args, "layernorm_embedding", False): - self.layernorm_embedding = LayerNorm(embed_dim) - else: - self.layernorm_embedding = None - - def forward(self, input): - # embed tokens and positions - src_tokens = input[0] - prev_output_tokens = input[2] - if isinstance(self.embed_tokens, nn.ModuleList): - x_embed_list = [] - for embed_tokens_part in self.embed_tokens: - x_embed_list.append(embed_tokens_part(src_tokens)) - - embedded = torch.cat(x_embed_list, dim=-1) - else: - embedded = self.embed_tokens(src_tokens) - x = embed = self.embed_scale * embedded - if self.embed_positions is not None: - x = embed + self.embed_positions(src_tokens) - if self.layernorm_embedding: - x = self.layernorm_embedding(x) - x = F.dropout(x, p=self.dropout, training=self.training) - # B x T x C -> T x B x C - x = x.transpose(0, 1) - - # compute padding mask - encoder_padding_mask = src_tokens.eq(self.padding_idx) - return (x, encoder_padding_mask, prev_output_tokens) - - -class TransformerEncoderLayerNorm(nn.Module): - """ - Layer norm at the the end of all encoder 
layers if - args.encoder_enormalize_before = True - """ - - def __init__(self, args, embed_dim): - super().__init__() - if args.encoder_normalize_before: - self.layer_norm = LayerNorm(embed_dim) - else: - self.layer_norm = None - - def forward(self, input): - x = input[0] - encoder_padding_mask = input[1] - prev_output_tokens = input[2] - if self.layer_norm: - x = self.layer_norm(x) - # keeping track of the incremental_state is not supported yet - return (x, encoder_padding_mask, prev_output_tokens) - - -class TransformerDecoderEmbedding(nn.Module): - """ Decoder Embedding + Positional Embedding """ - - def __init__(self, args, embed_tokens): - super().__init__() - self.dropout = args.dropout - self.share_input_output_embed = args.share_decoder_input_output_embed - input_embed_dim = ( - sum(e.embedding_dim for e in embed_tokens) - if isinstance(embed_tokens, nn.ModuleList) - else embed_tokens.embedding_dim - ) - embed_dim = args.decoder_embed_dim - self.output_embed_dim = args.decoder_output_dim - - padding_idx = ( - embed_tokens[0].padding_idx - if isinstance(embed_tokens, nn.ModuleList) - else embed_tokens.padding_idx - ) - self.max_target_positions = args.max_target_positions - - self.embed_tokens = embed_tokens - self.embed_scale = math.sqrt(embed_dim) # todo: try with input_embed_dim - - self.project_in_dim = ( - Linear(input_embed_dim, embed_dim, bias=False) - if embed_dim != input_embed_dim - else None - ) - - self.embed_positions = ( - PositionalEmbedding( - args.max_target_positions, - embed_dim, - padding_idx, - learned=args.decoder_learned_pos, - ) - if not args.no_token_positional_embeddings - else None - ) - - def forward(self, input): - mt_task = False - if isinstance(input, tuple): - if len(input) == 3: - encoder_out = input[0] - encoder_padding_mask = input[1] - prev_output_tokens = input[2] - incremental_state = None # Hardcoding to avoid passing of None objects - mt_task = True - else: - # HACK for now, need to fix (TODO sidgoyal) - prev_output_tokens = input[0] - # discard "src_lengths" - encoder_out = None - encoder_padding_mask = None - incremental_state = None - - else: - prev_output_tokens = input - encoder_out = None - encoder_padding_mask = None - incremental_state = None - - positions = ( - self.embed_positions( - prev_output_tokens, - incremental_state=incremental_state, - ) - if self.embed_positions is not None - else None - ) - - if incremental_state is not None: - prev_output_tokens = prev_output_tokens[:, -1:] - if positions is not None: - positions = positions[:, -1:] - - # embed tokens and positions - - if isinstance(self.embed_tokens, nn.ModuleList): - x_embed_list = [] - for embed_tokens_part in self.embed_tokens: - x_embed_list.append(embed_tokens_part(prev_output_tokens)) - - x = self.embed_scale * torch.cat(x_embed_list, dim=-1) - else: - x = self.embed_scale * self.embed_tokens(prev_output_tokens) - - if self.project_in_dim is not None: - x = self.project_in_dim(x) - - if positions is not None: - x += positions - x = F.dropout(x, p=self.dropout, training=self.training) - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - if mt_task: - return (x, encoder_out, encoder_padding_mask) - return x - - -class TransformerDecoderOutputLayer(nn.Module): - def __init__(self, args, embed_tokens, dictionary): - super().__init__() - self.share_input_output_embed = args.share_decoder_input_output_embed - self.embed_tokens = embed_tokens - self.output_embed_dim = args.decoder_output_dim - embed_dim = args.decoder_embed_dim - - self.project_out_dim = ( - 
Linear(embed_dim, self.output_embed_dim, bias=False) - if embed_dim != self.output_embed_dim and not args.tie_adaptive_weights - else None - ) - self.adaptive_softmax = None - if args.adaptive_softmax_cutoff is not None: - assert not isinstance(embed_tokens, nn.ModuleList) - self.adaptive_softmax = AdaptiveSoftmax( - len(dictionary), - self.output_embed_dim, - options.eval_str_list(args.adaptive_softmax_cutoff, type=int), - dropout=args.adaptive_softmax_dropout, - adaptive_inputs=embed_tokens if args.tie_adaptive_weights else None, - factor=args.adaptive_softmax_factor, - tie_proj=args.tie_adaptive_proj, - ) - elif not self.share_input_output_embed: - self.embed_tokens = nn.Parameter( - torch.Tensor(len(dictionary), self.output_embed_dim) - ) - nn.init.normal_( - self.embed_tokens, mean=0, std=self.output_embed_dim ** -0.5 - ) - - if args.decoder_normalize_before and not getattr( - args, "no_decoder_final_norm", False - ): - self.layer_norm = LayerNorm(embed_dim) - else: - self.layer_norm = None - - def forward(self, input, apply_final_proj=True): - if isinstance(input, tuple): - x = input[0] - else: - x = input - - if self.layer_norm: - x = self.layer_norm(x) - - # T x B x C -> B x T x C - x = x.transpose(0, 1) - - if self.project_out_dim is not None: - x = self.project_out_dim(x) - if apply_final_proj: - x = self.output_layer(x) - return x - - def output_layer(self, features, **kwargs): - """Project features to the vocabulary size.""" - if self.adaptive_softmax is None: - # project back to size of vocabulary - if self.share_input_output_embed: - if isinstance(self.embed_tokens, nn.ModuleList): - output = None - for i, emb in enumerate(self.embed_tokens): - sidx = i * emb.embedding_dim - eidx = (i + 1) * emb.embedding_dim - if output is None: - output = F.linear(features[:, :, sidx:eidx], emb.weight) - else: - output += F.linear(features[:, :, sidx:eidx], emb.weight) - - return output - else: - return F.linear(features, self.embed_tokens.weight) - else: - return F.linear(features, self.embed_tokens) - else: - return features - - -class TransformerEncoderLayer(nn.Module): - """Encoder layer block. - In the original paper each operation (multi-head attention or FFN) is - postprocessed with: `dropout -> add residual -> layernorm`. In the - tensor2tensor code they suggest that learning is more robust when - preprocessing each layer with layernorm and postprocessing with: - `dropout -> add residual`. We default to the approach in the paper, but the - tensor2tensor approach can be enabled by setting - *args.encoder_normalize_before* to ``True``. 
- - Args: - args (argparse.Namespace): parsed command-line arguments - """ - - def __init__(self, args): - super().__init__() - self.embed_dim = args.encoder_embed_dim - self.self_attn = MultiheadAttention( - self.embed_dim, - args.encoder_attention_heads, - dropout=args.attention_dropout, - self_attention=True, - ) - self.self_attn_layer_norm = LayerNorm(self.embed_dim) - self.dropout = args.dropout - self.activation_fn = utils.get_activation_fn( - activation=getattr(args, "activation_fn", "relu") - ) - self.activation_dropout = getattr(args, "activation_dropout", 0) - if self.activation_dropout == 0: - # for backwards compatibility with models that use args.relu_dropout - self.activation_dropout = getattr(args, "relu_dropout", 0) - self.normalize_before = args.encoder_normalize_before - self.fc1 = Linear(self.embed_dim, args.encoder_ffn_embed_dim) - self.fc2 = Linear(args.encoder_ffn_embed_dim, self.embed_dim) - self.final_layer_norm = LayerNorm(self.embed_dim) - - def upgrade_state_dict_named(self, state_dict, name): - """ - Rename layer norm states from `...layer_norms.0.weight` to - `...self_attn_layer_norm.weight` and `...layer_norms.1.weight` to - `...final_layer_norm.weight` - """ - layer_norm_map = {"0": "self_attn_layer_norm", "1": "final_layer_norm"} - for old, new in layer_norm_map.items(): - for m in ("weight", "bias"): - k = "{}.layer_norms.{}.{}".format(name, old, m) - if k in state_dict: - state_dict["{}.{}.{}".format(name, new, m)] = state_dict[k] - del state_dict[k] - - def forward(self, input): - """ - Args: - input (Tuple): - input[0] (Tensor): input to the layer of shape `(seq_len, batch, embed_dim)` - input[1] (ByteTensor/FloatTensor): encoder padding mask - - binary ByteTensor of shape `(batch, src_len)` where padding elements - are indicated by ``1``. - input[2] (LongTensor): previous decoder outputs of shape - `(batch, tgt_len)`, for teacher forcing) - Returns: - output (Tuple): - output[0] (Tensor): encoded output of shape `(batch, src_len, embed_dim)` - output[1] (ByteTensor/FloatTensor): encoder padding mask - output[2] (LongTensor): previous decoder outputs - """ - x = input[0] - encoder_padding_mask = input[1] - prev_output_tokens = input[2] - residual = x - x = self.maybe_layer_norm(self.self_attn_layer_norm, x, before=True) - x, _ = self.self_attn( - query=x, key=x, value=x, key_padding_mask=encoder_padding_mask - ) - x = F.dropout(x, p=self.dropout, training=self.training) - x = residual + x - x = self.maybe_layer_norm(self.self_attn_layer_norm, x, after=True) - - residual = x - x = self.maybe_layer_norm(self.final_layer_norm, x, before=True) - x = self.activation_fn(self.fc1(x)) - x = F.dropout(x, p=self.activation_dropout, training=self.training) - x = self.fc2(x) - x = F.dropout(x, p=self.dropout, training=self.training) - x = residual + x - x = self.maybe_layer_norm(self.final_layer_norm, x, after=True) - return (x, encoder_padding_mask, prev_output_tokens) - - def maybe_layer_norm(self, layer_norm, x, before=False, after=False): - assert before ^ after - if after ^ self.normalize_before: - return layer_norm(x) - else: - return x - - -class TransformerDecoderLayer(nn.Module): - """Decoder layer block. - - In the original paper each operation (multi-head attention, encoder - attention or FFN) is postprocessed with: `dropout -> add residual -> - layernorm`. In the tensor2tensor code they suggest that learning is more - robust when preprocessing each layer with layernorm and postprocessing with: - `dropout -> add residual`. 
We default to the approach in the paper, but the - tensor2tensor approach can be enabled by setting - *args.decoder_normalize_before* to ``True``. - - Args: - args (argparse.Namespace): parsed command-line arguments - no_encoder_attn (bool, optional): whether to attend to encoder outputs - (default: False). - """ - - def __init__( - self, args, no_encoder_attn=False, add_bias_kv=False, add_zero_attn=False - ): - super().__init__() - self.embed_dim = args.decoder_embed_dim - self.self_attn = MultiheadAttention( - embed_dim=self.embed_dim, - num_heads=args.decoder_attention_heads, - dropout=args.attention_dropout, - add_bias_kv=add_bias_kv, - add_zero_attn=add_zero_attn, - self_attention=True, - ) - self.dropout = args.dropout - self.activation_fn = utils.get_activation_fn( - activation=getattr(args, "activation_fn", "relu") - ) - self.activation_dropout = getattr(args, "activation_dropout", 0) - if self.activation_dropout == 0: - # for backwards compatibility with models that use args.relu_dropout - self.activation_dropout = getattr(args, "relu_dropout", 0) - self.normalize_before = args.decoder_normalize_before - - # use layerNorm rather than FusedLayerNorm for exporting. - # char_inputs can be used to determint this. - # TODO remove this once we update apex with the fix - export = getattr(args, "char_inputs", False) - self.self_attn_layer_norm = LayerNorm(self.embed_dim, export=export) - - if no_encoder_attn: - self.encoder_attn = None - self.encoder_attn_layer_norm = None - else: - self.encoder_attn = MultiheadAttention( - self.embed_dim, - args.decoder_attention_heads, - kdim=getattr(args, "encoder_embed_dim", None), - vdim=getattr(args, "encoder_embed_dim", None), - dropout=args.attention_dropout, - encoder_decoder_attention=True, - ) - self.encoder_attn_layer_norm = LayerNorm(self.embed_dim, export=export) - - self.fc1 = Linear(self.embed_dim, args.decoder_ffn_embed_dim) - self.fc2 = Linear(args.decoder_ffn_embed_dim, self.embed_dim) - - self.final_layer_norm = LayerNorm(self.embed_dim, export=export) - self.need_attn = True - - self.onnx_trace = False - - def prepare_for_onnx_export_(self): - self.onnx_trace = True - - def forward(self, input): - """ - Args: - input (Tuple): - input[0] (Tensor): input to the layer of shape `(seq_len, batch, embed_dim)` - input[1] (Tensor): encoder output of shape `(batch, src_len, embed_dim)` - input[2] (ByteTensor/FloatTensor): encoder padding mask - - binary ByteTensor of shape `(batch, src_len)` where padding elements - are indicated by ``1``. 
- Returns: - output (Tuple): - output[0] (Tensor): encoded output of shape `(batch, src_len, embed_dim)` - output[1] (ByteTensor/FloatTensor): encoder padding mask - output[2] (LongTensor): previous decoder outputs - """ - # Note: incremental state is not yet supported - mt_task = False - if isinstance(input, tuple): - x = input[0] - encoder_out = input[1] - encoder_padding_mask = input[2] - incremental_state = None - mt_task = True - else: - x = input - encoder_out = None - encoder_padding_mask = None - incremental_state = None - - if incremental_state is None: - self_attn_mask = self.buffered_future_mask(x) - else: - self_attn_mask = None - - # TODO: add back prev_self_attn_state, prev_attn_state, - # self_attn_padding_mask - prev_self_attn_state = None - prev_attn_state = None - self_attn_padding_mask = None - - residual = x - x = self.maybe_layer_norm(self.self_attn_layer_norm, x, before=True) - if prev_self_attn_state is not None: - if incremental_state is None: - incremental_state = {} - prev_key, prev_value = prev_self_attn_state - saved_state = {"prev_key": prev_key, "prev_value": prev_value} - self.self_attn._set_input_buffer(incremental_state, saved_state) - x, attn = self.self_attn( - query=x, - key=x, - value=x, - key_padding_mask=self_attn_padding_mask, - incremental_state=incremental_state, - need_weights=False, - attn_mask=self_attn_mask, - ) - x = F.dropout(x, p=self.dropout, training=self.training) - x = residual + x - x = self.maybe_layer_norm(self.self_attn_layer_norm, x, after=True) - - if self.encoder_attn is not None: - residual = x - x = self.maybe_layer_norm(self.encoder_attn_layer_norm, x, before=True) - if prev_attn_state is not None: - if incremental_state is None: - incremental_state = {} - prev_key, prev_value = prev_attn_state - saved_state = {"prev_key": prev_key, "prev_value": prev_value} - self.encoder_attn._set_input_buffer(incremental_state, saved_state) - x, attn = self.encoder_attn( - query=x, - key=encoder_out, - value=encoder_out, - key_padding_mask=encoder_padding_mask, - incremental_state=incremental_state, - static_kv=True, - need_weights=(not self.training and self.need_attn), - ) - x = F.dropout(x, p=self.dropout, training=self.training) - x = residual + x - x = self.maybe_layer_norm(self.encoder_attn_layer_norm, x, after=True) - - residual = x - x = self.maybe_layer_norm(self.final_layer_norm, x, before=True) - x = self.activation_fn(self.fc1(x)) - x = F.dropout(x, p=self.activation_dropout, training=self.training) - x = self.fc2(x) - x = F.dropout(x, p=self.dropout, training=self.training) - x = residual + x - x = self.maybe_layer_norm(self.final_layer_norm, x, after=True) - - if mt_task: - return (x, encoder_out, encoder_padding_mask) - return x - - def buffered_future_mask(self, tensor): - dim = tensor.size(0) - if ( - not hasattr(self, "_future_mask") - or self._future_mask is None - or self._future_mask.device != tensor.device - ): - self._future_mask = torch.triu( - utils.fill_with_neg_inf(tensor.new(dim, dim)), 1 - ) - if self._future_mask.size(0) < dim: - self._future_mask = torch.triu( - utils.fill_with_neg_inf(self._future_mask.resize_(dim, dim)), 1 - ) - return self._future_mask[:dim, :dim] - - def maybe_layer_norm(self, layer_norm, x, before=False, after=False): - assert before ^ after - if after ^ self.normalize_before: - return layer_norm(x) - else: - return x - - def make_generation_fast_(self, need_attn=False, **kwargs): - self.need_attn = need_attn - - -def Embedding(num_embeddings, embedding_dim, padding_idx): - m = 
nn.Embedding(num_embeddings, embedding_dim, padding_idx=padding_idx) - nn.init.normal_(m.weight, mean=0, std=embedding_dim ** -0.5) - nn.init.constant_(m.weight[padding_idx], 0) - return m - - -def Linear(in_features, out_features, bias=True): - m = nn.Linear(in_features, out_features, bias) - nn.init.xavier_uniform_(m.weight) - if bias: - nn.init.constant_(m.bias, 0.0) - return m diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/wav2vec/unsupervised/scripts/prepare_text.sh b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/wav2vec/unsupervised/scripts/prepare_text.sh deleted file mode 100644 index 1caf13cb6a2a0bd84e5322c92124b2fa37368f9a..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/wav2vec/unsupervised/scripts/prepare_text.sh +++ /dev/null @@ -1,82 +0,0 @@ -#!/usr/bin/env zsh -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -lg=$1 -text_path=$2 -target_dir=$3 -min_phones=$4 -phonemizer=$5 -lid_path=$6 - -if [ -z "$lid_path" ]; then - lid_path="lid.187.bin" -fi - -ph_lg=${lg:l} -if test "$lg" = 'fr'; then - ph_lg='fr-fr' -elif test "$lg" = 'en'; then - ph_lg='en-us' -elif test "$lg" = 'pt'; then - ph_lg='pt-br' -fi - -ESPEAK_PATH='' -if test "$phonemizer" = 'espeak'; then - ESPEAK_PATH=$(which espeak) -elif test "$phonemizer" = 'espeak-ng'; then - ESPEAK_PATH=$(which espeak-ng) -elif test "$phonemizer" = 'G2P'; then - ESPEAK_PATH='' -else - echo "Unknown phonemizer $phonemizer. Valid options are espeak, espean-ng and G2P" - exit 1 -fi - -echo $lg -echo $ph_lg -echo $text_path -echo $target_dir -echo "min phone seen threshold is $min_phones" - -mkdir -p $target_dir -python $FAIRSEQ_ROOT/examples/wav2vec/unsupervised/scripts/normalize_and_filter_text.py --lang $lg --fasttext-model $lid_path < $text_path | grep -v '\-\-\-' >! $target_dir/lm.upper.lid.txt -python $FAIRSEQ_ROOT/fairseq_cli/preprocess.py --dataset-impl mmap --trainpref $target_dir/lm.upper.lid.txt --only-source --destdir $target_dir --thresholdsrc 2 --padding-factor 1 --dict-only -cut -f1 -d' ' $target_dir/dict.txt | grep -v -x '[[:punct:]]*' | grep -Pv '\d\d\d\d\d+' >! $target_dir/words.txt - - -if [ -z "$ESPEAK_PATH" ]; then - python $FAIRSEQ_ROOT/examples/wav2vec/unsupervised/scripts/g2p_wrd_to_phn.py --compact < $target_dir/words.txt > $target_dir/phones.txt -else - # echoing 1 into corpus will prevent the mismatch lines between lexicon and phones in case the phonemizer fails - one=$(echo "1" | PHONEMIZER_ESPEAK_PATH=$ESPEAK_PATH phonemize -p ' ' -w '' -l $ph_lg --language-switch remove-flags) - sed 's/$/ 1/' $target_dir/words.txt | PHONEMIZER_ESPEAK_PATH=$ESPEAK_PATH phonemize -o $target_dir/phones.txt -p ' ' -w '' -l $ph_lg -j 70 --language-switch remove-flags - echo "one is ${one}" - sed -i "s/${one}$//" $target_dir/phones.txt -fi - -paste $target_dir/words.txt $target_dir/phones.txt >! $target_dir/lexicon.lst - -python $FAIRSEQ_ROOT/fairseq_cli/preprocess.py --dataset-impl mmap --trainpref $target_dir/phones.txt --only-source --destdir $target_dir/phones --thresholdsrc $min_phones --padding-factor 1 --dict-only - -python $FAIRSEQ_ROOT/examples/wav2vec/unsupervised/scripts/filter_lexicon.py -d $target_dir/phones/dict.txt < $target_dir/lexicon.lst >! 
$target_dir/lexicon_filtered.lst -python $FAIRSEQ_ROOT/examples/wav2vec/unsupervised/scripts/phonemize_with_sil.py -s 0.25 --surround --lexicon $target_dir/lexicon_filtered.lst < $target_dir/lm.upper.lid.txt >! $target_dir/phones/lm.phones.filtered.txt -cp $target_dir/phones/dict.txt $target_dir/phones/dict.phn.txt -echo " 0" >> $target_dir/phones/dict.phn.txt -python $FAIRSEQ_ROOT/fairseq_cli/preprocess.py --dataset-impl mmap --trainpref $target_dir/phones/lm.phones.filtered.txt --workers 70 --only-source --destdir $target_dir/phones --srcdict $target_dir/phones/dict.phn.txt - -$KENLM_ROOT/lmplz -o 4 < $target_dir/lm.upper.lid.txt --discount_fallback --prune 0 0 0 3 >! $target_dir/kenlm.wrd.o40003.arpa -$KENLM_ROOT/build_binary $target_dir/kenlm.wrd.o40003.arpa $target_dir/kenlm.wrd.o40003.bin - -lg=$lg python $FAIRSEQ_ROOT/examples/speech_recognition/kaldi/kaldi_initializer.py kaldi_root=$KALDI_ROOT fst_dir=$target_dir/fst/phn_to_words_sil lm_arpa=$target_dir/kenlm.wrd.o40003.arpa wav2letter_lexicon=$target_dir/lexicon_filtered.lst data_dir=$target_dir/phones in_labels=phn "blank_symbol=''" -lg=$lg python $FAIRSEQ_ROOT/examples/speech_recognition/kaldi/kaldi_initializer.py kaldi_root=$KALDI_ROOT fst_dir=$target_dir/fst/phn_to_words lm_arpa=$target_dir/kenlm.wrd.o40003.arpa wav2letter_lexicon=$target_dir/lexicon_filtered.lst data_dir=$target_dir/phones in_labels=phn - -$KENLM_ROOT/lmplz -o 4 < $target_dir/phones/lm.phones.filtered.txt --discount_fallback >! $target_dir/phones/lm.phones.filtered.04.arpa -$KENLM_ROOT/build_binary $target_dir/phones/lm.phones.filtered.04.arpa $target_dir/phones/lm.phones.filtered.04.bin -$KENLM_ROOT/lmplz -o 6 < $target_dir/phones/lm.phones.filtered.txt --discount_fallback >! $target_dir/phones/lm.phones.filtered.06.arpa -$KENLM_ROOT/build_binary $target_dir/phones/lm.phones.filtered.06.arpa $target_dir/phones/lm.phones.filtered.06.bin - -lg=$lg python $FAIRSEQ_ROOT/examples/speech_recognition/kaldi/kaldi_initializer.py kaldi_root=$KALDI_ROOT fst_dir=$target_dir/fst/phn_to_phn_sil lm_arpa=$target_dir/phones/lm.phones.filtered.06.arpa data_dir=$target_dir/phones in_labels=phn "blank_symbol=''" diff --git a/spaces/OkamiFeng/Bark-with-Voice-Cloning/training/train.py b/spaces/OkamiFeng/Bark-with-Voice-Cloning/training/train.py deleted file mode 100644 index be0cccc6145b46d026831cb71f198d2292fae931..0000000000000000000000000000000000000000 --- a/spaces/OkamiFeng/Bark-with-Voice-Cloning/training/train.py +++ /dev/null @@ -1,47 +0,0 @@ -import os -import fnmatch -import shutil - -import numpy -import torchaudio -import gradio - -from bark.hubert.pre_kmeans_hubert import CustomHubert -from bark.hubert.customtokenizer import auto_train -from tqdm.auto import tqdm - - -def training_prepare_files(path, model,progress=gradio.Progress(track_tqdm=True)): - - semanticsfolder = "./training/data/output" - wavfolder = "./training/data/output_wav" - ready = os.path.join(path, 'ready') - - testfiles = fnmatch.filter(os.listdir(ready), '*.npy') - if(len(testfiles) < 1): - # prepare and copy for training - hubert_model = CustomHubert(checkpoint_path=model) - - wavfiles = fnmatch.filter(os.listdir(wavfolder), '*.wav') - for i, f in tqdm(enumerate(wavfiles), total=len(wavfiles)): - semaname = '.'.join(f.split('.')[:-1]) # Cut off the extension - semaname = f'{semaname}.npy' - semafilename = os.path.join(semanticsfolder, semaname) - if not os.path.isfile(semafilename): - print(f'Skipping {f} no semantics pair found!') - continue - - print('Processing', f) - wav, sr = 
torchaudio.load(os.path.join(wavfolder, f)) - if wav.shape[0] == 2: # Stereo to mono if needed - wav = wav.mean(0, keepdim=True) - output = hubert_model.forward(wav, input_sample_hz=sr) - out_array = output.cpu().numpy() - fname = f'{i}_semantic_features.npy' - numpy.save(os.path.join(ready, fname), out_array) - fname = f'{i}_semantic.npy' - shutil.copy(semafilename, os.path.join(ready, fname)) - -def train(path, save_every, max_epochs): - auto_train(path, save_epochs=save_every) - diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/inpainting_src/ldm_inpainting/ldm/models/diffusion/plms.py b/spaces/OpenGVLab/InternGPT/iGPT/models/inpainting_src/ldm_inpainting/ldm/models/diffusion/plms.py deleted file mode 100644 index 78eeb1003aa45d27bdbfc6b4a1d7ccbff57cd2e3..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/inpainting_src/ldm_inpainting/ldm/models/diffusion/plms.py +++ /dev/null @@ -1,236 +0,0 @@ -"""SAMPLING ONLY.""" - -import torch -import numpy as np -from tqdm import tqdm -from functools import partial - -from ldm.modules.diffusionmodules.util import make_ddim_sampling_parameters, make_ddim_timesteps, noise_like - - -class PLMSSampler(object): - def __init__(self, model, schedule="linear", **kwargs): - super().__init__() - self.model = model - self.ddpm_num_timesteps = model.num_timesteps - self.schedule = schedule - - def register_buffer(self, name, attr): - if type(attr) == torch.Tensor: - if attr.device != torch.device("cuda"): - attr = attr.to(torch.device("cuda")) - setattr(self, name, attr) - - def make_schedule(self, ddim_num_steps, ddim_discretize="uniform", ddim_eta=0., verbose=True): - if ddim_eta != 0: - raise ValueError('ddim_eta must be 0 for PLMS') - self.ddim_timesteps = make_ddim_timesteps(ddim_discr_method=ddim_discretize, num_ddim_timesteps=ddim_num_steps, - num_ddpm_timesteps=self.ddpm_num_timesteps,verbose=verbose) - alphas_cumprod = self.model.alphas_cumprod - assert alphas_cumprod.shape[0] == self.ddpm_num_timesteps, 'alphas have to be defined for each timestep' - to_torch = lambda x: x.clone().detach().to(torch.float32).to(self.model.device) - - self.register_buffer('betas', to_torch(self.model.betas)) - self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod)) - self.register_buffer('alphas_cumprod_prev', to_torch(self.model.alphas_cumprod_prev)) - - # calculations for diffusion q(x_t | x_{t-1}) and others - self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod.cpu()))) - self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod.cpu()))) - self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod.cpu()))) - self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu()))) - self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu() - 1))) - - # ddim sampling parameters - ddim_sigmas, ddim_alphas, ddim_alphas_prev = make_ddim_sampling_parameters(alphacums=alphas_cumprod.cpu(), - ddim_timesteps=self.ddim_timesteps, - eta=ddim_eta,verbose=verbose) - self.register_buffer('ddim_sigmas', ddim_sigmas) - self.register_buffer('ddim_alphas', ddim_alphas) - self.register_buffer('ddim_alphas_prev', ddim_alphas_prev) - self.register_buffer('ddim_sqrt_one_minus_alphas', np.sqrt(1. 
- ddim_alphas)) - sigmas_for_original_sampling_steps = ddim_eta * torch.sqrt( - (1 - self.alphas_cumprod_prev) / (1 - self.alphas_cumprod) * ( - 1 - self.alphas_cumprod / self.alphas_cumprod_prev)) - self.register_buffer('ddim_sigmas_for_original_num_steps', sigmas_for_original_sampling_steps) - - @torch.no_grad() - def sample(self, - S, - batch_size, - shape, - conditioning=None, - callback=None, - normals_sequence=None, - img_callback=None, - quantize_x0=False, - eta=0., - mask=None, - x0=None, - temperature=1., - noise_dropout=0., - score_corrector=None, - corrector_kwargs=None, - verbose=True, - x_T=None, - log_every_t=100, - unconditional_guidance_scale=1., - unconditional_conditioning=None, - # this has to come in the same format as the conditioning, # e.g. as encoded tokens, ... - **kwargs - ): - if conditioning is not None: - if isinstance(conditioning, dict): - cbs = conditioning[list(conditioning.keys())[0]].shape[0] - if cbs != batch_size: - print(f"Warning: Got {cbs} conditionings but batch-size is {batch_size}") - else: - if conditioning.shape[0] != batch_size: - print(f"Warning: Got {conditioning.shape[0]} conditionings but batch-size is {batch_size}") - - self.make_schedule(ddim_num_steps=S, ddim_eta=eta, verbose=verbose) - # sampling - C, H, W = shape - size = (batch_size, C, H, W) - print(f'Data shape for PLMS sampling is {size}') - - samples, intermediates = self.plms_sampling(conditioning, size, - callback=callback, - img_callback=img_callback, - quantize_denoised=quantize_x0, - mask=mask, x0=x0, - ddim_use_original_steps=False, - noise_dropout=noise_dropout, - temperature=temperature, - score_corrector=score_corrector, - corrector_kwargs=corrector_kwargs, - x_T=x_T, - log_every_t=log_every_t, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning, - ) - return samples, intermediates - - @torch.no_grad() - def plms_sampling(self, cond, shape, - x_T=None, ddim_use_original_steps=False, - callback=None, timesteps=None, quantize_denoised=False, - mask=None, x0=None, img_callback=None, log_every_t=100, - temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None, - unconditional_guidance_scale=1., unconditional_conditioning=None,): - device = self.model.betas.device - b = shape[0] - if x_T is None: - img = torch.randn(shape, device=device) - else: - img = x_T - - if timesteps is None: - timesteps = self.ddpm_num_timesteps if ddim_use_original_steps else self.ddim_timesteps - elif timesteps is not None and not ddim_use_original_steps: - subset_end = int(min(timesteps / self.ddim_timesteps.shape[0], 1) * self.ddim_timesteps.shape[0]) - 1 - timesteps = self.ddim_timesteps[:subset_end] - - intermediates = {'x_inter': [img], 'pred_x0': [img]} - time_range = list(reversed(range(0,timesteps))) if ddim_use_original_steps else np.flip(timesteps) - total_steps = timesteps if ddim_use_original_steps else timesteps.shape[0] - print(f"Running PLMS Sampling with {total_steps} timesteps") - - iterator = tqdm(time_range, desc='PLMS Sampler', total=total_steps) - old_eps = [] - - for i, step in enumerate(iterator): - index = total_steps - i - 1 - ts = torch.full((b,), step, device=device, dtype=torch.long) - ts_next = torch.full((b,), time_range[min(i + 1, len(time_range) - 1)], device=device, dtype=torch.long) - - if mask is not None: - assert x0 is not None - img_orig = self.model.q_sample(x0, ts) # TODO: deterministic forward pass? - img = img_orig * mask + (1. 
- mask) * img - - outs = self.p_sample_plms(img, cond, ts, index=index, use_original_steps=ddim_use_original_steps, - quantize_denoised=quantize_denoised, temperature=temperature, - noise_dropout=noise_dropout, score_corrector=score_corrector, - corrector_kwargs=corrector_kwargs, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning, - old_eps=old_eps, t_next=ts_next) - img, pred_x0, e_t = outs - old_eps.append(e_t) - if len(old_eps) >= 4: - old_eps.pop(0) - if callback: callback(i) - if img_callback: img_callback(pred_x0, i) - - if index % log_every_t == 0 or index == total_steps - 1: - intermediates['x_inter'].append(img) - intermediates['pred_x0'].append(pred_x0) - - return img, intermediates - - @torch.no_grad() - def p_sample_plms(self, x, c, t, index, repeat_noise=False, use_original_steps=False, quantize_denoised=False, - temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None, - unconditional_guidance_scale=1., unconditional_conditioning=None, old_eps=None, t_next=None): - b, *_, device = *x.shape, x.device - - def get_model_output(x, t): - if unconditional_conditioning is None or unconditional_guidance_scale == 1.: - e_t = self.model.apply_model(x, t, c) - else: - x_in = torch.cat([x] * 2) - t_in = torch.cat([t] * 2) - c_in = torch.cat([unconditional_conditioning, c]) - e_t_uncond, e_t = self.model.apply_model(x_in, t_in, c_in).chunk(2) - e_t = e_t_uncond + unconditional_guidance_scale * (e_t - e_t_uncond) - - if score_corrector is not None: - assert self.model.parameterization == "eps" - e_t = score_corrector.modify_score(self.model, e_t, x, t, c, **corrector_kwargs) - - return e_t - - alphas = self.model.alphas_cumprod if use_original_steps else self.ddim_alphas - alphas_prev = self.model.alphas_cumprod_prev if use_original_steps else self.ddim_alphas_prev - sqrt_one_minus_alphas = self.model.sqrt_one_minus_alphas_cumprod if use_original_steps else self.ddim_sqrt_one_minus_alphas - sigmas = self.model.ddim_sigmas_for_original_num_steps if use_original_steps else self.ddim_sigmas - - def get_x_prev_and_pred_x0(e_t, index): - # select parameters corresponding to the currently considered timestep - a_t = torch.full((b, 1, 1, 1), alphas[index], device=device) - a_prev = torch.full((b, 1, 1, 1), alphas_prev[index], device=device) - sigma_t = torch.full((b, 1, 1, 1), sigmas[index], device=device) - sqrt_one_minus_at = torch.full((b, 1, 1, 1), sqrt_one_minus_alphas[index],device=device) - - # current prediction for x_0 - pred_x0 = (x - sqrt_one_minus_at * e_t) / a_t.sqrt() - if quantize_denoised: - pred_x0, _, *_ = self.model.first_stage_model.quantize(pred_x0) - # direction pointing to x_t - dir_xt = (1. 
- a_prev - sigma_t**2).sqrt() * e_t - noise = sigma_t * noise_like(x.shape, device, repeat_noise) * temperature - if noise_dropout > 0.: - noise = torch.nn.functional.dropout(noise, p=noise_dropout) - x_prev = a_prev.sqrt() * pred_x0 + dir_xt + noise - return x_prev, pred_x0 - - e_t = get_model_output(x, t) - if len(old_eps) == 0: - # Pseudo Improved Euler (2nd order) - x_prev, pred_x0 = get_x_prev_and_pred_x0(e_t, index) - e_t_next = get_model_output(x_prev, t_next) - e_t_prime = (e_t + e_t_next) / 2 - elif len(old_eps) == 1: - # 2nd order Pseudo Linear Multistep (Adams-Bashforth) - e_t_prime = (3 * e_t - old_eps[-1]) / 2 - elif len(old_eps) == 2: - # 3nd order Pseudo Linear Multistep (Adams-Bashforth) - e_t_prime = (23 * e_t - 16 * old_eps[-1] + 5 * old_eps[-2]) / 12 - elif len(old_eps) >= 3: - # 4nd order Pseudo Linear Multistep (Adams-Bashforth) - e_t_prime = (55 * e_t - 59 * old_eps[-1] + 37 * old_eps[-2] - 9 * old_eps[-3]) / 24 - - x_prev, pred_x0 = get_x_prev_and_pred_x0(e_t_prime, index) - - return x_prev, pred_x0, e_t diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/configs/_base_/models/encnet_r50-d8.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/configs/_base_/models/encnet_r50-d8.py deleted file mode 100644 index be777123a886503172a95fe0719e956a147bbd68..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/configs/_base_/models/encnet_r50-d8.py +++ /dev/null @@ -1,48 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict( - type='EncoderDecoder', - pretrained='open-mmlab://resnet50_v1c', - backbone=dict( - type='ResNetV1c', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - dilations=(1, 1, 2, 4), - strides=(1, 2, 1, 1), - norm_cfg=norm_cfg, - norm_eval=False, - style='pytorch', - contract_dilation=True), - decode_head=dict( - type='EncHead', - in_channels=[512, 1024, 2048], - in_index=(1, 2, 3), - channels=512, - num_codes=32, - use_se_loss=True, - add_lateral=False, - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), - loss_se_decode=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=0.2)), - auxiliary_head=dict( - type='FCNHead', - in_channels=1024, - in_index=2, - channels=256, - num_convs=1, - concat_input=False, - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='whole')) diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/configs/_base_/schedules/schedule_160k.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/configs/_base_/schedules/schedule_160k.py deleted file mode 100644 index 52603890b10f25faf8eec9f9e5a4468fae09b811..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/configs/_base_/schedules/schedule_160k.py +++ /dev/null @@ -1,9 +0,0 @@ -# optimizer -optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0005) -optimizer_config = dict() -# learning policy -lr_config = dict(policy='poly', power=0.9, min_lr=1e-4, by_epoch=False) -# runtime settings -runner = dict(type='IterBasedRunner', max_iters=160000) -checkpoint_config = dict(by_epoch=False, interval=16000) -evaluation = dict(interval=16000, metric='mIoU') diff --git a/spaces/PaddlePaddle/ERNIE-ViLG/README.md 
b/spaces/PaddlePaddle/ERNIE-ViLG/README.md deleted file mode 100644 index 56f0d6cbc910fb46b95aaf8a6e3e32619e60b26d..0000000000000000000000000000000000000000 --- a/spaces/PaddlePaddle/ERNIE-ViLG/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: ERNIE-ViLG -emoji: 🐼 -colorFrom: yellow -colorTo: pink -sdk: gradio -sdk_version: 3.1.7 -app_file: app.py -pinned: false -license: apache-2.0 -python_version: 3.9.12 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Pengyey/bingo-chuchu/src/components/ui/tooltip.tsx b/spaces/Pengyey/bingo-chuchu/src/components/ui/tooltip.tsx deleted file mode 100644 index af1d48beb90dd5ae311796539843700871052cae..0000000000000000000000000000000000000000 --- a/spaces/Pengyey/bingo-chuchu/src/components/ui/tooltip.tsx +++ /dev/null @@ -1,30 +0,0 @@ -'use client' - -import * as React from 'react' -import * as TooltipPrimitive from '@radix-ui/react-tooltip' - -import { cn } from '@/lib/utils' - -const TooltipProvider = TooltipPrimitive.Provider - -const Tooltip = TooltipPrimitive.Root - -const TooltipTrigger = TooltipPrimitive.Trigger - -const TooltipContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, sideOffset = 4, ...props }, ref) => ( - -)) -TooltipContent.displayName = TooltipPrimitive.Content.displayName - -export { Tooltip, TooltipTrigger, TooltipContent, TooltipProvider } diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/parallel/data_container.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/parallel/data_container.py deleted file mode 100644 index cedb0d32a51a1f575a622b38de2cee3ab4757821..0000000000000000000000000000000000000000 --- a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/parallel/data_container.py +++ /dev/null @@ -1,89 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import functools - -import torch - - -def assert_tensor_type(func): - - @functools.wraps(func) - def wrapper(*args, **kwargs): - if not isinstance(args[0].data, torch.Tensor): - raise AttributeError( - f'{args[0].__class__.__name__} has no attribute ' - f'{func.__name__} for type {args[0].datatype}') - return func(*args, **kwargs) - - return wrapper - - -class DataContainer: - """A container for any type of objects. - - Typically tensors will be stacked in the collate function and sliced along - some dimension in the scatter function. This behavior has some limitations. - 1. All tensors have to be the same size. - 2. Types are limited (numpy array or Tensor). - - We design `DataContainer` and `MMDataParallel` to overcome these - limitations. The behavior can be either of the following. 
- - - copy to GPU, pad all tensors to the same size and stack them - - copy to GPU without stacking - - leave the objects as is and pass it to the model - - pad_dims specifies the number of last few dimensions to do padding - """ - - def __init__(self, - data, - stack=False, - padding_value=0, - cpu_only=False, - pad_dims=2): - self._data = data - self._cpu_only = cpu_only - self._stack = stack - self._padding_value = padding_value - assert pad_dims in [None, 1, 2, 3] - self._pad_dims = pad_dims - - def __repr__(self): - return f'{self.__class__.__name__}({repr(self.data)})' - - def __len__(self): - return len(self._data) - - @property - def data(self): - return self._data - - @property - def datatype(self): - if isinstance(self.data, torch.Tensor): - return self.data.type() - else: - return type(self.data) - - @property - def cpu_only(self): - return self._cpu_only - - @property - def stack(self): - return self._stack - - @property - def padding_value(self): - return self._padding_value - - @property - def pad_dims(self): - return self._pad_dims - - @assert_tensor_type - def size(self, *args, **kwargs): - return self.data.size(*args, **kwargs) - - @assert_tensor_type - def dim(self): - return self.data.dim() diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/runner/hooks/optimizer.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/runner/hooks/optimizer.py deleted file mode 100644 index 4ef3e9ff8f9c6926e32bdf027612267b64ed80df..0000000000000000000000000000000000000000 --- a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/runner/hooks/optimizer.py +++ /dev/null @@ -1,508 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -from collections import defaultdict -from itertools import chain - -from torch.nn.utils import clip_grad - -from annotator.uniformer.mmcv.utils import TORCH_VERSION, _BatchNorm, digit_version -from ..dist_utils import allreduce_grads -from ..fp16_utils import LossScaler, wrap_fp16_model -from .hook import HOOKS, Hook - -try: - # If PyTorch version >= 1.6.0, torch.cuda.amp.GradScaler would be imported - # and used; otherwise, auto fp16 will adopt mmcv's implementation. - from torch.cuda.amp import GradScaler -except ImportError: - pass - - -@HOOKS.register_module() -class OptimizerHook(Hook): - - def __init__(self, grad_clip=None): - self.grad_clip = grad_clip - - def clip_grads(self, params): - params = list( - filter(lambda p: p.requires_grad and p.grad is not None, params)) - if len(params) > 0: - return clip_grad.clip_grad_norm_(params, **self.grad_clip) - - def after_train_iter(self, runner): - runner.optimizer.zero_grad() - runner.outputs['loss'].backward() - if self.grad_clip is not None: - grad_norm = self.clip_grads(runner.model.parameters()) - if grad_norm is not None: - # Add grad norm to the logger - runner.log_buffer.update({'grad_norm': float(grad_norm)}, - runner.outputs['num_samples']) - runner.optimizer.step() - - -@HOOKS.register_module() -class GradientCumulativeOptimizerHook(OptimizerHook): - """Optimizer Hook implements multi-iters gradient cumulating. - - Args: - cumulative_iters (int, optional): Num of gradient cumulative iters. - The optimizer will step every `cumulative_iters` iters. - Defaults to 1. - - Examples: - >>> # Use cumulative_iters to simulate a large batch size - >>> # It is helpful when the hardware cannot handle a large batch size. 
- >>> loader = DataLoader(data, batch_size=64) - >>> optim_hook = GradientCumulativeOptimizerHook(cumulative_iters=4) - >>> # almost equals to - >>> loader = DataLoader(data, batch_size=256) - >>> optim_hook = OptimizerHook() - """ - - def __init__(self, cumulative_iters=1, **kwargs): - super(GradientCumulativeOptimizerHook, self).__init__(**kwargs) - - assert isinstance(cumulative_iters, int) and cumulative_iters > 0, \ - f'cumulative_iters only accepts positive int, but got ' \ - f'{type(cumulative_iters)} instead.' - - self.cumulative_iters = cumulative_iters - self.divisible_iters = 0 - self.remainder_iters = 0 - self.initialized = False - - def has_batch_norm(self, module): - if isinstance(module, _BatchNorm): - return True - for m in module.children(): - if self.has_batch_norm(m): - return True - return False - - def _init(self, runner): - if runner.iter % self.cumulative_iters != 0: - runner.logger.warning( - 'Resume iter number is not divisible by cumulative_iters in ' - 'GradientCumulativeOptimizerHook, which means the gradient of ' - 'some iters is lost and the result may be influenced slightly.' - ) - - if self.has_batch_norm(runner.model) and self.cumulative_iters > 1: - runner.logger.warning( - 'GradientCumulativeOptimizerHook may slightly decrease ' - 'performance if the model has BatchNorm layers.') - - residual_iters = runner.max_iters - runner.iter - - self.divisible_iters = ( - residual_iters // self.cumulative_iters * self.cumulative_iters) - self.remainder_iters = residual_iters - self.divisible_iters - - self.initialized = True - - def after_train_iter(self, runner): - if not self.initialized: - self._init(runner) - - if runner.iter < self.divisible_iters: - loss_factor = self.cumulative_iters - else: - loss_factor = self.remainder_iters - loss = runner.outputs['loss'] - loss = loss / loss_factor - loss.backward() - - if (self.every_n_iters(runner, self.cumulative_iters) - or self.is_last_iter(runner)): - - if self.grad_clip is not None: - grad_norm = self.clip_grads(runner.model.parameters()) - if grad_norm is not None: - # Add grad norm to the logger - runner.log_buffer.update({'grad_norm': float(grad_norm)}, - runner.outputs['num_samples']) - runner.optimizer.step() - runner.optimizer.zero_grad() - - -if (TORCH_VERSION != 'parrots' - and digit_version(TORCH_VERSION) >= digit_version('1.6.0')): - - @HOOKS.register_module() - class Fp16OptimizerHook(OptimizerHook): - """FP16 optimizer hook (using PyTorch's implementation). - - If you are using PyTorch >= 1.6, torch.cuda.amp is used as the backend, - to take care of the optimization procedure. - - Args: - loss_scale (float | str | dict): Scale factor configuration. - If loss_scale is a float, static loss scaling will be used with - the specified scale. If loss_scale is a string, it must be - 'dynamic', then dynamic loss scaling will be used. - It can also be a dict containing arguments of GradScalar. - Defaults to 512. For Pytorch >= 1.6, mmcv uses official - implementation of GradScaler. If you use a dict version of - loss_scale to create GradScaler, please refer to: - https://pytorch.org/docs/stable/amp.html#torch.cuda.amp.GradScaler - for the parameters. - - Examples: - >>> loss_scale = dict( - ... init_scale=65536.0, - ... growth_factor=2.0, - ... backoff_factor=0.5, - ... growth_interval=2000 - ... 
) - >>> optimizer_hook = Fp16OptimizerHook(loss_scale=loss_scale) - """ - - def __init__(self, - grad_clip=None, - coalesce=True, - bucket_size_mb=-1, - loss_scale=512., - distributed=True): - self.grad_clip = grad_clip - self.coalesce = coalesce - self.bucket_size_mb = bucket_size_mb - self.distributed = distributed - self._scale_update_param = None - if loss_scale == 'dynamic': - self.loss_scaler = GradScaler() - elif isinstance(loss_scale, float): - self._scale_update_param = loss_scale - self.loss_scaler = GradScaler(init_scale=loss_scale) - elif isinstance(loss_scale, dict): - self.loss_scaler = GradScaler(**loss_scale) - else: - raise ValueError('loss_scale must be of type float, dict, or ' - f'"dynamic", got {loss_scale}') - - def before_run(self, runner): - """Preparing steps before Mixed Precision Training.""" - # wrap model mode to fp16 - wrap_fp16_model(runner.model) - # resume from state dict - if 'fp16' in runner.meta and 'loss_scaler' in runner.meta['fp16']: - scaler_state_dict = runner.meta['fp16']['loss_scaler'] - self.loss_scaler.load_state_dict(scaler_state_dict) - - def copy_grads_to_fp32(self, fp16_net, fp32_weights): - """Copy gradients from fp16 model to fp32 weight copy.""" - for fp32_param, fp16_param in zip(fp32_weights, - fp16_net.parameters()): - if fp16_param.grad is not None: - if fp32_param.grad is None: - fp32_param.grad = fp32_param.data.new( - fp32_param.size()) - fp32_param.grad.copy_(fp16_param.grad) - - def copy_params_to_fp16(self, fp16_net, fp32_weights): - """Copy updated params from fp32 weight copy to fp16 model.""" - for fp16_param, fp32_param in zip(fp16_net.parameters(), - fp32_weights): - fp16_param.data.copy_(fp32_param.data) - - def after_train_iter(self, runner): - """Backward optimization steps for Mixed Precision Training. For - dynamic loss scaling, please refer to - https://pytorch.org/docs/stable/amp.html#torch.cuda.amp.GradScaler. - - 1. Scale the loss by a scale factor. - 2. Backward the loss to obtain the gradients. - 3. Unscale the optimizer’s gradient tensors. - 4. Call optimizer.step() and update scale factor. - 5. Save loss_scaler state_dict for resume purpose. - """ - # clear grads of last iteration - runner.model.zero_grad() - runner.optimizer.zero_grad() - - self.loss_scaler.scale(runner.outputs['loss']).backward() - self.loss_scaler.unscale_(runner.optimizer) - # grad clip - if self.grad_clip is not None: - grad_norm = self.clip_grads(runner.model.parameters()) - if grad_norm is not None: - # Add grad norm to the logger - runner.log_buffer.update({'grad_norm': float(grad_norm)}, - runner.outputs['num_samples']) - # backward and update scaler - self.loss_scaler.step(runner.optimizer) - self.loss_scaler.update(self._scale_update_param) - - # save state_dict of loss_scaler - runner.meta.setdefault( - 'fp16', {})['loss_scaler'] = self.loss_scaler.state_dict() - - @HOOKS.register_module() - class GradientCumulativeFp16OptimizerHook(GradientCumulativeOptimizerHook, - Fp16OptimizerHook): - """Fp16 optimizer Hook (using PyTorch's implementation) implements - multi-iters gradient cumulating. - - If you are using PyTorch >= 1.6, torch.cuda.amp is used as the backend, - to take care of the optimization procedure. 
- """ - - def __init__(self, *args, **kwargs): - super(GradientCumulativeFp16OptimizerHook, - self).__init__(*args, **kwargs) - - def after_train_iter(self, runner): - if not self.initialized: - self._init(runner) - - if runner.iter < self.divisible_iters: - loss_factor = self.cumulative_iters - else: - loss_factor = self.remainder_iters - loss = runner.outputs['loss'] - loss = loss / loss_factor - - self.loss_scaler.scale(loss).backward() - - if (self.every_n_iters(runner, self.cumulative_iters) - or self.is_last_iter(runner)): - - # copy fp16 grads in the model to fp32 params in the optimizer - self.loss_scaler.unscale_(runner.optimizer) - - if self.grad_clip is not None: - grad_norm = self.clip_grads(runner.model.parameters()) - if grad_norm is not None: - # Add grad norm to the logger - runner.log_buffer.update( - {'grad_norm': float(grad_norm)}, - runner.outputs['num_samples']) - - # backward and update scaler - self.loss_scaler.step(runner.optimizer) - self.loss_scaler.update(self._scale_update_param) - - # save state_dict of loss_scaler - runner.meta.setdefault( - 'fp16', {})['loss_scaler'] = self.loss_scaler.state_dict() - - # clear grads - runner.model.zero_grad() - runner.optimizer.zero_grad() - -else: - - @HOOKS.register_module() - class Fp16OptimizerHook(OptimizerHook): - """FP16 optimizer hook (mmcv's implementation). - - The steps of fp16 optimizer is as follows. - 1. Scale the loss value. - 2. BP in the fp16 model. - 2. Copy gradients from fp16 model to fp32 weights. - 3. Update fp32 weights. - 4. Copy updated parameters from fp32 weights to fp16 model. - - Refer to https://arxiv.org/abs/1710.03740 for more details. - - Args: - loss_scale (float | str | dict): Scale factor configuration. - If loss_scale is a float, static loss scaling will be used with - the specified scale. If loss_scale is a string, it must be - 'dynamic', then dynamic loss scaling will be used. - It can also be a dict containing arguments of LossScaler. - Defaults to 512. - """ - - def __init__(self, - grad_clip=None, - coalesce=True, - bucket_size_mb=-1, - loss_scale=512., - distributed=True): - self.grad_clip = grad_clip - self.coalesce = coalesce - self.bucket_size_mb = bucket_size_mb - self.distributed = distributed - if loss_scale == 'dynamic': - self.loss_scaler = LossScaler(mode='dynamic') - elif isinstance(loss_scale, float): - self.loss_scaler = LossScaler( - init_scale=loss_scale, mode='static') - elif isinstance(loss_scale, dict): - self.loss_scaler = LossScaler(**loss_scale) - else: - raise ValueError('loss_scale must be of type float, dict, or ' - f'"dynamic", got {loss_scale}') - - def before_run(self, runner): - """Preparing steps before Mixed Precision Training. - - 1. Make a master copy of fp32 weights for optimization. - 2. Convert the main model from fp32 to fp16. 
- """ - # keep a copy of fp32 weights - old_groups = runner.optimizer.param_groups - runner.optimizer.param_groups = copy.deepcopy( - runner.optimizer.param_groups) - state = defaultdict(dict) - p_map = { - old_p: p - for old_p, p in zip( - chain(*(g['params'] for g in old_groups)), - chain(*(g['params'] - for g in runner.optimizer.param_groups))) - } - for k, v in runner.optimizer.state.items(): - state[p_map[k]] = v - runner.optimizer.state = state - # convert model to fp16 - wrap_fp16_model(runner.model) - # resume from state dict - if 'fp16' in runner.meta and 'loss_scaler' in runner.meta['fp16']: - scaler_state_dict = runner.meta['fp16']['loss_scaler'] - self.loss_scaler.load_state_dict(scaler_state_dict) - - def copy_grads_to_fp32(self, fp16_net, fp32_weights): - """Copy gradients from fp16 model to fp32 weight copy.""" - for fp32_param, fp16_param in zip(fp32_weights, - fp16_net.parameters()): - if fp16_param.grad is not None: - if fp32_param.grad is None: - fp32_param.grad = fp32_param.data.new( - fp32_param.size()) - fp32_param.grad.copy_(fp16_param.grad) - - def copy_params_to_fp16(self, fp16_net, fp32_weights): - """Copy updated params from fp32 weight copy to fp16 model.""" - for fp16_param, fp32_param in zip(fp16_net.parameters(), - fp32_weights): - fp16_param.data.copy_(fp32_param.data) - - def after_train_iter(self, runner): - """Backward optimization steps for Mixed Precision Training. For - dynamic loss scaling, please refer `loss_scalar.py` - - 1. Scale the loss by a scale factor. - 2. Backward the loss to obtain the gradients (fp16). - 3. Copy gradients from the model to the fp32 weight copy. - 4. Scale the gradients back and update the fp32 weight copy. - 5. Copy back the params from fp32 weight copy to the fp16 model. - 6. Save loss_scaler state_dict for resume purpose. 
- """ - # clear grads of last iteration - runner.model.zero_grad() - runner.optimizer.zero_grad() - # scale the loss value - scaled_loss = runner.outputs['loss'] * self.loss_scaler.loss_scale - scaled_loss.backward() - # copy fp16 grads in the model to fp32 params in the optimizer - - fp32_weights = [] - for param_group in runner.optimizer.param_groups: - fp32_weights += param_group['params'] - self.copy_grads_to_fp32(runner.model, fp32_weights) - # allreduce grads - if self.distributed: - allreduce_grads(fp32_weights, self.coalesce, - self.bucket_size_mb) - - has_overflow = self.loss_scaler.has_overflow(fp32_weights) - # if has overflow, skip this iteration - if not has_overflow: - # scale the gradients back - for param in fp32_weights: - if param.grad is not None: - param.grad.div_(self.loss_scaler.loss_scale) - if self.grad_clip is not None: - grad_norm = self.clip_grads(fp32_weights) - if grad_norm is not None: - # Add grad norm to the logger - runner.log_buffer.update( - {'grad_norm': float(grad_norm)}, - runner.outputs['num_samples']) - # update fp32 params - runner.optimizer.step() - # copy fp32 params to the fp16 model - self.copy_params_to_fp16(runner.model, fp32_weights) - self.loss_scaler.update_scale(has_overflow) - if has_overflow: - runner.logger.warning('Check overflow, downscale loss scale ' - f'to {self.loss_scaler.cur_scale}') - - # save state_dict of loss_scaler - runner.meta.setdefault( - 'fp16', {})['loss_scaler'] = self.loss_scaler.state_dict() - - @HOOKS.register_module() - class GradientCumulativeFp16OptimizerHook(GradientCumulativeOptimizerHook, - Fp16OptimizerHook): - """Fp16 optimizer Hook (using mmcv implementation) implements multi- - iters gradient cumulating.""" - - def __init__(self, *args, **kwargs): - super(GradientCumulativeFp16OptimizerHook, - self).__init__(*args, **kwargs) - - def after_train_iter(self, runner): - if not self.initialized: - self._init(runner) - - if runner.iter < self.divisible_iters: - loss_factor = self.cumulative_iters - else: - loss_factor = self.remainder_iters - - loss = runner.outputs['loss'] - loss = loss / loss_factor - - # scale the loss value - scaled_loss = loss * self.loss_scaler.loss_scale - scaled_loss.backward() - - if (self.every_n_iters(runner, self.cumulative_iters) - or self.is_last_iter(runner)): - - # copy fp16 grads in the model to fp32 params in the optimizer - fp32_weights = [] - for param_group in runner.optimizer.param_groups: - fp32_weights += param_group['params'] - self.copy_grads_to_fp32(runner.model, fp32_weights) - # allreduce grads - if self.distributed: - allreduce_grads(fp32_weights, self.coalesce, - self.bucket_size_mb) - - has_overflow = self.loss_scaler.has_overflow(fp32_weights) - # if has overflow, skip this iteration - if not has_overflow: - # scale the gradients back - for param in fp32_weights: - if param.grad is not None: - param.grad.div_(self.loss_scaler.loss_scale) - if self.grad_clip is not None: - grad_norm = self.clip_grads(fp32_weights) - if grad_norm is not None: - # Add grad norm to the logger - runner.log_buffer.update( - {'grad_norm': float(grad_norm)}, - runner.outputs['num_samples']) - # update fp32 params - runner.optimizer.step() - # copy fp32 params to the fp16 model - self.copy_params_to_fp16(runner.model, fp32_weights) - else: - runner.logger.warning( - 'Check overflow, downscale loss scale ' - f'to {self.loss_scaler.cur_scale}') - - self.loss_scaler.update_scale(has_overflow) - - # save state_dict of loss_scaler - runner.meta.setdefault( - 'fp16', {})['loss_scaler'] = 
self.loss_scaler.state_dict() - - # clear grads - runner.model.zero_grad() - runner.optimizer.zero_grad() diff --git a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/csrc/cuda/vision.h b/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/csrc/cuda/vision.h deleted file mode 100644 index 31318c2cb85622682ea41cbfa9cf0654b0d78996..0000000000000000000000000000000000000000 --- a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/csrc/cuda/vision.h +++ /dev/null @@ -1,116 +0,0 @@ -// Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. -#pragma once -#include - - -at::Tensor SigmoidFocalLoss_forward_cuda( - const at::Tensor& logits, - const at::Tensor& targets, - const int num_classes, - const float gamma, - const float alpha); - -at::Tensor SigmoidFocalLoss_backward_cuda( - const at::Tensor& logits, - const at::Tensor& targets, - const at::Tensor& d_losses, - const int num_classes, - const float gamma, - const float alpha); - -at::Tensor ROIAlign_forward_cuda(const at::Tensor& input, - const at::Tensor& rois, - const float spatial_scale, - const int pooled_height, - const int pooled_width, - const int sampling_ratio); - -at::Tensor ROIAlign_backward_cuda(const at::Tensor& grad, - const at::Tensor& rois, - const float spatial_scale, - const int pooled_height, - const int pooled_width, - const int batch_size, - const int channels, - const int height, - const int width, - const int sampling_ratio); - - -std::tuple ROIPool_forward_cuda(const at::Tensor& input, - const at::Tensor& rois, - const float spatial_scale, - const int pooled_height, - const int pooled_width); - -at::Tensor ROIPool_backward_cuda(const at::Tensor& grad, - const at::Tensor& input, - const at::Tensor& rois, - const at::Tensor& argmax, - const float spatial_scale, - const int pooled_height, - const int pooled_width, - const int batch_size, - const int channels, - const int height, - const int width); - -at::Tensor nms_cuda(const at::Tensor boxes, float nms_overlap_thresh); -at::Tensor ml_nms_cuda(const at::Tensor boxes, float nms_overlap_thresh); - -int deform_conv_forward_cuda(at::Tensor input, at::Tensor weight, - at::Tensor offset, at::Tensor output, - at::Tensor columns, at::Tensor ones, int kW, - int kH, int dW, int dH, int padW, int padH, - int dilationW, int dilationH, int group, - int deformable_group, int im2col_step); - -int deform_conv_backward_input_cuda(at::Tensor input, at::Tensor offset, - at::Tensor gradOutput, at::Tensor gradInput, - at::Tensor gradOffset, at::Tensor weight, - at::Tensor columns, int kW, int kH, int dW, - int dH, int padW, int padH, int dilationW, - int dilationH, int group, - int deformable_group, int im2col_step); - -int deform_conv_backward_parameters_cuda( - at::Tensor input, at::Tensor offset, at::Tensor gradOutput, - at::Tensor gradWeight, // at::Tensor gradBias, - at::Tensor columns, at::Tensor ones, int kW, int kH, int dW, int dH, - int padW, int padH, int dilationW, int dilationH, int group, - int deformable_group, float scale, int im2col_step); - -void modulated_deform_conv_cuda_forward( - at::Tensor input, at::Tensor weight, at::Tensor bias, at::Tensor ones, - at::Tensor offset, at::Tensor mask, at::Tensor output, at::Tensor columns, - int kernel_h, int kernel_w, const int stride_h, const int stride_w, - const int pad_h, const int pad_w, const int dilation_h, - const int dilation_w, const int group, const int deformable_group, - const bool with_bias); - -void modulated_deform_conv_cuda_backward( - at::Tensor input, 
at::Tensor weight, at::Tensor bias, at::Tensor ones, - at::Tensor offset, at::Tensor mask, at::Tensor columns, - at::Tensor grad_input, at::Tensor grad_weight, at::Tensor grad_bias, - at::Tensor grad_offset, at::Tensor grad_mask, at::Tensor grad_output, - int kernel_h, int kernel_w, int stride_h, int stride_w, int pad_h, - int pad_w, int dilation_h, int dilation_w, int group, int deformable_group, - const bool with_bias); - -void deform_psroi_pooling_cuda_forward( - at::Tensor input, at::Tensor bbox, at::Tensor trans, at::Tensor out, - at::Tensor top_count, const int no_trans, const float spatial_scale, - const int output_dim, const int group_size, const int pooled_size, - const int part_size, const int sample_per_part, const float trans_std); - -void deform_psroi_pooling_cuda_backward( - at::Tensor out_grad, at::Tensor input, at::Tensor bbox, at::Tensor trans, - at::Tensor top_count, at::Tensor input_grad, at::Tensor trans_grad, - const int no_trans, const float spatial_scale, const int output_dim, - const int group_size, const int pooled_size, const int part_size, - const int sample_per_part, const float trans_std); - - -at::Tensor compute_flow_cuda(const at::Tensor& boxes, - const int height, - const int width); diff --git a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/modeling/roi_heads/keypoint_head/roi_keypoint_predictors.py b/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/modeling/roi_heads/keypoint_head/roi_keypoint_predictors.py deleted file mode 100644 index 270c72d0068a3b0cf7a777abb8be3c4fe0247f19..0000000000000000000000000000000000000000 --- a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/modeling/roi_heads/keypoint_head/roi_keypoint_predictors.py +++ /dev/null @@ -1,39 +0,0 @@ -from torch import nn -from torch.nn import functional as F - -from maskrcnn_benchmark import layers - - -class KeypointRCNNPredictor(nn.Module): - def __init__(self, cfg): - super(KeypointRCNNPredictor, self).__init__() - input_features = cfg.MODEL.ROI_KEYPOINT_HEAD.CONV_LAYERS[-1] - num_keypoints = cfg.MODEL.ROI_KEYPOINT_HEAD.NUM_CLASSES - deconv_kernel = 4 - self.kps_score_lowres = layers.ConvTranspose2d( - input_features, - num_keypoints, - deconv_kernel, - stride=2, - padding=deconv_kernel // 2 - 1, - ) - nn.init.kaiming_normal_( - self.kps_score_lowres.weight, mode="fan_out", nonlinearity="relu" - ) - nn.init.constant_(self.kps_score_lowres.bias, 0) - self.up_scale = 2 - - def forward(self, x): - x = self.kps_score_lowres(x) - x = layers.interpolate( - x, scale_factor=self.up_scale, mode="bilinear", align_corners=False - ) - return x - - -_ROI_KEYPOINT_PREDICTOR = {"KeypointRCNNPredictor": KeypointRCNNPredictor} - - -def make_roi_keypoint_predictor(cfg): - func = _ROI_KEYPOINT_PREDICTOR[cfg.MODEL.ROI_KEYPOINT_HEAD.PREDICTOR] - return func(cfg) \ No newline at end of file diff --git a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/modeling/roi_heads/mask_head/roi_mask_feature_extractors.py b/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/modeling/roi_heads/mask_head/roi_mask_feature_extractors.py deleted file mode 100644 index cc31baec95952afb43f0fdc873487bf2e7d1ec3f..0000000000000000000000000000000000000000 --- a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/modeling/roi_heads/mask_head/roi_mask_feature_extractors.py +++ /dev/null @@ -1,117 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. 
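(For context on the ROI keypoint predictor removed just above: `KeypointRCNNPredictor` is essentially a learned 4x upsampler over pooled ROI features — a stride-2 `ConvTranspose2d` emits one low-resolution heatmap per keypoint, and a 2x bilinear interpolation brings it to the final resolution. A minimal, self-contained sketch of that shape arithmetic follows; the class name and sizes are illustrative only and are not taken from the repo.)

```python
# Minimal sketch (illustrative only, not the deleted module): a stride-2 transposed
# convolution followed by a 2x bilinear interpolation gives 4x overall upsampling,
# with one output channel per keypoint.
import torch
import torch.nn.functional as F
from torch import nn


class TinyKeypointHead(nn.Module):  # hypothetical name
    def __init__(self, in_channels=512, num_keypoints=17, deconv_kernel=4):
        super().__init__()
        # stride-2 transposed conv doubles spatial resolution: (H, W) -> (2H, 2W)
        self.deconv = nn.ConvTranspose2d(
            in_channels, num_keypoints, deconv_kernel,
            stride=2, padding=deconv_kernel // 2 - 1,
        )

    def forward(self, x):
        x = self.deconv(x)
        # bilinear 2x upsample: (2H, 2W) -> (4H, 4W)
        return F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=False)


if __name__ == "__main__":
    pooled = torch.randn(2, 512, 14, 14)        # pooled ROI features
    print(TinyKeypointHead()(pooled).shape)     # torch.Size([2, 17, 56, 56])
```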
-from torch import nn -from torch.nn import functional as F - -from .hourglass import Hourglass -from ..box_head.roi_box_feature_extractors import ResNet50Conv5ROIFeatureExtractor -from maskrcnn_benchmark.modeling.poolers import Pooler -from maskrcnn_benchmark.layers import Conv2d -from maskrcnn_benchmark.modeling.make_layers import make_conv3x3 - - - -class MaskRCNNFPNFeatureExtractor(nn.Module): - """ - Heads for FPN for classification - """ - - def __init__(self, cfg): - """ - Arguments: - num_classes (int): number of output classes - input_size (int): number of channels of the input once it's flattened - representation_size (int): size of the intermediate representation - """ - super(MaskRCNNFPNFeatureExtractor, self).__init__() - - resolution = cfg.MODEL.ROI_MASK_HEAD.POOLER_RESOLUTION - scales = cfg.MODEL.ROI_MASK_HEAD.POOLER_SCALES - sampling_ratio = cfg.MODEL.ROI_MASK_HEAD.POOLER_SAMPLING_RATIO - pooler = Pooler( - output_size=(resolution, resolution), - scales=scales, - sampling_ratio=sampling_ratio, - ) - input_size = cfg.MODEL.BACKBONE.OUT_CHANNELS - self.pooler = pooler - - use_gn = cfg.MODEL.ROI_MASK_HEAD.USE_GN - layers = cfg.MODEL.ROI_MASK_HEAD.CONV_LAYERS - dilation = cfg.MODEL.ROI_MASK_HEAD.DILATION - - next_feature = input_size - self.blocks = [] - for layer_idx, layer_features in enumerate(layers, 1): - layer_name = "mask_fcn{}".format(layer_idx) - module = make_conv3x3(next_feature, layer_features, - dilation=dilation, stride=1, use_gn=use_gn - ) - self.add_module(layer_name, module) - next_feature = layer_features - self.blocks.append(layer_name) - - def forward(self, x, proposals): - x = self.pooler(x, proposals) - - for layer_name in self.blocks: - x = F.relu(getattr(self, layer_name)(x)) - - return x - - -class HourglassFPNFeatureExtractor(nn.Module): - """ - Heads for FPN for classification - """ - - def __init__(self, cfg): - """ - Arguments: - num_classes (int): number of output classes - input_size (int): number of channels of the input once it's flattened - representation_size (int): size of the intermediate representation - """ - super(HourglassFPNFeatureExtractor, self).__init__() - - resolution = cfg.MODEL.ROI_MASK_HEAD.POOLER_RESOLUTION - scales = cfg.MODEL.ROI_MASK_HEAD.POOLER_SCALES - sampling_ratio = cfg.MODEL.ROI_MASK_HEAD.POOLER_SAMPLING_RATIO - pooler = Pooler( - output_size=(resolution, resolution), - scales=scales, - sampling_ratio=sampling_ratio, - ) - input_size = cfg.MODEL.BACKBONE.OUT_CHANNELS - self.pooler = pooler - - use_gn = cfg.MODEL.ROI_MASK_HEAD.USE_GN - layers = cfg.MODEL.ROI_MASK_HEAD.CONV_LAYERS - scale = cfg.MODEL.ROI_MASK_HEAD.HG_SCALE - - assert input_size==layers[0] - self.blocks = [] - for layer_idx, layer_features in enumerate(layers, 1): - layer_name = "mask_hg{}".format(layer_idx) - module = Hourglass(scale, layer_features, gn=use_gn) - self.add_module(layer_name, module) - self.blocks.append(layer_name) - - def forward(self, x, proposals): - x = self.pooler(x, proposals) - - for layer_name in self.blocks: - x = F.relu(getattr(self, layer_name)(x)) - - return x - - -_ROI_MASK_FEATURE_EXTRACTORS = { - "ResNet50Conv5ROIFeatureExtractor": ResNet50Conv5ROIFeatureExtractor, - "MaskRCNNFPNFeatureExtractor": MaskRCNNFPNFeatureExtractor, - "HourglassFPNFeatureExtractor": HourglassFPNFeatureExtractor, -} - - -def make_roi_mask_feature_extractor(cfg): - func = _ROI_MASK_FEATURE_EXTRACTORS[cfg.MODEL.ROI_MASK_HEAD.FEATURE_EXTRACTOR] - return func(cfg) diff --git a/spaces/PrabhuKiranKonda/Streamlit-PDF-Assistant-Docker/Dockerfile 
b/spaces/PrabhuKiranKonda/Streamlit-PDF-Assistant-Docker/Dockerfile
deleted file mode 100644
index f79157ba2f0d0e71ec1fabf14f56d7810ff512af..0000000000000000000000000000000000000000
--- a/spaces/PrabhuKiranKonda/Streamlit-PDF-Assistant-Docker/Dockerfile
+++ /dev/null
@@ -1,8 +0,0 @@
-FROM python:3.9-slim
-WORKDIR /app
-COPY requirements.txt ./requirements.txt
-RUN pip install -r requirements.txt
-EXPOSE 8501
-COPY . /app
-ENTRYPOINT ["streamlit", "run"]
-CMD ["Home.py", "--server.port", "7860"]
\ No newline at end of file
diff --git a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/grids/diffusion/__init__.py b/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/grids/diffusion/__init__.py
deleted file mode 100644
index e5737294ae16c0de52085b8dcf6825c348f617e4..0000000000000000000000000000000000000000
--- a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/grids/diffusion/__init__.py
+++ /dev/null
@@ -1,6 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-"""Diffusion grids."""
diff --git a/spaces/ProteinDesignLab/protpardelle/ProteinMPNN/README.md b/spaces/ProteinDesignLab/protpardelle/ProteinMPNN/README.md
deleted file mode 100644
index 0d3033865f5199f7c35ad65a3c5dff3b5a9466f0..0000000000000000000000000000000000000000
--- a/spaces/ProteinDesignLab/protpardelle/ProteinMPNN/README.md
+++ /dev/null
@@ -1,111 +0,0 @@
-# ProteinMPNN
-![ProteinMPNN](https://docs.google.com/drawings/d/e/2PACX-1vTtnMBDOq8TpHIctUfGN8Vl32x5ISNcPKlxjcQJF2q70PlaH2uFlj2Ac4s3khnZqG1YxppdMr0iTyk-/pub?w=889&h=358)
-Read the [ProteinMPNN paper](https://www.biorxiv.org/content/10.1101/2022.06.03.494563v1).
-
-To run ProteinMPNN, clone this GitHub repo and install Python >= 3.0, PyTorch, and NumPy.
-
-Full protein backbone models: `vanilla_model_weights/v_48_002.pt, v_48_010.pt, v_48_020.pt, v_48_030.pt`, `soluble_model_weights/v_48_010.pt, v_48_020.pt`.
-
-CA only models: `ca_model_weights/v_48_002.pt, v_48_010.pt, v_48_020.pt`. Enable the flag `--ca_only` to use these models.
-
-Helper scripts: `helper_scripts` - helper functions to parse PDBs, assign which chains to design, fix residues, add AA bias, tie residues, etc.
-
-Code organization:
-* `protein_mpnn_run.py` - the main script to initialize and run the model.
-* `protein_mpnn_utils.py` - utility functions for the main script.
-* `examples/` - simple code examples.
-* `inputs/` - input PDB files for examples -* `outputs/` - outputs from examples -* `colab_notebooks/` - Google Colab examples -* `training/` - code and data to retrain the model ------------------------------------------------------------------------------------------------------ -Input flags for `protein_mpnn_run.py`: -``` - argparser.add_argument("--suppress_print", type=int, default=0, help="0 for False, 1 for True") - argparser.add_argument("--ca_only", action="store_true", default=False, help="Parse CA-only structures and use CA-only models (default: false)") - argparser.add_argument("--path_to_model_weights", type=str, default="", help="Path to model weights folder;") - argparser.add_argument("--model_name", type=str, default="v_48_020", help="ProteinMPNN model name: v_48_002, v_48_010, v_48_020, v_48_030; v_48_010=version with 48 edges 0.10A noise") - argparser.add_argument("--use_soluble_model", action="store_true", default=False, help="Flag to load ProteinMPNN weights trained on soluble proteins only.") - argparser.add_argument("--seed", type=int, default=0, help="If set to 0 then a random seed will be picked;") - argparser.add_argument("--save_score", type=int, default=0, help="0 for False, 1 for True; save score=-log_prob to npy files") - argparser.add_argument("--path_to_fasta", type=str, default="", help="score provided input sequence in a fasta format; e.g. GGGGGG/PPPPS/WWW for chains A, B, C sorted alphabetically and separated by /") - argparser.add_argument("--save_probs", type=int, default=0, help="0 for False, 1 for True; save MPNN predicted probabilites per position") - argparser.add_argument("--score_only", type=int, default=0, help="0 for False, 1 for True; score input backbone-sequence pairs") - argparser.add_argument("--conditional_probs_only", type=int, default=0, help="0 for False, 1 for True; output conditional probabilities p(s_i given the rest of the sequence and backbone)") - argparser.add_argument("--conditional_probs_only_backbone", type=int, default=0, help="0 for False, 1 for True; if true output conditional probabilities p(s_i given backbone)") - argparser.add_argument("--unconditional_probs_only", type=int, default=0, help="0 for False, 1 for True; output unconditional probabilities p(s_i given backbone) in one forward pass") - argparser.add_argument("--backbone_noise", type=float, default=0.00, help="Standard deviation of Gaussian noise to add to backbone atoms") - argparser.add_argument("--num_seq_per_target", type=int, default=1, help="Number of sequences to generate per target") - argparser.add_argument("--batch_size", type=int, default=1, help="Batch size; can set higher for titan, quadro GPUs, reduce this if running out of GPU memory") - argparser.add_argument("--max_length", type=int, default=200000, help="Max sequence length") - argparser.add_argument("--sampling_temp", type=str, default="0.1", help="A string of temperatures, 0.2 0.25 0.5. Sampling temperature for amino acids. Suggested values 0.1, 0.15, 0.2, 0.25, 0.3. Higher values will lead to more diversity.") - argparser.add_argument("--out_folder", type=str, help="Path to a folder to output sequences, e.g. 
/home/out/") - argparser.add_argument("--pdb_path", type=str, default='', help="Path to a single PDB to be designed") - argparser.add_argument("--pdb_path_chains", type=str, default='', help="Define which chains need to be designed for a single PDB ") - argparser.add_argument("--jsonl_path", type=str, help="Path to a folder with parsed pdb into jsonl") - argparser.add_argument("--chain_id_jsonl",type=str, default='', help="Path to a dictionary specifying which chains need to be designed and which ones are fixed, if not specied all chains will be designed.") - argparser.add_argument("--fixed_positions_jsonl", type=str, default='', help="Path to a dictionary with fixed positions") - argparser.add_argument("--omit_AAs", type=list, default='X', help="Specify which amino acids should be omitted in the generated sequence, e.g. 'AC' would omit alanine and cystine.") - argparser.add_argument("--bias_AA_jsonl", type=str, default='', help="Path to a dictionary which specifies AA composion bias if neededi, e.g. {A: -1.1, F: 0.7} would make A less likely and F more likely.") - argparser.add_argument("--bias_by_res_jsonl", default='', help="Path to dictionary with per position bias.") - argparser.add_argument("--omit_AA_jsonl", type=str, default='', help="Path to a dictionary which specifies which amino acids need to be omited from design at specific chain indices") - argparser.add_argument("--pssm_jsonl", type=str, default='', help="Path to a dictionary with pssm") - argparser.add_argument("--pssm_multi", type=float, default=0.0, help="A value between [0.0, 1.0], 0.0 means do not use pssm, 1.0 ignore MPNN predictions") - argparser.add_argument("--pssm_threshold", type=float, default=0.0, help="A value between -inf + inf to restric per position AAs") - argparser.add_argument("--pssm_log_odds_flag", type=int, default=0, help="0 for False, 1 for True") - argparser.add_argument("--pssm_bias_flag", type=int, default=0, help="0 for False, 1 for True") - argparser.add_argument("--tied_positions_jsonl", type=str, default='', help="Path to a dictionary with tied positions") - -``` ------------------------------------------------------------------------------------------------------ -For example to make a conda environment to run ProteinMPNN: -* `conda create --name mlfold` - this creates conda environment called `mlfold` -* `source activate mlfold` - this activate environment -* `conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch` - install pytorch following steps from https://pytorch.org/ ------------------------------------------------------------------------------------------------------ -These are provided `examples/`: -* `submit_example_1.sh` - simple monomer example -* `submit_example_2.sh` - simple multi-chain example -* `submit_example_3.sh` - directly from the .pdb path -* `submit_example_3_score_only.sh` - return score only (model's uncertainty) -* `submit_example_3_score_only_from_fasta.sh` - return score only (model's uncertainty) loading sequence from fasta files -* `submit_example_4.sh` - fix some residue positions -* `submit_example_4_non_fixed.sh` - specify which positions to design -* `submit_example_5.sh` - tie some positions together (symmetry) -* `submit_example_6.sh` - homooligomer example -* `submit_example_7.sh` - return sequence unconditional probabilities (PSSM like) -* `submit_example_8.sh` - add amino acid bias -* `submit_example_pssm.sh` - use PSSM bias when designing sequences 
------------------------------------------------------------------------------------------------------ -Output example: -``` ->3HTN, score=1.1705, global_score=1.2045, fixed_chains=['B'], designed_chains=['A', 'C'], model_name=v_48_020, git_hash=015ff820b9b5741ead6ba6795258f35a9c15e94b, seed=37 -NMYSYKKIGNKYIVSINNHTEIVKALNAFCKEKGILSGSINGIGAIGELTLRFFNPKTKAYDDKTFREQMEISNLTGNISSMNEQVYLHLHITVGRSDYSALAGHLLSAIQNGAGEFVVEDYSERISRTYNPDLGLNIYDFER/NMYSYKKIGNKYIVSINNHTEIVKALNAFCKEKGILSGSINGIGAIGELTLRFFNPKTKAYDDKTFREQMEISNLTGNISSMNEQVYLHLHITVGRSDYSALAGHLLSAIQNGAGEFVVEDYSERISRTYNPDLGLNIYDFER ->T=0.1, sample=1, score=0.7291, global_score=0.9330, seq_recovery=0.5736 -NMYSYKKIGNKYIVSINNHTEIVKALKKFCEEKNIKSGSVNGIGSIGSVTLKFYNLETKEEELKTFNANFEISNLTGFISMHDNKVFLDLHITIGDENFSALAGHLVSAVVNGTCELIVEDFNELVSTKYNEELGLWLLDFEK/NMYSYKKIGNKYIVSINNHTDIVTAIKKFCEDKKIKSGTINGIGQVKEVTLEFRNFETGEKEEKTFKKQFTISNLTGFISTKDGKVFLDLHITFGDENFSALAGHLISAIVDGKCELIIEDYNEEINVKYNEELGLYLLDFNK ->T=0.1, sample=2, score=0.7414, global_score=0.9355, seq_recovery=0.6075 -NMYKYKKIGNKYIVSINNHTEIVKAIKEFCKEKNIKSGTINGIGQVGKVTLRFYNPETKEYTEKTFNDNFEISNLTGFISTYKNEVFLHLHITFGKSDFSALAGHLLSAIVNGICELIVEDFKENLSMKYDEKTGLYLLDFEK/NMYKYKKIGNKYVVSINNHTEIVEALKAFCEDKKIKSGTVNGIGQVSKVTLKFFNIETKESKEKTFNKNFEISNLTGFISEINGEVFLHLHITIGDENFSALAGHLLSAVVNGEAILIVEDYKEKVNRKYNEELGLNLLDFNL -``` -* `score` - average over residues that were designed negative log probability of sampled amino acids -* `global score` - average over all residues in all chains negative log probability of sampled/fixed amino acids -* `fixed_chains` - chains that were not designed (fixed) -* `designed_chains` - chains that were redesigned -* `model_name/CA_model_name` - model name that was used to generate results, e.g. `v_48_020` -* `git_hash` - github version that was used to generate outputs -* `seed` - random seed -* `T=0.1` - temperature equal to 0.1 was used to sample sequences -* `sample` - sequence sample number 1, 2, 3...etc ------------------------------------------------------------------------------------------------------ -``` -@article{dauparas2022robust, - title={Robust deep learning--based protein sequence design using ProteinMPNN}, - author={Dauparas, Justas and Anishchenko, Ivan and Bennett, Nathaniel and Bai, Hua and Ragotte, Robert J and Milles, Lukas F and Wicky, Basile IM and Courbet, Alexis and de Haas, Rob J and Bethel, Neville and others}, - journal={Science}, - volume={378}, - number={6615}, - pages={49--56}, - year={2022}, - publisher={American Association for the Advancement of Science} -} -``` ------------------------------------------------------------------------------------------------------ diff --git a/spaces/RMXK/RVC_HFF/infer/lib/uvr5_pack/lib_v5/layers_new.py b/spaces/RMXK/RVC_HFF/infer/lib/uvr5_pack/lib_v5/layers_new.py deleted file mode 100644 index 44153b6a23399c6938affc61c71919eaa172bcee..0000000000000000000000000000000000000000 --- a/spaces/RMXK/RVC_HFF/infer/lib/uvr5_pack/lib_v5/layers_new.py +++ /dev/null @@ -1,125 +0,0 @@ -import torch -import torch.nn.functional as F -from torch import nn - -from . 
import spec_utils - - -class Conv2DBNActiv(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU): - super(Conv2DBNActiv, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - nin, - nout, - kernel_size=ksize, - stride=stride, - padding=pad, - dilation=dilation, - bias=False, - ), - nn.BatchNorm2d(nout), - activ(), - ) - - def __call__(self, x): - return self.conv(x) - - -class Encoder(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.LeakyReLU): - super(Encoder, self).__init__() - self.conv1 = Conv2DBNActiv(nin, nout, ksize, stride, pad, activ=activ) - self.conv2 = Conv2DBNActiv(nout, nout, ksize, 1, pad, activ=activ) - - def __call__(self, x): - h = self.conv1(x) - h = self.conv2(h) - - return h - - -class Decoder(nn.Module): - def __init__( - self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.ReLU, dropout=False - ): - super(Decoder, self).__init__() - self.conv1 = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ) - # self.conv2 = Conv2DBNActiv(nout, nout, ksize, 1, pad, activ=activ) - self.dropout = nn.Dropout2d(0.1) if dropout else None - - def __call__(self, x, skip=None): - x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True) - - if skip is not None: - skip = spec_utils.crop_center(skip, x) - x = torch.cat([x, skip], dim=1) - - h = self.conv1(x) - # h = self.conv2(h) - - if self.dropout is not None: - h = self.dropout(h) - - return h - - -class ASPPModule(nn.Module): - def __init__(self, nin, nout, dilations=(4, 8, 12), activ=nn.ReLU, dropout=False): - super(ASPPModule, self).__init__() - self.conv1 = nn.Sequential( - nn.AdaptiveAvgPool2d((1, None)), - Conv2DBNActiv(nin, nout, 1, 1, 0, activ=activ), - ) - self.conv2 = Conv2DBNActiv(nin, nout, 1, 1, 0, activ=activ) - self.conv3 = Conv2DBNActiv( - nin, nout, 3, 1, dilations[0], dilations[0], activ=activ - ) - self.conv4 = Conv2DBNActiv( - nin, nout, 3, 1, dilations[1], dilations[1], activ=activ - ) - self.conv5 = Conv2DBNActiv( - nin, nout, 3, 1, dilations[2], dilations[2], activ=activ - ) - self.bottleneck = Conv2DBNActiv(nout * 5, nout, 1, 1, 0, activ=activ) - self.dropout = nn.Dropout2d(0.1) if dropout else None - - def forward(self, x): - _, _, h, w = x.size() - feat1 = F.interpolate( - self.conv1(x), size=(h, w), mode="bilinear", align_corners=True - ) - feat2 = self.conv2(x) - feat3 = self.conv3(x) - feat4 = self.conv4(x) - feat5 = self.conv5(x) - out = torch.cat((feat1, feat2, feat3, feat4, feat5), dim=1) - out = self.bottleneck(out) - - if self.dropout is not None: - out = self.dropout(out) - - return out - - -class LSTMModule(nn.Module): - def __init__(self, nin_conv, nin_lstm, nout_lstm): - super(LSTMModule, self).__init__() - self.conv = Conv2DBNActiv(nin_conv, 1, 1, 1, 0) - self.lstm = nn.LSTM( - input_size=nin_lstm, hidden_size=nout_lstm // 2, bidirectional=True - ) - self.dense = nn.Sequential( - nn.Linear(nout_lstm, nin_lstm), nn.BatchNorm1d(nin_lstm), nn.ReLU() - ) - - def forward(self, x): - N, _, nbins, nframes = x.size() - h = self.conv(x)[:, 0] # N, nbins, nframes - h = h.permute(2, 0, 1) # nframes, N, nbins - h, _ = self.lstm(h) - h = self.dense(h.reshape(-1, h.size()[-1])) # nframes * N, nbins - h = h.reshape(nframes, N, 1, nbins) - h = h.permute(1, 2, 3, 0) - - return h diff --git a/spaces/Rakot2223/faster-whisper-webui/src/whisper/fasterWhisperContainer.py b/spaces/Rakot2223/faster-whisper-webui/src/whisper/fasterWhisperContainer.py deleted file mode 100644 index 
ccb5d3cd6360094636e7e9edfc1310019a548433..0000000000000000000000000000000000000000 --- a/spaces/Rakot2223/faster-whisper-webui/src/whisper/fasterWhisperContainer.py +++ /dev/null @@ -1,200 +0,0 @@ -import os -from typing import List, Union - -from faster_whisper import WhisperModel, download_model -from src.config import ModelConfig, VadInitialPromptMode -from src.hooks.progressListener import ProgressListener -from src.languages import get_language_from_name -from src.modelCache import ModelCache -from src.whisper.abstractWhisperContainer import AbstractWhisperCallback, AbstractWhisperContainer -from src.utils import format_timestamp - -class FasterWhisperContainer(AbstractWhisperContainer): - def __init__(self, model_name: str, device: str = None, compute_type: str = "float16", - download_root: str = None, - cache: ModelCache = None, models: List[ModelConfig] = []): - super().__init__(model_name, device, compute_type, download_root, cache, models) - - def ensure_downloaded(self): - """ - Ensure that the model is downloaded. This is useful if you want to ensure that the model is downloaded before - passing the container to a subprocess. - """ - model_config = self._get_model_config() - - if os.path.isdir(model_config.url): - model_config.path = model_config.url - else: - model_config.path = download_model(model_config.url, output_dir=self.download_root) - - def _get_model_config(self) -> ModelConfig: - """ - Get the model configuration for the model. - """ - for model in self.models: - if model.name == self.model_name: - return model - return None - - def _create_model(self): - print("Loading faster whisper model " + self.model_name + " for device " + str(self.device)) - model_config = self._get_model_config() - - if model_config.type == "whisper" and model_config.url not in ["tiny", "base", "small", "medium", "large", "large-v2"]: - raise Exception("FasterWhisperContainer does not yet support Whisper models. Use ct2-transformers-converter to convert the model to a faster-whisper model.") - - device = self.device - - if (device is None): - device = "auto" - - model = WhisperModel(model_config.url, device=device, compute_type=self.compute_type) - return model - - def create_callback(self, language: str = None, task: str = None, initial_prompt: str = None, - initial_prompt_mode: VadInitialPromptMode = VadInitialPromptMode.PREPREND_FIRST_SEGMENT, - **decodeOptions: dict) -> AbstractWhisperCallback: - """ - Create a WhisperCallback object that can be used to transcript audio files. - - Parameters - ---------- - language: str - The target language of the transcription. If not specified, the language will be inferred from the audio content. - task: str - The task - either translate or transcribe. - initial_prompt: str - The initial prompt to use for the transcription. - initial_prompt_mode: VadInitialPromptMode - The mode to use for the initial prompt. If set to PREPEND_FIRST_SEGMENT, the initial prompt will be prepended to the first segment of audio. - If set to PREPEND_ALL_SEGMENTS, the initial prompt will be prepended to all segments of audio. - decodeOptions: dict - Additional options to pass to the decoder. Must be pickleable. - - Returns - ------- - A WhisperCallback object. 
- """ - return FasterWhisperCallback(self, language=language, task=task, initial_prompt=initial_prompt, initial_prompt_mode=initial_prompt_mode, **decodeOptions) - -class FasterWhisperCallback(AbstractWhisperCallback): - def __init__(self, model_container: FasterWhisperContainer, language: str = None, task: str = None, - initial_prompt: str = None, initial_prompt_mode: VadInitialPromptMode=VadInitialPromptMode.PREPREND_FIRST_SEGMENT, - **decodeOptions: dict): - self.model_container = model_container - self.language = language - self.task = task - self.initial_prompt = initial_prompt - self.initial_prompt_mode = initial_prompt_mode - self.decodeOptions = decodeOptions - - self._printed_warning = False - - def invoke(self, audio, segment_index: int, prompt: str, detected_language: str, progress_listener: ProgressListener = None): - """ - Peform the transcription of the given audio file or data. - - Parameters - ---------- - audio: Union[str, np.ndarray, torch.Tensor] - The audio file to transcribe, or the audio data as a numpy array or torch tensor. - segment_index: int - The target language of the transcription. If not specified, the language will be inferred from the audio content. - task: str - The task - either translate or transcribe. - progress_listener: ProgressListener - A callback to receive progress updates. - """ - model: WhisperModel = self.model_container.get_model() - language_code = self._lookup_language_code(self.language) if self.language else None - - # Copy decode options and remove options that are not supported by faster-whisper - decodeOptions = self.decodeOptions.copy() - verbose = decodeOptions.pop("verbose", None) - - logprob_threshold = decodeOptions.pop("logprob_threshold", None) - - patience = decodeOptions.pop("patience", None) - length_penalty = decodeOptions.pop("length_penalty", None) - suppress_tokens = decodeOptions.pop("suppress_tokens", None) - - if (decodeOptions.pop("fp16", None) is not None): - if not self._printed_warning: - print("WARNING: fp16 option is ignored by faster-whisper - use compute_type instead.") - self._printed_warning = True - - # Fix up decode options - if (logprob_threshold is not None): - decodeOptions["log_prob_threshold"] = logprob_threshold - - decodeOptions["patience"] = float(patience) if patience is not None else 1.0 - decodeOptions["length_penalty"] = float(length_penalty) if length_penalty is not None else 1.0 - - # See if supress_tokens is a string - if so, convert it to a list of ints - decodeOptions["suppress_tokens"] = self._split_suppress_tokens(suppress_tokens) - - initial_prompt = self._get_initial_prompt(self.initial_prompt, self.initial_prompt_mode, prompt, segment_index) - - segments_generator, info = model.transcribe(audio, \ - language=language_code if language_code else detected_language, task=self.task, \ - initial_prompt=initial_prompt, \ - **decodeOptions - ) - - segments = [] - - for segment in segments_generator: - segments.append(segment) - - if progress_listener is not None: - progress_listener.on_progress(segment.end, info.duration) - if verbose: - print("[{}->{}] {}".format(format_timestamp(segment.start, True), format_timestamp(segment.end, True), - segment.text)) - - text = " ".join([segment.text for segment in segments]) - - # Convert the segments to a format that is easier to serialize - whisper_segments = [{ - "text": segment.text, - "start": segment.start, - "end": segment.end, - - # Extra fields added by faster-whisper - "words": [{ - "start": word.start, - "end": word.end, - "word": word.word, - 
"probability": word.probability - } for word in (segment.words if segment.words is not None else []) ] - } for segment in segments] - - result = { - "segments": whisper_segments, - "text": text, - "language": info.language if info else None, - - # Extra fields added by faster-whisper - "language_probability": info.language_probability if info else None, - "duration": info.duration if info else None - } - - if progress_listener is not None: - progress_listener.on_finished() - return result - - def _split_suppress_tokens(self, suppress_tokens: Union[str, List[int]]): - if (suppress_tokens is None): - return None - if (isinstance(suppress_tokens, list)): - return suppress_tokens - - return [int(token) for token in suppress_tokens.split(",")] - - def _lookup_language_code(self, language: str): - language = get_language_from_name(language) - - if language is None: - raise ValueError("Invalid language: " + language) - - return language.code diff --git a/spaces/Realcat/image-matching-webui/third_party/d2net/lib/pyramid.py b/spaces/Realcat/image-matching-webui/third_party/d2net/lib/pyramid.py deleted file mode 100644 index 938a775f739a446f5f48b2040ce6c4ee644f20b1..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/d2net/lib/pyramid.py +++ /dev/null @@ -1,129 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - -from lib.exceptions import EmptyTensorError -from lib.utils import interpolate_dense_features, upscale_positions - - -def process_multiscale(image, model, scales=[.5, 1, 2]): - b, _, h_init, w_init = image.size() - device = image.device - assert(b == 1) - - all_keypoints = torch.zeros([3, 0]) - all_descriptors = torch.zeros([ - model.dense_feature_extraction.num_channels, 0 - ]) - all_scores = torch.zeros(0) - - previous_dense_features = None - banned = None - for idx, scale in enumerate(scales): - current_image = F.interpolate( - image, scale_factor=scale, - mode='bilinear', align_corners=True - ) - _, _, h_level, w_level = current_image.size() - - dense_features = model.dense_feature_extraction(current_image) - del current_image - - _, _, h, w = dense_features.size() - - # Sum the feature maps. - if previous_dense_features is not None: - dense_features += F.interpolate( - previous_dense_features, size=[h, w], - mode='bilinear', align_corners=True - ) - del previous_dense_features - - # Recover detections. - detections = model.detection(dense_features) - if banned is not None: - banned = F.interpolate(banned.float(), size=[h, w]).bool() - detections = torch.min(detections, ~banned) - banned = torch.max( - torch.max(detections, dim=1)[0].unsqueeze(1), banned - ) - else: - banned = torch.max(detections, dim=1)[0].unsqueeze(1) - fmap_pos = torch.nonzero(detections[0].cpu()).t() - del detections - - # Recover displacements. 
- displacements = model.localization(dense_features)[0].cpu() - displacements_i = displacements[ - 0, fmap_pos[0, :], fmap_pos[1, :], fmap_pos[2, :] - ] - displacements_j = displacements[ - 1, fmap_pos[0, :], fmap_pos[1, :], fmap_pos[2, :] - ] - del displacements - - mask = torch.min( - torch.abs(displacements_i) < 0.5, - torch.abs(displacements_j) < 0.5 - ) - fmap_pos = fmap_pos[:, mask] - valid_displacements = torch.stack([ - displacements_i[mask], - displacements_j[mask] - ], dim=0) - del mask, displacements_i, displacements_j - - fmap_keypoints = fmap_pos[1 :, :].float() + valid_displacements - del valid_displacements - - try: - raw_descriptors, _, ids = interpolate_dense_features( - fmap_keypoints.to(device), - dense_features[0] - ) - except EmptyTensorError: - continue - fmap_pos = fmap_pos.to(device) - fmap_keypoints = fmap_keypoints.to(device) - fmap_pos = fmap_pos[:, ids] - fmap_keypoints = fmap_keypoints[:, ids] - del ids - - keypoints = upscale_positions(fmap_keypoints, scaling_steps=2) - del fmap_keypoints - - descriptors = F.normalize(raw_descriptors, dim=0).cpu() - del raw_descriptors - - keypoints[0, :] *= h_init / h_level - keypoints[1, :] *= w_init / w_level - - fmap_pos = fmap_pos.cpu() - keypoints = keypoints.cpu() - - keypoints = torch.cat([ - keypoints, - torch.ones([1, keypoints.size(1)]) * 1 / scale, - ], dim=0) - - scores = dense_features[ - 0, fmap_pos[0, :], fmap_pos[1, :], fmap_pos[2, :] - ].cpu() / (idx + 1) - del fmap_pos - - all_keypoints = torch.cat([all_keypoints, keypoints], dim=1) - all_descriptors = torch.cat([all_descriptors, descriptors], dim=1) - all_scores = torch.cat([all_scores, scores], dim=0) - del keypoints, descriptors - - previous_dense_features = dense_features - del dense_features - del previous_dense_features, banned - - keypoints = all_keypoints.t().detach().numpy() - del all_keypoints - scores = all_scores.detach().numpy() - del all_scores - descriptors = all_descriptors.t().detach().numpy() - del all_descriptors - return keypoints, scores, descriptors diff --git a/spaces/Realcat/image-matching-webui/third_party/r2d2/nets/reliability_loss.py b/spaces/Realcat/image-matching-webui/third_party/r2d2/nets/reliability_loss.py deleted file mode 100644 index e560d1ea1b4dc27d81031c62cc4c0aed9161cc67..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/r2d2/nets/reliability_loss.py +++ /dev/null @@ -1,59 +0,0 @@ -# Copyright 2019-present NAVER Corp. -# CC BY-NC-SA 3.0 -# Available only for non-commercial use - -import pdb -import torch.nn as nn -import torch.nn.functional as F - -from nets.ap_loss import APLoss - - -class PixelAPLoss(nn.Module): - """Computes the pixel-wise AP loss: - Given two images and ground-truth optical flow, computes the AP per pixel. 
- - feat1: (B, C, H, W) pixel-wise features extracted from img1 - feat2: (B, C, H, W) pixel-wise features extracted from img2 - aflow: (B, 2, H, W) absolute flow: aflow[...,y1,x1] = x2,y2 - """ - - def __init__(self, sampler, nq=20): - nn.Module.__init__(self) - self.aploss = APLoss(nq, min=0, max=1, euc=False) - self.name = "pixAP" - self.sampler = sampler - - def loss_from_ap(self, ap, rel): - return 1 - ap - - def forward(self, descriptors, aflow, **kw): - # subsample things - scores, gt, msk, qconf = self.sampler(descriptors, kw.get("reliability"), aflow) - - # compute pixel-wise AP - n = qconf.numel() - if n == 0: - return 0 - scores, gt = scores.view(n, -1), gt.view(n, -1) - ap = self.aploss(scores, gt).view(msk.shape) - - pixel_loss = self.loss_from_ap(ap, qconf) - - loss = pixel_loss[msk].mean() - return loss - - -class ReliabilityLoss(PixelAPLoss): - """same than PixelAPLoss, but also train a pixel-wise confidence - that this pixel is going to have a good AP. - """ - - def __init__(self, sampler, base=0.5, **kw): - PixelAPLoss.__init__(self, sampler, **kw) - assert 0 <= base < 1 - self.base = base - self.name = "reliability" - - def loss_from_ap(self, ap, rel): - return 1 - ap * rel - (1 - rel) * self.base diff --git a/spaces/Ritori/TTS_Yui/stft.py b/spaces/Ritori/TTS_Yui/stft.py deleted file mode 100644 index edfc44ae8bdec2887920a1ffab012432ca09a33d..0000000000000000000000000000000000000000 --- a/spaces/Ritori/TTS_Yui/stft.py +++ /dev/null @@ -1,141 +0,0 @@ -""" -BSD 3-Clause License - -Copyright (c) 2017, Prem Seetharaman -All rights reserved. - -* Redistribution and use in source and binary forms, with or without - modification, are permitted provided that the following conditions are met: - -* Redistributions of source code must retain the above copyright notice, - this list of conditions and the following disclaimer. - -* Redistributions in binary form must reproduce the above copyright notice, this - list of conditions and the following disclaimer in the - documentation and/or other materials provided with the distribution. - -* Neither the name of the copyright holder nor the names of its - contributors may be used to endorse or promote products derived from this - software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND -ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED -WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE -DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR -ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES -(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; -LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON -ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS -SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
-""" - -import torch -import numpy as np -import torch.nn.functional as F -from torch.autograd import Variable -from scipy.signal import get_window -from librosa.util import pad_center, tiny -from audio_processing import window_sumsquare - - -class STFT(torch.nn.Module): - """adapted from Prem Seetharaman's https://github.com/pseeth/pytorch-stft""" - def __init__(self, filter_length=800, hop_length=200, win_length=800, - window='hann'): - super(STFT, self).__init__() - self.filter_length = filter_length - self.hop_length = hop_length - self.win_length = win_length - self.window = window - self.forward_transform = None - scale = self.filter_length / self.hop_length - fourier_basis = np.fft.fft(np.eye(self.filter_length)) - - cutoff = int((self.filter_length / 2 + 1)) - fourier_basis = np.vstack([np.real(fourier_basis[:cutoff, :]), - np.imag(fourier_basis[:cutoff, :])]) - - forward_basis = torch.FloatTensor(fourier_basis[:, None, :]) - inverse_basis = torch.FloatTensor( - np.linalg.pinv(scale * fourier_basis).T[:, None, :]) - - if window is not None: - assert(filter_length >= win_length) - # get window and zero center pad it to filter_length - fft_window = get_window(window, win_length, fftbins=True) - fft_window = pad_center(fft_window, filter_length) - fft_window = torch.from_numpy(fft_window).float() - - # window the bases - forward_basis *= fft_window - inverse_basis *= fft_window - - self.register_buffer('forward_basis', forward_basis.float()) - self.register_buffer('inverse_basis', inverse_basis.float()) - - def transform(self, input_data): - num_batches = input_data.size(0) - num_samples = input_data.size(1) - - self.num_samples = num_samples - - # similar to librosa, reflect-pad the input - input_data = input_data.view(num_batches, 1, num_samples) - input_data = F.pad( - input_data.unsqueeze(1), - (int(self.filter_length / 2), int(self.filter_length / 2), 0, 0), - mode='reflect') - input_data = input_data.squeeze(1) - - forward_transform = F.conv1d( - input_data, - Variable(self.forward_basis, requires_grad=False), - stride=self.hop_length, - padding=0) - - cutoff = int((self.filter_length / 2) + 1) - real_part = forward_transform[:, :cutoff, :] - imag_part = forward_transform[:, cutoff:, :] - - magnitude = torch.sqrt(real_part**2 + imag_part**2) - phase = torch.autograd.Variable( - torch.atan2(imag_part.data, real_part.data)) - - return magnitude, phase - - def inverse(self, magnitude, phase): - recombine_magnitude_phase = torch.cat( - [magnitude*torch.cos(phase), magnitude*torch.sin(phase)], dim=1) - - inverse_transform = F.conv_transpose1d( - recombine_magnitude_phase, - Variable(self.inverse_basis, requires_grad=False), - stride=self.hop_length, - padding=0) - - if self.window is not None: - window_sum = window_sumsquare( - self.window, magnitude.size(-1), hop_length=self.hop_length, - win_length=self.win_length, n_fft=self.filter_length, - dtype=np.float32) - # remove modulation effects - approx_nonzero_indices = torch.from_numpy( - np.where(window_sum > tiny(window_sum))[0]) - window_sum = torch.autograd.Variable( - torch.from_numpy(window_sum), requires_grad=False) - window_sum = window_sum.cuda() if magnitude.is_cuda else window_sum - inverse_transform[:, :, approx_nonzero_indices] /= window_sum[approx_nonzero_indices] - - # scale by hop ratio - inverse_transform *= float(self.filter_length) / self.hop_length - - inverse_transform = inverse_transform[:, :, int(self.filter_length/2):] - inverse_transform = inverse_transform[:, :, :-int(self.filter_length/2):] - - return 
inverse_transform - - def forward(self, input_data): - self.magnitude, self.phase = self.transform(input_data) - reconstruction = self.inverse(self.magnitude, self.phase) - return reconstruction diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/ops/corner_pool.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/ops/corner_pool.py deleted file mode 100644 index a33d798b43d405e4c86bee4cd6389be21ca9c637..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/ops/corner_pool.py +++ /dev/null @@ -1,161 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from torch import nn -from torch.autograd import Function - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', [ - 'top_pool_forward', 'top_pool_backward', 'bottom_pool_forward', - 'bottom_pool_backward', 'left_pool_forward', 'left_pool_backward', - 'right_pool_forward', 'right_pool_backward' -]) - -_mode_dict = {'top': 0, 'bottom': 1, 'left': 2, 'right': 3} - - -class TopPoolFunction(Function): - - @staticmethod - def symbolic(g, input): - output = g.op( - 'mmcv::MMCVCornerPool', input, mode_i=int(_mode_dict['top'])) - return output - - @staticmethod - def forward(ctx, input): - output = ext_module.top_pool_forward(input) - ctx.save_for_backward(input) - return output - - @staticmethod - def backward(ctx, grad_output): - input, = ctx.saved_tensors - output = ext_module.top_pool_backward(input, grad_output) - return output - - -class BottomPoolFunction(Function): - - @staticmethod - def symbolic(g, input): - output = g.op( - 'mmcv::MMCVCornerPool', input, mode_i=int(_mode_dict['bottom'])) - return output - - @staticmethod - def forward(ctx, input): - output = ext_module.bottom_pool_forward(input) - ctx.save_for_backward(input) - return output - - @staticmethod - def backward(ctx, grad_output): - input, = ctx.saved_tensors - output = ext_module.bottom_pool_backward(input, grad_output) - return output - - -class LeftPoolFunction(Function): - - @staticmethod - def symbolic(g, input): - output = g.op( - 'mmcv::MMCVCornerPool', input, mode_i=int(_mode_dict['left'])) - return output - - @staticmethod - def forward(ctx, input): - output = ext_module.left_pool_forward(input) - ctx.save_for_backward(input) - return output - - @staticmethod - def backward(ctx, grad_output): - input, = ctx.saved_tensors - output = ext_module.left_pool_backward(input, grad_output) - return output - - -class RightPoolFunction(Function): - - @staticmethod - def symbolic(g, input): - output = g.op( - 'mmcv::MMCVCornerPool', input, mode_i=int(_mode_dict['right'])) - return output - - @staticmethod - def forward(ctx, input): - output = ext_module.right_pool_forward(input) - ctx.save_for_backward(input) - return output - - @staticmethod - def backward(ctx, grad_output): - input, = ctx.saved_tensors - output = ext_module.right_pool_backward(input, grad_output) - return output - - -class CornerPool(nn.Module): - """Corner Pooling. - - Corner Pooling is a new type of pooling layer that helps a - convolutional network better localize corners of bounding boxes. - - Please refer to https://arxiv.org/abs/1808.01244 for more details. - Code is modified from https://github.com/princeton-vl/CornerNet-Lite. - - Args: - mode(str): Pooling orientation for the pooling layer - - - 'bottom': Bottom Pooling - - 'left': Left Pooling - - 'right': Right Pooling - - 'top': Top Pooling - - Returns: - Feature map after pooling. 
- """ - - pool_functions = { - 'bottom': BottomPoolFunction, - 'left': LeftPoolFunction, - 'right': RightPoolFunction, - 'top': TopPoolFunction, - } - - cummax_dim_flip = { - 'bottom': (2, False), - 'left': (3, True), - 'right': (3, False), - 'top': (2, True), - } - - def __init__(self, mode): - super(CornerPool, self).__init__() - assert mode in self.pool_functions - self.mode = mode - self.corner_pool = self.pool_functions[mode] - - def forward(self, x): - if torch.__version__ != 'parrots' and torch.__version__ >= '1.5.0': - if torch.onnx.is_in_onnx_export(): - assert torch.__version__ >= '1.7.0', \ - 'When `cummax` serves as an intermediate component whose '\ - 'outputs is used as inputs for another modules, it\'s '\ - 'expected that pytorch version must be >= 1.7.0, '\ - 'otherwise Error appears like: `RuntimeError: tuple '\ - 'appears in op that does not forward tuples, unsupported '\ - 'kind: prim::PythonOp`.' - - dim, flip = self.cummax_dim_flip[self.mode] - if flip: - x = x.flip(dim) - pool_tensor, _ = torch.cummax(x, dim=dim) - if flip: - pool_tensor = pool_tensor.flip(dim) - return pool_tensor - else: - return self.corner_pool.apply(x) diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/datasets/pipelines/test_time_aug.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/datasets/pipelines/test_time_aug.py deleted file mode 100644 index b6226e040499882c99f15594c66ebf3d07829168..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/datasets/pipelines/test_time_aug.py +++ /dev/null @@ -1,119 +0,0 @@ -import warnings - -import mmcv - -from ..builder import PIPELINES -from .compose import Compose - - -@PIPELINES.register_module() -class MultiScaleFlipAug(object): - """Test-time augmentation with multiple scales and flipping. - - An example configuration is as followed: - - .. code-block:: - - img_scale=[(1333, 400), (1333, 800)], - flip=True, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ] - - After MultiScaleFLipAug with above configuration, the results are wrapped - into lists of the same length as followed: - - .. code-block:: - - dict( - img=[...], - img_shape=[...], - scale=[(1333, 400), (1333, 400), (1333, 800), (1333, 800)] - flip=[False, True, False, True] - ... - ) - - Args: - transforms (list[dict]): Transforms to apply in each augmentation. - img_scale (tuple | list[tuple] | None): Images scales for resizing. - scale_factor (float | list[float] | None): Scale factors for resizing. - flip (bool): Whether apply flip augmentation. Default: False. - flip_direction (str | list[str]): Flip augmentation directions, - options are "horizontal" and "vertical". If flip_direction is list, - multiple flip augmentations will be applied. - It has no effect when flip == False. Default: "horizontal". 
- """ - - def __init__(self, - transforms, - img_scale=None, - scale_factor=None, - flip=False, - flip_direction='horizontal'): - self.transforms = Compose(transforms) - assert (img_scale is None) ^ (scale_factor is None), ( - 'Must have but only one variable can be setted') - if img_scale is not None: - self.img_scale = img_scale if isinstance(img_scale, - list) else [img_scale] - self.scale_key = 'scale' - assert mmcv.is_list_of(self.img_scale, tuple) - else: - self.img_scale = scale_factor if isinstance( - scale_factor, list) else [scale_factor] - self.scale_key = 'scale_factor' - - self.flip = flip - self.flip_direction = flip_direction if isinstance( - flip_direction, list) else [flip_direction] - assert mmcv.is_list_of(self.flip_direction, str) - if not self.flip and self.flip_direction != ['horizontal']: - warnings.warn( - 'flip_direction has no effect when flip is set to False') - if (self.flip - and not any([t['type'] == 'RandomFlip' for t in transforms])): - warnings.warn( - 'flip has no effect when RandomFlip is not in transforms') - - def __call__(self, results): - """Call function to apply test time augment transforms on results. - - Args: - results (dict): Result dict contains the data to transform. - - Returns: - dict[str: list]: The augmented data, where each value is wrapped - into a list. - """ - - aug_data = [] - flip_args = [(False, None)] - if self.flip: - flip_args += [(True, direction) - for direction in self.flip_direction] - for scale in self.img_scale: - for flip, direction in flip_args: - _results = results.copy() - _results[self.scale_key] = scale - _results['flip'] = flip - _results['flip_direction'] = direction - data = self.transforms(_results) - aug_data.append(data) - # list of dict to dict of list - aug_data_dict = {key: [] for key in aug_data[0]} - for data in aug_data: - for key, val in data.items(): - aug_data_dict[key].append(val) - return aug_data_dict - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(transforms={self.transforms}, ' - repr_str += f'img_scale={self.img_scale}, flip={self.flip}, ' - repr_str += f'flip_direction={self.flip_direction})' - return repr_str diff --git a/spaces/SERER/VITS-Umamusume-voice-synthesizer/ONNXVITS_to_onnx.py b/spaces/SERER/VITS-Umamusume-voice-synthesizer/ONNXVITS_to_onnx.py deleted file mode 100644 index 846e39849535ed08accb10d7001f2431a851d372..0000000000000000000000000000000000000000 --- a/spaces/SERER/VITS-Umamusume-voice-synthesizer/ONNXVITS_to_onnx.py +++ /dev/null @@ -1,31 +0,0 @@ -import ONNXVITS_models -import utils -from text import text_to_sequence -import torch -import commons - -def get_text(text, hps): - text_norm = text_to_sequence(text, hps.symbols, hps.data.text_cleaners) - if hps.data.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = torch.LongTensor(text_norm) - return text_norm - -hps = utils.get_hparams_from_file("../vits/pretrained_models/uma87.json") -symbols = hps.symbols -net_g = ONNXVITS_models.SynthesizerTrn( - len(symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - **hps.model) -_ = net_g.eval() -_ = utils.load_checkpoint("../vits/pretrained_models/uma_1153000.pth", net_g) - -text1 = get_text("ありがとうございます。", hps) -stn_tst = text1 -with torch.no_grad(): - x_tst = stn_tst.unsqueeze(0) - x_tst_lengths = torch.LongTensor([stn_tst.size(0)]) - sid = torch.tensor([0]) - o = net_g(x_tst, x_tst_lengths, sid=sid, noise_scale=.667, noise_scale_w=0.8, length_scale=1) \ No 
newline at end of file diff --git a/spaces/SQSora/VITS-Umamusume-voice-synthesizer/ONNXVITS_modules.py b/spaces/SQSora/VITS-Umamusume-voice-synthesizer/ONNXVITS_modules.py deleted file mode 100644 index 6cf676ce37c1eaf8428c4094e749f862182cb0c3..0000000000000000000000000000000000000000 --- a/spaces/SQSora/VITS-Umamusume-voice-synthesizer/ONNXVITS_modules.py +++ /dev/null @@ -1,390 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -import commons -from commons import init_weights, get_padding -from ONNXVITS_transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." - - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential( - nn.ReLU(), - nn.Dropout(p_dropout)) - for _ in range(n_layers-1): - self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size ** i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size, - groups=channels, dilation=dilation, padding=padding - )) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = 
F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0): - super(WN, self).__init__() - assert(kernel_size % 2 == 1) - self.hidden_channels =hidden_channels - self.kernel_size = kernel_size, - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight') - - for i in range(n_layers): - dilation = dilation_rate ** i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size, - dilation=dilation, padding=padding) - in_layer = torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply( - x_in, - g_l, - n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:,:self.hidden_channels,:] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:,self.hidden_channels:,:] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x, 
x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels,1)) - self.logs = nn.Parameter(torch.zeros(channels,1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1,2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * 
x_mask - x = torch.cat([x0, x1], 1) - return x - - -class ConvFlow(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.) - self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] - - unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_derivatives = h[..., 2 * self.num_bins:] - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/SalahZa/Tunisian-ASR-v0/app.py b/spaces/SalahZa/Tunisian-ASR-v0/app.py deleted file mode 100644 index df9e5d6ba33e7595ba53c2a037d7f8df2fc78f4f..0000000000000000000000000000000000000000 --- a/spaces/SalahZa/Tunisian-ASR-v0/app.py +++ /dev/null @@ -1,383 +0,0 @@ -import os -import sys -import torch -import logging -import speechbrain as sb -from speechbrain.utils.distributed import run_on_main -from hyperpyyaml import load_hyperpyyaml -from pathlib import Path -import torchaudio.transforms as T -import torchaudio -import numpy as np - -from pyctcdecode import build_ctcdecoder -hparams_file, run_opts, overrides = sb.parse_arguments(["wavlm_partly_frozen.yaml"]) - -# If distributed_launch=True then -# create ddp_group with the right communication protocol -sb.utils.distributed.ddp_init_group(run_opts) - -with open(hparams_file) as fin: - hparams = load_hyperpyyaml(fin, overrides) - -# Create experiment directory -sb.create_experiment_directory( - experiment_directory=hparams["output_folder"], - hyperparams_to_save=hparams_file, - overrides=overrides, -) -def read_labels_file(labels_file): - with open(labels_file, "r") as lf: - lines = lf.read().splitlines() - division = "===" - numbers = {} - for line in lines : - if division in line : - break - string, number = line.split("=>") - number = int(number) - string = string[1:-2] - numbers[number] = string - return [numbers[x] for x in range(len(numbers))] -labels = read_labels_file(os.path.join(hparams["save_folder"], "label_encoder.txt")) -print(labels) -labels = [""] + labels[1:] -print(len(labels)) - -# Dataset prep (parsing Librispeech) - -resampler_8000 = T.Resample(8000, 16000, dtype=torch.float) - -resampler_44100 =T.Resample(44100, 16000, dtype=torch.float) -resampler_48000 =T.Resample(48000, 16000, dtype=torch.float) - - -resamplers = {"8000": resampler_8000, "44100":resampler_44100, "48000": resampler_48000} -def dataio_prepare(hparams): - 
"""This function prepares the datasets to be used in the brain class. - It also defines the data processing pipeline through user-defined functions.""" - data_folder = hparams["data_folder"] - - train_data = sb.dataio.dataset.DynamicItemDataset.from_csv( - csv_path=hparams["train_csv"], replacements={"data_root": data_folder}, - ) - - if hparams["sorting"] == "ascending": - # we sort training data to speed up training and get better results. - train_data = train_data.filtered_sorted(sort_key="duration") - # when sorting do not shuffle in dataloader ! otherwise is pointless - hparams["train_dataloader_opts"]["shuffle"] = False - - elif hparams["sorting"] == "descending": - train_data = train_data.filtered_sorted( - sort_key="duration", reverse=True - ) - # when sorting do not shuffle in dataloader ! otherwise is pointless - hparams["train_dataloader_opts"]["shuffle"] = False - - elif hparams["sorting"] == "random": - pass - - else: - raise NotImplementedError( - "sorting must be random, ascending or descending" - ) - - valid_data = sb.dataio.dataset.DynamicItemDataset.from_csv( - csv_path=hparams["valid_csv"], replacements={"data_root": data_folder}, - ) - valid_data = valid_data.filtered_sorted(sort_key="duration") - - # test is separate - test_datasets = {} - for csv_file in hparams["test_csv"]: - name = Path(csv_file).stem - test_datasets[name] = sb.dataio.dataset.DynamicItemDataset.from_csv( - csv_path=csv_file, replacements={"data_root": data_folder} - ) - test_datasets[name] = test_datasets[name].filtered_sorted( - sort_key="duration" - ) - - datasets = [train_data, valid_data] + [i for k, i in test_datasets.items()] - - # 2. Define audio pipeline: - @sb.utils.data_pipeline.takes("wav", "sr") - @sb.utils.data_pipeline.provides("sig") - def audio_pipeline(wav, sr): - sig = sb.dataio.dataio.read_audio(wav) - sig = resamplers[sr](sig) - return sig - - sb.dataio.dataset.add_dynamic_item(datasets, audio_pipeline) - label_encoder = sb.dataio.encoder.CTCTextEncoder() - - # 3. Define text pipeline: - @sb.utils.data_pipeline.takes("wrd") - @sb.utils.data_pipeline.provides( - "wrd", "char_list", "tokens_list", "tokens_bos", "tokens_eos", "tokens" - ) - def text_pipeline(wrd): - yield wrd - char_list = list(wrd) - yield char_list - tokens_list = label_encoder.encode_sequence(char_list) - yield tokens_list - tokens_bos = torch.LongTensor([hparams["bos_index"]] + (tokens_list)) - yield tokens_bos - tokens_eos = torch.LongTensor(tokens_list + [hparams["eos_index"]]) - yield tokens_eos - tokens = torch.LongTensor(tokens_list) - yield tokens - - sb.dataio.dataset.add_dynamic_item(datasets, text_pipeline) - - lab_enc_file = os.path.join(hparams["save_folder"], "label_encoder.txt") - special_labels = { - "bos_label": hparams["bos_index"], - "eos_label": hparams["eos_index"], - "blank_label": hparams["blank_index"], - } - label_encoder.load_or_create( - path=lab_enc_file, - from_didatasets=[train_data], - output_key="char_list", - special_labels=special_labels, - sequence_input=True, - ) - - # 4. 
Set output: - sb.dataio.dataset.set_output_keys( - datasets, - ["id", "sig", "wrd", "char_list", "tokens_bos", "tokens_eos", "tokens"], - ) - return train_data, valid_data, test_datasets, label_encoder - - -class ASR(sb.Brain): - def compute_forward(self, batch, stage): - """Forward computations from the waveform batches to the output probabilities.""" - batch = batch.to(self.device) - wavs, wav_lens = batch.sig - print(wavs) - tokens_bos, _ = batch.tokens_bos - wavs, wav_lens = wavs.to(self.device), wav_lens.to(self.device) - - # Forward pass - feats = self.modules.wav2vec2(wavs) - x = self.modules.enc(feats) - # Compute outputs - p_tokens = None - logits = self.modules.ctc_lin(x) - p_ctc = self.hparams.log_softmax(logits) - if stage != sb.Stage.TRAIN: - p_tokens = sb.decoders.ctc_greedy_decode( - p_ctc, wav_lens, blank_id=self.hparams.blank_index - ) - return p_ctc, wav_lens, p_tokens - - def treat_wav(self,sig): - feats = self.modules.wav2vec2(sig.to(self.device)) - x = self.modules.enc(feats) - p_tokens = None - logits = self.modules.ctc_lin(x) - p_ctc = self.hparams.log_softmax(logits) - predicted_words =[] - for logs in p_ctc: - text = decoder.decode(logs.detach().cpu().numpy()) - predicted_words.append(text.split(" ")) - return " ".join(predicted_words[0]) - - - - - def compute_objectives(self, predictions, batch, stage): - """Computes the loss (CTC+NLL) given predictions and targets.""" - - p_ctc, wav_lens, predicted_tokens = predictions - - ids = batch.id - tokens_eos, tokens_eos_lens = batch.tokens_eos - tokens, tokens_lens = batch.tokens - - if hasattr(self.modules, "env_corrupt") and stage == sb.Stage.TRAIN: - tokens_eos = torch.cat([tokens_eos, tokens_eos], dim=0) - tokens_eos_lens = torch.cat( - [tokens_eos_lens, tokens_eos_lens], dim=0 - ) - tokens = torch.cat([tokens, tokens], dim=0) - tokens_lens = torch.cat([tokens_lens, tokens_lens], dim=0) - - loss_ctc = self.hparams.ctc_cost(p_ctc, tokens, wav_lens, tokens_lens) - loss = loss_ctc - if stage != sb.Stage.TRAIN: - # Decode token terms to words - predicted_words =[] - for logs in p_ctc: - text = decoder.decode(logs.detach().cpu().numpy()) - predicted_words.append(text.split(" ")) - - target_words = [wrd.split(" ") for wrd in batch.wrd] - self.wer_metric.append(ids, predicted_words, target_words) - self.cer_metric.append(ids, predicted_words, target_words) - - return loss - - def fit_batch(self, batch): - """Train the parameters given a single batch in input""" - predictions = self.compute_forward(batch, sb.Stage.TRAIN) - loss = self.compute_objectives(predictions, batch, sb.Stage.TRAIN) - loss.backward() - if self.check_gradients(loss): - self.wav2vec_optimizer.step() - self.model_optimizer.step() - - self.wav2vec_optimizer.zero_grad() - self.model_optimizer.zero_grad() - - return loss.detach() - - def evaluate_batch(self, batch, stage): - """Computations needed for validation/test batches""" - predictions = self.compute_forward(batch, stage=stage) - with torch.no_grad(): - loss = self.compute_objectives(predictions, batch, stage=stage) - return loss.detach() - - def on_stage_start(self, stage, epoch): - """Gets called at the beginning of each epoch""" - if stage != sb.Stage.TRAIN: - self.cer_metric = self.hparams.cer_computer() - self.wer_metric = self.hparams.error_rate_computer() - - def on_stage_end(self, stage, stage_loss, epoch): - """Gets called at the end of an epoch.""" - # Compute/store important stats - stage_stats = {"loss": stage_loss} - if stage == sb.Stage.TRAIN: - self.train_stats = stage_stats - else: - 
stage_stats["CER"] = self.cer_metric.summarize("error_rate") - stage_stats["WER"] = self.wer_metric.summarize("error_rate") - - # Perform end-of-iteration things, like annealing, logging, etc. - if stage == sb.Stage.VALID: - old_lr_model, new_lr_model = self.hparams.lr_annealing_model( - stage_stats["loss"] - ) - old_lr_wav2vec, new_lr_wav2vec = self.hparams.lr_annealing_wav2vec( - stage_stats["loss"] - ) - sb.nnet.schedulers.update_learning_rate( - self.model_optimizer, new_lr_model - ) - sb.nnet.schedulers.update_learning_rate( - self.wav2vec_optimizer, new_lr_wav2vec - ) - self.hparams.train_logger.log_stats( - stats_meta={ - "epoch": epoch, - "lr_model": old_lr_model, - "lr_wav2vec": old_lr_wav2vec, - }, - train_stats=self.train_stats, - valid_stats=stage_stats, - ) - self.checkpointer.save_and_keep_only( - meta={"WER": stage_stats["WER"]}, min_keys=["WER"], - ) - elif stage == sb.Stage.TEST: - self.hparams.train_logger.log_stats( - stats_meta={"Epoch loaded": self.hparams.epoch_counter.current}, - test_stats=stage_stats, - ) - with open(self.hparams.wer_file, "w") as w: - self.wer_metric.write_stats(w) - - def init_optimizers(self): - "Initializes the wav2vec2 optimizer and model optimizer" - self.wav2vec_optimizer = self.hparams.wav2vec_opt_class( - self.modules.wav2vec2.parameters() - ) - self.model_optimizer = self.hparams.model_opt_class( - self.hparams.model.parameters() - ) - - if self.checkpointer is not None: - self.checkpointer.add_recoverable( - "wav2vec_opt", self.wav2vec_optimizer - ) - self.checkpointer.add_recoverable("modelopt", self.model_optimizer) - -label_encoder = sb.dataio.encoder.CTCTextEncoder() - - -# We dynamicaly add the tokenizer to our brain class. -# NB: This tokenizer corresponds to the one used for the LM!! -decoder = build_ctcdecoder( - labels, - kenlm_model_path="tunisian.arpa", # either .arpa or .bin file - alpha=0.5, # tuned on a val set - beta=1, # tuned on a val set -) -run_opts["device"]="cpu" -asr_brain = ASR( - modules=hparams["modules"], - hparams=hparams, - run_opts=run_opts, - checkpointer=hparams["checkpointer"], -) -description = """This is a speechbrain-based Automatic Speech Recognition (ASR) model for Tunisian arabic. It outputs Tunisian transcriptions written in Arabic alphabet. Since the language is unwritten, the words' transcriptions may vary. This model is presented by Salah Zaiem, PhD candidate, contact : zaiemsalah@gmail.com - - -Due to the nature of the available training data, the model may encounter issues when dealing with foreign words. So, and while it is common for Tunisian speakers to use (mainly french) foreign words, these will lead to more errors. We may work on improving this in further models. - - -Run is done on CPU to keep it free in this space. This leads to quite long running times on long sequences. If for your project or research, you want to transcribe long sequences, feel free to drop an email here : zaiemsalah@gmail.com - - -""" -title = "Tunisian Arabic Automatic Speech Recognition" - - - -asr_brain.device= "cpu" -asr_brain.modules.to("cpu") -asr_brain.tokenizer = label_encoder - -from enum import Enum, auto -class Stage(Enum): - TRAIN = auto() - VALID = auto() - TEST = auto() - -asr_brain.on_evaluate_start() -asr_brain.modules.eval() -import gradio as gr -def treat_wav_file(file_mic, file_upload, resamplers = resamplers,asr=asr_brain, device="cpu") : - - if (file_mic is not None) and (file_upload is not None): - warn_output = "WARNING: You've uploaded an audio file and used the microphone. 
The recorded file from the microphone will be used and the uploaded audio will be discarded.\n" - wav = file_mic - elif (file_mic is None) and (file_upload is None): - return "ERROR: You have to either use the microphone or upload an audio file" - elif file_mic is not None: - wav = file_mic - else: - wav = file_upload - sig, sr = torchaudio.load(wav) - tensor_wav = sig.to(device) - resampled = resamplers[str(sr)](tensor_wav) - sentence = asr_brain.treat_wav(resampled) - return sentence - -gr.Interface( - fn=treat_wav_file, - title = title, - description = description, - inputs=[gr.inputs.Audio(source="microphone", type='filepath', optional=True), - gr.inputs.Audio(source="upload", type='filepath', optional=True)] - ,outputs="text").launch() - - - diff --git a/spaces/Salesforce/EDICT/my_diffusers/models/attention.py b/spaces/Salesforce/EDICT/my_diffusers/models/attention.py deleted file mode 100644 index 5e5ab9ace7c6ffbf048f6ddd3cfc8e4482fac61f..0000000000000000000000000000000000000000 --- a/spaces/Salesforce/EDICT/my_diffusers/models/attention.py +++ /dev/null @@ -1,333 +0,0 @@ -import math -from typing import Optional - -import torch -import torch.nn.functional as F -from torch import nn - - -class AttentionBlock(nn.Module): - """ - An attention block that allows spatial positions to attend to each other. Originally ported from here, but adapted - to the N-d case. - https://github.com/hojonathanho/diffusion/blob/1e0dceb3b3495bbe19116a5e1b3596cd0706c543/diffusion_tf/models/unet.py#L66. - Uses three q, k, v linear layers to compute attention. - - Parameters: - channels (:obj:`int`): The number of channels in the input and output. - num_head_channels (:obj:`int`, *optional*): - The number of channels in each head. If None, then `num_heads` = 1. - num_groups (:obj:`int`, *optional*, defaults to 32): The number of groups to use for group norm. - rescale_output_factor (:obj:`float`, *optional*, defaults to 1.0): The factor to rescale the output by. - eps (:obj:`float`, *optional*, defaults to 1e-5): The epsilon value to use for group norm. 
- """ - - def __init__( - self, - channels: int, - num_head_channels: Optional[int] = None, - num_groups: int = 32, - rescale_output_factor = 1.0, - eps = 1e-5, - ): - super().__init__() - self.channels = channels - - self.num_heads = channels // num_head_channels if num_head_channels is not None else 1 - self.num_head_size = num_head_channels - self.group_norm = nn.GroupNorm(num_channels=channels, num_groups=num_groups, eps=eps, affine=True) - - # define q,k,v as linear layers - self.query = nn.Linear(channels, channels) - self.key = nn.Linear(channels, channels) - self.value = nn.Linear(channels, channels) - - self.rescale_output_factor = rescale_output_factor - self.proj_attn = nn.Linear(channels, channels, 1) - - def transpose_for_scores(self, projection: torch.Tensor) -> torch.Tensor: - new_projection_shape = projection.size()[:-1] + (self.num_heads, -1) - # move heads to 2nd position (B, T, H * D) -> (B, T, H, D) -> (B, H, T, D) - new_projection = projection.view(new_projection_shape).permute(0, 2, 1, 3) - return new_projection - - def forward(self, hidden_states): - residual = hidden_states - batch, channel, height, width = hidden_states.shape - - # norm - hidden_states = self.group_norm(hidden_states) - - hidden_states = hidden_states.view(batch, channel, height * width).transpose(1, 2) - - # proj to q, k, v - query_proj = self.query(hidden_states) - key_proj = self.key(hidden_states) - value_proj = self.value(hidden_states) - - # transpose - query_states = self.transpose_for_scores(query_proj) - key_states = self.transpose_for_scores(key_proj) - value_states = self.transpose_for_scores(value_proj) - - # get scores - scale = 1 / math.sqrt(math.sqrt(self.channels / self.num_heads)) - - attention_scores = torch.matmul(query_states * scale, key_states.transpose(-1, -2) * scale) - attention_probs = torch.softmax(attention_scores.double(), dim=-1).type(attention_scores.dtype) - - # compute attention output - hidden_states = torch.matmul(attention_probs, value_states) - - hidden_states = hidden_states.permute(0, 2, 1, 3).contiguous() - new_hidden_states_shape = hidden_states.size()[:-2] + (self.channels,) - hidden_states = hidden_states.view(new_hidden_states_shape) - - # compute next hidden_states - hidden_states = self.proj_attn(hidden_states) - hidden_states = hidden_states.transpose(-1, -2).reshape(batch, channel, height, width) - - # res connect and rescale - hidden_states = (hidden_states + residual) / self.rescale_output_factor - return hidden_states - - -class SpatialTransformer(nn.Module): - """ - Transformer block for image-like data. First, project the input (aka embedding) and reshape to b, t, d. Then apply - standard transformer action. Finally, reshape to image. - - Parameters: - in_channels (:obj:`int`): The number of channels in the input and output. - n_heads (:obj:`int`): The number of heads to use for multi-head attention. - d_head (:obj:`int`): The number of channels in each head. - depth (:obj:`int`, *optional*, defaults to 1): The number of layers of Transformer blocks to use. - dropout (:obj:`float`, *optional*, defaults to 0.1): The dropout probability to use. - context_dim (:obj:`int`, *optional*): The number of context dimensions to use. 
- """ - - def __init__( - self, - in_channels: int, - n_heads: int, - d_head: int, - depth: int = 1, - dropout = 0.0, - context_dim: Optional[int] = None, - ): - super().__init__() - self.n_heads = n_heads - self.d_head = d_head - self.in_channels = in_channels - inner_dim = n_heads * d_head - self.norm = torch.nn.GroupNorm(num_groups=32, num_channels=in_channels, eps=1e-6, affine=True) - - self.proj_in = nn.Conv2d(in_channels, inner_dim, kernel_size=1, stride=1, padding=0) - - self.transformer_blocks = nn.ModuleList( - [ - BasicTransformerBlock(inner_dim, n_heads, d_head, dropout=dropout, context_dim=context_dim) - for d in range(depth) - ] - ) - - self.proj_out = nn.Conv2d(inner_dim, in_channels, kernel_size=1, stride=1, padding=0) - - def _set_attention_slice(self, slice_size): - for block in self.transformer_blocks: - block._set_attention_slice(slice_size) - - def forward(self, x, context=None): - # note: if no context is given, cross-attention defaults to self-attention - b, c, h, w = x.shape - x_in = x - x = self.norm(x) - x = self.proj_in(x) - x = x.permute(0, 2, 3, 1).reshape(b, h * w, c) - for block in self.transformer_blocks: - x = block(x, context=context) - x = x.reshape(b, h, w, c).permute(0, 3, 1, 2) - x = self.proj_out(x) - return x + x_in - - -class BasicTransformerBlock(nn.Module): - r""" - A basic Transformer block. - - Parameters: - dim (:obj:`int`): The number of channels in the input and output. - n_heads (:obj:`int`): The number of heads to use for multi-head attention. - d_head (:obj:`int`): The number of channels in each head. - dropout (:obj:`float`, *optional*, defaults to 0.0): The dropout probability to use. - context_dim (:obj:`int`, *optional*): The size of the context vector for cross attention. - gated_ff (:obj:`bool`, *optional*, defaults to :obj:`False`): Whether to use a gated feed-forward network. - checkpoint (:obj:`bool`, *optional*, defaults to :obj:`False`): Whether to use checkpointing. - """ - - def __init__( - self, - dim: int, - n_heads: int, - d_head: int, - dropout=0.0, - context_dim: Optional[int] = None, - gated_ff: bool = True, - checkpoint: bool = True, - ): - super().__init__() - self.attn1 = CrossAttention( - query_dim=dim, heads=n_heads, dim_head=d_head, dropout=dropout - ) # is a self-attention - self.ff = FeedForward(dim, dropout=dropout, glu=gated_ff) - self.attn2 = CrossAttention( - query_dim=dim, context_dim=context_dim, heads=n_heads, dim_head=d_head, dropout=dropout - ) # is self-attn if context is none - self.norm1 = nn.LayerNorm(dim) - self.norm2 = nn.LayerNorm(dim) - self.norm3 = nn.LayerNorm(dim) - self.checkpoint = checkpoint - - def _set_attention_slice(self, slice_size): - self.attn1._slice_size = slice_size - self.attn2._slice_size = slice_size - - def forward(self, x, context=None): - x = x.contiguous() if x.device.type == "mps" else x - x = self.attn1(self.norm1(x)) + x - x = self.attn2(self.norm2(x), context=context) + x - x = self.ff(self.norm3(x)) + x - return x - - -class CrossAttention(nn.Module): - r""" - A cross attention layer. - - Parameters: - query_dim (:obj:`int`): The number of channels in the query. - context_dim (:obj:`int`, *optional*): - The number of channels in the context. If not given, defaults to `query_dim`. - heads (:obj:`int`, *optional*, defaults to 8): The number of heads to use for multi-head attention. - dim_head (:obj:`int`, *optional*, defaults to 64): The number of channels in each head. - dropout (:obj:`float`, *optional*, defaults to 0.0): The dropout probability to use. 
- """ - - def __init__( - self, query_dim: int, context_dim: Optional[int] = None, heads: int = 8, dim_head: int = 64, dropout: int = 0.0 - ): - super().__init__() - inner_dim = dim_head * heads - context_dim = context_dim if context_dim is not None else query_dim - - self.scale = dim_head**-0.5 - self.heads = heads - # for slice_size > 0 the attention score computation - # is split across the batch axis to save memory - # You can set slice_size with `set_attention_slice` - self._slice_size = None - - self.to_q = nn.Linear(query_dim, inner_dim, bias=False) - self.to_k = nn.Linear(context_dim, inner_dim, bias=False) - self.to_v = nn.Linear(context_dim, inner_dim, bias=False) - - self.to_out = nn.Sequential(nn.Linear(inner_dim, query_dim), nn.Dropout(dropout)) - - def reshape_heads_to_batch_dim(self, tensor): - batch_size, seq_len, dim = tensor.shape - head_size = self.heads - tensor = tensor.reshape(batch_size, seq_len, head_size, dim // head_size) - tensor = tensor.permute(0, 2, 1, 3).reshape(batch_size * head_size, seq_len, dim // head_size) - return tensor - - def reshape_batch_dim_to_heads(self, tensor): - batch_size, seq_len, dim = tensor.shape - head_size = self.heads - tensor = tensor.reshape(batch_size // head_size, head_size, seq_len, dim) - tensor = tensor.permute(0, 2, 1, 3).reshape(batch_size // head_size, seq_len, dim * head_size) - return tensor - - def forward(self, x, context=None, mask=None): - batch_size, sequence_length, dim = x.shape - - q = self.to_q(x) - context = context if context is not None else x - k = self.to_k(context) - v = self.to_v(context) - - q = self.reshape_heads_to_batch_dim(q) - k = self.reshape_heads_to_batch_dim(k) - v = self.reshape_heads_to_batch_dim(v) - - # TODO(PVP) - mask is currently never used. Remember to re-implement when used - - # attention, what we cannot get enough of - hidden_states = self._attention(q, k, v, sequence_length, dim) - - return self.to_out(hidden_states) - - def _attention(self, query, key, value, sequence_length, dim): - batch_size_attention = query.shape[0] - hidden_states = torch.zeros( - (batch_size_attention, sequence_length, dim // self.heads), device=query.device, dtype=query.dtype - ) - slice_size = self._slice_size if self._slice_size is not None else hidden_states.shape[0] - for i in range(hidden_states.shape[0] // slice_size): - start_idx = i * slice_size - end_idx = (i + 1) * slice_size - attn_slice = ( - torch.einsum("b i d, b j d -> b i j", query[start_idx:end_idx], key[start_idx:end_idx]) * self.scale - ) - attn_slice = attn_slice.softmax(dim=-1) - attn_slice = torch.einsum("b i j, b j d -> b i d", attn_slice, value[start_idx:end_idx]) - - hidden_states[start_idx:end_idx] = attn_slice - - # reshape hidden_states - hidden_states = self.reshape_batch_dim_to_heads(hidden_states) - return hidden_states - - -class FeedForward(nn.Module): - r""" - A feed-forward layer. - - Parameters: - dim (:obj:`int`): The number of channels in the input. - dim_out (:obj:`int`, *optional*): The number of channels in the output. If not given, defaults to `dim`. - mult (:obj:`int`, *optional*, defaults to 4): The multiplier to use for the hidden dimension. - glu (:obj:`bool`, *optional*, defaults to :obj:`False`): Whether to use GLU activation. - dropout (:obj:`float`, *optional*, defaults to 0.0): The dropout probability to use. 
- """ - - def __init__( - self, dim: int, dim_out: Optional[int] = None, mult: int = 4, glu: bool = False, dropout = 0.0 - ): - super().__init__() - inner_dim = int(dim * mult) - dim_out = dim_out if dim_out is not None else dim - project_in = GEGLU(dim, inner_dim) - - self.net = nn.Sequential(project_in, nn.Dropout(dropout), nn.Linear(inner_dim, dim_out)) - - def forward(self, x): - return self.net(x) - - -# feedforward -class GEGLU(nn.Module): - r""" - A variant of the gated linear unit activation function from https://arxiv.org/abs/2002.05202. - - Parameters: - dim_in (:obj:`int`): The number of channels in the input. - dim_out (:obj:`int`): The number of channels in the output. - """ - - def __init__(self, dim_in: int, dim_out: int): - super().__init__() - self.proj = nn.Linear(dim_in, dim_out * 2) - - def forward(self, x): - x, gate = self.proj(x).chunk(2, dim=-1) - return x * F.gelu(gate) diff --git a/spaces/SappyInk/Ink/Dockerfile b/spaces/SappyInk/Ink/Dockerfile deleted file mode 100644 index 6c01c09373883afcb4ea34ae2d316cd596e1737b..0000000000000000000000000000000000000000 --- a/spaces/SappyInk/Ink/Dockerfile +++ /dev/null @@ -1,21 +0,0 @@ -FROM node:18-bullseye-slim - -RUN apt-get update && \ - -apt-get install -y git - -RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app - -WORKDIR /app - -RUN npm install - -COPY Dockerfile greeting.md* .env* ./ - -RUN npm run build - -EXPOSE 7860 - -ENV NODE_ENV=production - -CMD [ "npm", "start" ] \ No newline at end of file diff --git a/spaces/SeViLA/SeViLA/lavis/datasets/builders/dialogue_builder.py b/spaces/SeViLA/SeViLA/lavis/datasets/builders/dialogue_builder.py deleted file mode 100644 index 08a54f2aa4da710af98dc36aac36e2eec5d3dad4..0000000000000000000000000000000000000000 --- a/spaces/SeViLA/SeViLA/lavis/datasets/builders/dialogue_builder.py +++ /dev/null @@ -1,21 +0,0 @@ -""" - Copyright (c) 2022, salesforce.com, inc. - All rights reserved. 
- SPDX-License-Identifier: BSD-3-Clause - For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause -""" - -from lavis.common.registry import registry -from lavis.datasets.builders.base_dataset_builder import BaseDatasetBuilder -from lavis.datasets.datasets.avsd_dialogue_datasets import ( - AVSDDialDataset, - AVSDDialEvalDataset, -) - - -@registry.register_builder("avsd_dialogue") -class AVSDDialBuilder(BaseDatasetBuilder): - train_dataset_cls = AVSDDialDataset - eval_dataset_cls = AVSDDialEvalDataset - - DATASET_CONFIG_DICT = {"default": "configs/datasets/avsd/defaults_dial.yaml"} diff --git a/spaces/Shannu/mygenAIAvatar/README.md b/spaces/Shannu/mygenAIAvatar/README.md deleted file mode 100644 index 86ba74f361e558f0bf4665462aa986510b271dfe..0000000000000000000000000000000000000000 --- a/spaces/Shannu/mygenAIAvatar/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: MygenAIAvatar -emoji: 🏃 -colorFrom: gray -colorTo: gray -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Silentlin/DiffSinger/modules/commons/espnet_positional_embedding.py b/spaces/Silentlin/DiffSinger/modules/commons/espnet_positional_embedding.py deleted file mode 100644 index 74decb6ab300951490ae08a4b93041a0542b5bb7..0000000000000000000000000000000000000000 --- a/spaces/Silentlin/DiffSinger/modules/commons/espnet_positional_embedding.py +++ /dev/null @@ -1,113 +0,0 @@ -import math -import torch - - -class PositionalEncoding(torch.nn.Module): - """Positional encoding. - Args: - d_model (int): Embedding dimension. - dropout_rate (float): Dropout rate. - max_len (int): Maximum input length. - reverse (bool): Whether to reverse the input position. - """ - - def __init__(self, d_model, dropout_rate, max_len=5000, reverse=False): - """Construct an PositionalEncoding object.""" - super(PositionalEncoding, self).__init__() - self.d_model = d_model - self.reverse = reverse - self.xscale = math.sqrt(self.d_model) - self.dropout = torch.nn.Dropout(p=dropout_rate) - self.pe = None - self.extend_pe(torch.tensor(0.0).expand(1, max_len)) - - def extend_pe(self, x): - """Reset the positional encodings.""" - if self.pe is not None: - if self.pe.size(1) >= x.size(1): - if self.pe.dtype != x.dtype or self.pe.device != x.device: - self.pe = self.pe.to(dtype=x.dtype, device=x.device) - return - pe = torch.zeros(x.size(1), self.d_model) - if self.reverse: - position = torch.arange( - x.size(1) - 1, -1, -1.0, dtype=torch.float32 - ).unsqueeze(1) - else: - position = torch.arange(0, x.size(1), dtype=torch.float32).unsqueeze(1) - div_term = torch.exp( - torch.arange(0, self.d_model, 2, dtype=torch.float32) - * -(math.log(10000.0) / self.d_model) - ) - pe[:, 0::2] = torch.sin(position * div_term) - pe[:, 1::2] = torch.cos(position * div_term) - pe = pe.unsqueeze(0) - self.pe = pe.to(device=x.device, dtype=x.dtype) - - def forward(self, x: torch.Tensor): - """Add positional encoding. - Args: - x (torch.Tensor): Input tensor (batch, time, `*`). - Returns: - torch.Tensor: Encoded tensor (batch, time, `*`). - """ - self.extend_pe(x) - x = x * self.xscale + self.pe[:, : x.size(1)] - return self.dropout(x) - - -class ScaledPositionalEncoding(PositionalEncoding): - """Scaled positional encoding module. - See Sec. 3.2 https://arxiv.org/abs/1809.08895 - Args: - d_model (int): Embedding dimension. - dropout_rate (float): Dropout rate. 
- max_len (int): Maximum input length. - """ - - def __init__(self, d_model, dropout_rate, max_len=5000): - """Initialize class.""" - super().__init__(d_model=d_model, dropout_rate=dropout_rate, max_len=max_len) - self.alpha = torch.nn.Parameter(torch.tensor(1.0)) - - def reset_parameters(self): - """Reset parameters.""" - self.alpha.data = torch.tensor(1.0) - - def forward(self, x): - """Add positional encoding. - Args: - x (torch.Tensor): Input tensor (batch, time, `*`). - Returns: - torch.Tensor: Encoded tensor (batch, time, `*`). - """ - self.extend_pe(x) - x = x + self.alpha * self.pe[:, : x.size(1)] - return self.dropout(x) - - -class RelPositionalEncoding(PositionalEncoding): - """Relative positional encoding module. - See : Appendix B in https://arxiv.org/abs/1901.02860 - Args: - d_model (int): Embedding dimension. - dropout_rate (float): Dropout rate. - max_len (int): Maximum input length. - """ - - def __init__(self, d_model, dropout_rate, max_len=5000): - """Initialize class.""" - super().__init__(d_model, dropout_rate, max_len, reverse=True) - - def forward(self, x): - """Compute positional encoding. - Args: - x (torch.Tensor): Input tensor (batch, time, `*`). - Returns: - torch.Tensor: Encoded tensor (batch, time, `*`). - torch.Tensor: Positional embedding tensor (1, time, `*`). - """ - self.extend_pe(x) - x = x * self.xscale - pos_emb = self.pe[:, : x.size(1)] - return self.dropout(x) + self.dropout(pos_emb) \ No newline at end of file diff --git a/spaces/SinaAhmadi/ScriptNormalization/app.py b/spaces/SinaAhmadi/ScriptNormalization/app.py deleted file mode 100644 index a4aa46ce552bba749b2fb682095a555b3e3b697a..0000000000000000000000000000000000000000 --- a/spaces/SinaAhmadi/ScriptNormalization/app.py +++ /dev/null @@ -1,165 +0,0 @@ -from pathlib import Path -from functools import partial - -from joeynmt.prediction import predict -from joeynmt.helpers import ( - check_version, - load_checkpoint, - load_config, - parse_train_args, - resolve_ckpt_path, - -) -from joeynmt.model import build_model -from joeynmt.tokenizers import build_tokenizer -from joeynmt.vocabulary import build_vocab -from joeynmt.datasets import build_dataset - -import gradio as gr - -languages_scripts = { - "Azeri Turkish in Persian": "AzeriTurkish-Persian", - "Central Kurdish in Arabic": "Sorani-Arabic", - "Central Kurdish in Persian": "Sorani-Persian", - "Gilaki in Persian": "Gilaki-Persian", - "Gorani in Arabic": "Gorani-Arabic", - "Gorani in Central Kurdish": "Gorani-Sorani", - "Gorani in Persian": "Gorani-Persian", - "Kashmiri in Urdu": "Kashmiri-Urdu", - "Mazandarani in Persian": "Mazandarani-Persian", - "Northern Kurdish in Arabic": "Kurmanji-Arabic", - "Northern Kurdish in Persian": "Kurmanji-Persian", - "Sindhi in Urdu": "Sindhi-Urdu" -} - -def normalize(text, language_script): - - cfg_file = "./models/%s/config.yaml"%languages_scripts[language_script] - ckpt = "./models/%s/best.ckpt"%languages_scripts[language_script] - - cfg = load_config(Path(cfg_file)) - # parse and validate cfg - model_dir, load_model, device, n_gpu, num_workers, _, fp16 = parse_train_args( - cfg["training"], mode="prediction") - test_cfg = cfg["testing"] - src_cfg = cfg["data"]["src"] - trg_cfg = cfg["data"]["trg"] - - load_model = load_model if ckpt is None else Path(ckpt) - ckpt = resolve_ckpt_path(load_model, model_dir) - - src_vocab, trg_vocab = build_vocab(cfg["data"], model_dir=model_dir) - - model = build_model(cfg["model"], src_vocab=src_vocab, trg_vocab=trg_vocab) - - # load model state from disk - model_checkpoint = 
load_checkpoint(ckpt, device=device) - model.load_state_dict(model_checkpoint["model_state"]) - - if device.type == "cuda": - model.to(device) - - tokenizer = build_tokenizer(cfg["data"]) - sequence_encoder = { - src_cfg["lang"]: partial(src_vocab.sentences_to_ids, bos=False, eos=True), - trg_cfg["lang"]: None, - } - - test_cfg["batch_size"] = 1 # CAUTION: this will raise an error if n_gpus > 1 - test_cfg["batch_type"] = "sentence" - - test_data = build_dataset( - dataset_type="stream", - path=None, - src_lang=src_cfg["lang"], - trg_lang=trg_cfg["lang"], - split="test", - tokenizer=tokenizer, - sequence_encoder=sequence_encoder, - ) - test_data.set_item(text.strip()) - - cfg=test_cfg - _, _, hypotheses, trg_tokens, trg_scores, _ = predict( - model=model, - data=test_data, - compute_loss=False, - device=device, - n_gpu=n_gpu, - normalization="none", - num_workers=num_workers, - cfg=cfg, - fp16=fp16, - ) - return hypotheses[0] - -title = """ -
      Script Normalization for Unconventional Writing
      - Perso-Arabic scripts used by the target languages in our paper -

      - [Paper (ACL 2023)] - [Slides] - [GitHub] - [Presentation] -

      - """ - -description = """ -
      • "mar7aba!"
      • "هاو ئار یوو؟"
      • "Μπιάνβενου α σετ ντεμό!"

What do all these sentences have in common? You are greeted in Arabic with "mar7aba" written in the Latin script, asked how you are ("هاو ئار یوو؟") in English using the Perso-Arabic script of Kurdish, and then welcomed to this demo in French ("Μπιάνβενου α σετ ντεμό!") written in the Greek script. All these sentences are written in an unconventional script.


      Although you may find these sentences risible, unconventional writing is a common practice among millions of speakers in bilingual communities. In our paper entitled "Script Normalization for Unconventional Writing of Under-Resourced Languages in Bilingual Communities", we shed light on this problem and propose an approach to normalize noisy text written in unconventional writing.


This demo deploys a few models trained for the normalization of unconventional writing. Please note that this tool is not a spell-checker and cannot correct errors beyond character normalization. For better performance, you can apply hard-coded rules to the input and then pass it to the models, effectively building a hybrid system.


For more information, you can also check out the project on GitHub: https://github.com/sinaahmadi/ScriptNormalization

      -""" - -examples = [ - ["بو شهرین نوفوسو ، 2014 نجی ایلين نوفوس ساییمی اساسيندا 41 نفر ایمیش .", "Azeri Turkish in Persian"],#"بۇ شهرین نۆفوسو ، 2014 نجی ایلين نۆفوس ساییمی اساسيندا 41 نفر ایمیش ." - ["ياخوا تةمةن دريژبيت بوئةم ميللةتة", "Central Kurdish in Arabic"], - ["یکیک له جوانیکانی ام شاره جوانه", "Central Kurdish in Persian"], - ["نمک درهٰ مردوم گيلک ايسن ؤ اوشان زوان ني گيلکي ايسه .", "Gilaki in Persian"], - ["شؤنةو اانةيةرة گةشت و گلي ناجارانةو اؤجالاني دةستش پنةكةرد", "Gorani in Arabic"], #شۆنەو ئانەیەرە گەشت و گێڵی ناچارانەو ئۆجالانی دەستش پنەکەرد - ["ڕوٙو زوانی ئەذایی چەنی پەیذابی ؟", "Gorani in Central Kurdish"], # ڕوٙو زوانی ئەڎایی چەنی پەیڎابی ؟ - ["هنگامکان ظميٛ ر چمان ، بپا کريٛلي بيشان :", "Gorani in Persian"], # هەنگامەکان وزمیٛ وەرو چەمان ، بەپاو کریٛڵی بیەشان : - ["ربعی بن افکل اُسے اَکھ صُحابی .", "Kashmiri in Urdu"], # ربعی بن افکل ٲسؠ اَکھ صُحابی . - ["اینتا زون گنشکرون 85 میلیون نفر هسن", "Mazandarani in Persian"], # اینتا زوون گِنِشکَرون 85 میلیون نفر هسنه - ["بة رطكا هة صطئن ژ دل هاطة بة لافكرن", "Northern Kurdish in Arabic"], #پەرتوکا هەستێن ژ دل هاتە بەلافکرن - ["ثرکى همرنگ نرميني دويت هندک قوناغين دي ببريت", "Northern Kurdish in Persian"], # سەرەکی هەمەرەنگ نەرمینێ دڤێت هندەک قوناغێن دی ببڕیت - ["ہتی کجھ اپ ۽ تمام دائون ترینون بیھندیون آھن .", "Sindhi in Urdu"] # هتي ڪجھ اپ ۽ تمام ڊائون ٽرينون بيھنديون آھن . -] - - -article = """ -
      -

      - Created and deployed by Sina Ahmadi (https://sinaahmadi.github.io/). -

      -
      - """ - -demo = gr.Interface( - title=title, - description=description, - fn=normalize, - inputs = [ - gr.inputs.Textbox(lines=4, label="Noisy Text \U0001F974"), - gr.Dropdown(label="Language in unconventional script", choices=sorted(list(languages_scripts.keys()))), - ], - outputs=gr.outputs.Textbox(label="Normalized Text \U0001F642"), - examples=examples, - article=article, - examples_per_page=20 -) - -demo.launch() diff --git a/spaces/Snake12b/wizard-Vicuna-13B-Uncensored-HF/app.py b/spaces/Snake12b/wizard-Vicuna-13B-Uncensored-HF/app.py deleted file mode 100644 index 4983812942256a9655bbb1fb5f6d2ee492eb7aad..0000000000000000000000000000000000000000 --- a/spaces/Snake12b/wizard-Vicuna-13B-Uncensored-HF/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/TheBloke/Wizard-Vicuna-13B-Uncensored-HF").launch() \ No newline at end of file diff --git a/spaces/Souranil/VAE/Dockerfile b/spaces/Souranil/VAE/Dockerfile deleted file mode 100644 index a202b267a458ef9d46231d5c165202fa354acdf4..0000000000000000000000000000000000000000 --- a/spaces/Souranil/VAE/Dockerfile +++ /dev/null @@ -1,9 +0,0 @@ -FROM python:3.8-slim-buster -WORKDIR /app -EXPOSE $PORT - -COPY requirements.txt / -RUN pip3 install -r /requirements.txt -COPY . /app - -CMD streamlit run app.py --server.port $PORT \ No newline at end of file diff --git a/spaces/SuCicada/Lain-vits/app.py b/spaces/SuCicada/Lain-vits/app.py deleted file mode 100644 index 5723249d650ac34eb7c43b82d13036793bbfcec4..0000000000000000000000000000000000000000 --- a/spaces/SuCicada/Lain-vits/app.py +++ /dev/null @@ -1,33 +0,0 @@ -import os -import subprocess -import sys - - -# Run shell command and capture output in real-time -def init(): - process = subprocess.Popen(""" - bash run.sh - """, stdout=subprocess.PIPE, shell=True) - while True: - output = process.stdout.readline().decode() - if output == '' and process.poll() is not None: - break - if output: - print(output.strip()) - - # Wait for the command to finish and get the return code - return_code = process.poll() - print(f"Command exited with return code {return_code}") - - -is_space = os.getenv("SYSTEM") == "spaces" -if is_space: - init() - -# if is_space: -# sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "SuTTS/test"))) -# else: -# sys.path.append("/Users/peng/PROGRAM/GitHub/SuTTS/test") -# -# print(sys.path) - diff --git a/spaces/SuYuanS/AudioCraft_Plus/audiocraft/grids/compression/debug.py b/spaces/SuYuanS/AudioCraft_Plus/audiocraft/grids/compression/debug.py deleted file mode 100644 index 5612ff5688d85fede0e605b244919e8081cb1da9..0000000000000000000000000000000000000000 --- a/spaces/SuYuanS/AudioCraft_Plus/audiocraft/grids/compression/debug.py +++ /dev/null @@ -1,31 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Grid search file, simply list all the exp you want in `explorer`. -Any new exp added there will be scheduled. -You can cancel and experiment by commenting its line. - -This grid is a minimal example for debugging compression task -and how to override parameters directly in a grid. 
-Learn more about dora grids: https://github.com/facebookresearch/dora -""" - -from ._explorers import CompressionExplorer -from ...environment import AudioCraftEnvironment - - -@CompressionExplorer -def explorer(launcher): - partitions = AudioCraftEnvironment.get_slurm_partitions(['team', 'global']) - launcher.slurm_(gpus=2, partition=partitions) - launcher.bind_(solver='compression/debug') - - with launcher.job_array(): - # base debug task using config from solver=compression/debug - launcher() - # we can override parameters in the grid to launch additional xps - launcher({'rvq.bins': 2048, 'rvq.n_q': 4}) diff --git a/spaces/Superintelligence1130/text-to-video-test/app.py b/spaces/Superintelligence1130/text-to-video-test/app.py deleted file mode 100644 index b0dc9d59bf5874e4d19d78dcdec924c51c04fbbc..0000000000000000000000000000000000000000 --- a/spaces/Superintelligence1130/text-to-video-test/app.py +++ /dev/null @@ -1,71 +0,0 @@ -import os -os.system("pip install torch") -os.system("pip install diffusers") -os.system("python -m pip install --upgrade pip") -os.system("pip install imageio") -os.system("pip install numpy") -os.system("pip install transformers") -''' -import torch -from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler -from diffusers.utils import export_to_video -import gradio as gr - -pipe = DiffusionPipeline.from_pretrained("damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16") -pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) -pipe.enable_model_cpu_offload() - -def text_video(prompt): - video_frames = pipe(prompt, num_inference_steps=25).frames - video_path = export_to_video(video_frames) - -result = gr.Video(label="Generated Video") -gr.Interface( - fn=text_video, - inputs=gr.Textbox(label="어떤 비디오를 생성할까요? : "), - outputs=result - -).launch()''' - -import torch -import imageio -from diffusers import TextToVideoZeroPipeline -import numpy as np -import gradio as gr - -model_id = "runwayml/stable-diffusion-v1-5" -pipe = TextToVideoZeroPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") -seed = 0 -video_length = 8 -chunk_size = 4 -def text_video(prompt): - - - - # Generate the video chunk-by-chunk - result = [] - chunk_ids = np.arange(0, video_length, chunk_size - 1) - generator = torch.Generator(device="cuda") - for i in range(len(chunk_ids)): - print(f"Processing chunk {i + 1} / {len(chunk_ids)}") - ch_start = chunk_ids[i] - ch_end = video_length if i == len(chunk_ids) - 1 else chunk_ids[i + 1] - # Attach the first frame for Cross Frame Attention - frame_ids = [0] + list(range(ch_start, ch_end)) - # Fix the seed for the temporal consistency - generator.manual_seed(seed) - output = pipe(prompt=prompt, video_length=len(frame_ids), generator=generator, frame_ids=frame_ids) - result.append(output.images[1:]) - - # Concatenate chunks and save - result = np.concatenate(result) - result = [(r * 255).astype("uint8") for r in result] - imageio.mimsave("video.mp4", result, fps=4) - -result = gr.Video(label="Generated Video") -gr.Interface( - fn=text_video, - inputs=gr.Textbox(label="어떤 비디오를 생성할까요? 
: "), - outputs=result - -).launch() \ No newline at end of file diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/config/compat.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/config/compat.py deleted file mode 100644 index 11a08c439bf14defd880e37a938fab8a08e68eeb..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/config/compat.py +++ /dev/null @@ -1,229 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -""" -Backward compatibility of configs. - -Instructions to bump version: -+ It's not needed to bump version if new keys are added. - It's only needed when backward-incompatible changes happen - (i.e., some existing keys disappear, or the meaning of a key changes) -+ To bump version, do the following: - 1. Increment _C.VERSION in defaults.py - 2. Add a converter in this file. - - Each ConverterVX has a function "upgrade" which in-place upgrades config from X-1 to X, - and a function "downgrade" which in-place downgrades config from X to X-1 - - In each function, VERSION is left unchanged. - - Each converter assumes that its input has the relevant keys - (i.e., the input is not a partial config). - 3. Run the tests (test_config.py) to make sure the upgrade & downgrade - functions are consistent. -""" - -import logging -from typing import List, Optional, Tuple - -from .config import CfgNode as CN -from .defaults import _C - -__all__ = ["upgrade_config", "downgrade_config"] - - -def upgrade_config(cfg: CN, to_version: Optional[int] = None) -> CN: - """ - Upgrade a config from its current version to a newer version. - - Args: - cfg (CfgNode): - to_version (int): defaults to the latest version. - """ - cfg = cfg.clone() - if to_version is None: - to_version = _C.VERSION - - assert cfg.VERSION <= to_version, "Cannot upgrade from v{} to v{}!".format( - cfg.VERSION, to_version - ) - for k in range(cfg.VERSION, to_version): - converter = globals()["ConverterV" + str(k + 1)] - converter.upgrade(cfg) - cfg.VERSION = k + 1 - return cfg - - -def downgrade_config(cfg: CN, to_version: int) -> CN: - """ - Downgrade a config from its current version to an older version. - - Args: - cfg (CfgNode): - to_version (int): - - Note: - A general downgrade of arbitrary configs is not always possible due to the - different functionalities in different versions. - The purpose of downgrade is only to recover the defaults in old versions, - allowing it to load an old partial yaml config. - Therefore, the implementation only needs to fill in the default values - in the old version when a general downgrade is not possible. - """ - cfg = cfg.clone() - assert cfg.VERSION >= to_version, "Cannot downgrade from v{} to v{}!".format( - cfg.VERSION, to_version - ) - for k in range(cfg.VERSION, to_version, -1): - converter = globals()["ConverterV" + str(k)] - converter.downgrade(cfg) - cfg.VERSION = k - 1 - return cfg - - -def guess_version(cfg: CN, filename: str) -> int: - """ - Guess the version of a partial config where the VERSION field is not specified. - Returns the version, or the latest if cannot make a guess. - - This makes it easier for users to migrate. 
- """ - logger = logging.getLogger(__name__) - - def _has(name: str) -> bool: - cur = cfg - for n in name.split("."): - if n not in cur: - return False - cur = cur[n] - return True - - # Most users' partial configs have "MODEL.WEIGHT", so guess on it - ret = None - if _has("MODEL.WEIGHT") or _has("TEST.AUG_ON"): - ret = 1 - - if ret is not None: - logger.warning("Config '{}' has no VERSION. Assuming it to be v{}.".format(filename, ret)) - else: - ret = _C.VERSION - logger.warning( - "Config '{}' has no VERSION. Assuming it to be compatible with latest v{}.".format( - filename, ret - ) - ) - return ret - - -def _rename(cfg: CN, old: str, new: str) -> None: - old_keys = old.split(".") - new_keys = new.split(".") - - def _set(key_seq: List[str], val: str) -> None: - cur = cfg - for k in key_seq[:-1]: - if k not in cur: - cur[k] = CN() - cur = cur[k] - cur[key_seq[-1]] = val - - def _get(key_seq: List[str]) -> CN: - cur = cfg - for k in key_seq: - cur = cur[k] - return cur - - def _del(key_seq: List[str]) -> None: - cur = cfg - for k in key_seq[:-1]: - cur = cur[k] - del cur[key_seq[-1]] - if len(cur) == 0 and len(key_seq) > 1: - _del(key_seq[:-1]) - - _set(new_keys, _get(old_keys)) - _del(old_keys) - - -class _RenameConverter: - """ - A converter that handles simple rename. - """ - - RENAME: List[Tuple[str, str]] = [] # list of tuples of (old name, new name) - - @classmethod - def upgrade(cls, cfg: CN) -> None: - for old, new in cls.RENAME: - _rename(cfg, old, new) - - @classmethod - def downgrade(cls, cfg: CN) -> None: - for old, new in cls.RENAME[::-1]: - _rename(cfg, new, old) - - -class ConverterV1(_RenameConverter): - RENAME = [("MODEL.RPN_HEAD.NAME", "MODEL.RPN.HEAD_NAME")] - - -class ConverterV2(_RenameConverter): - """ - A large bulk of rename, before public release. 
- """ - - RENAME = [ - ("MODEL.WEIGHT", "MODEL.WEIGHTS"), - ("MODEL.PANOPTIC_FPN.SEMANTIC_LOSS_SCALE", "MODEL.SEM_SEG_HEAD.LOSS_WEIGHT"), - ("MODEL.PANOPTIC_FPN.RPN_LOSS_SCALE", "MODEL.RPN.LOSS_WEIGHT"), - ("MODEL.PANOPTIC_FPN.INSTANCE_LOSS_SCALE", "MODEL.PANOPTIC_FPN.INSTANCE_LOSS_WEIGHT"), - ("MODEL.PANOPTIC_FPN.COMBINE_ON", "MODEL.PANOPTIC_FPN.COMBINE.ENABLED"), - ( - "MODEL.PANOPTIC_FPN.COMBINE_OVERLAP_THRESHOLD", - "MODEL.PANOPTIC_FPN.COMBINE.OVERLAP_THRESH", - ), - ( - "MODEL.PANOPTIC_FPN.COMBINE_STUFF_AREA_LIMIT", - "MODEL.PANOPTIC_FPN.COMBINE.STUFF_AREA_LIMIT", - ), - ( - "MODEL.PANOPTIC_FPN.COMBINE_INSTANCES_CONFIDENCE_THRESHOLD", - "MODEL.PANOPTIC_FPN.COMBINE.INSTANCES_CONFIDENCE_THRESH", - ), - ("MODEL.ROI_HEADS.SCORE_THRESH", "MODEL.ROI_HEADS.SCORE_THRESH_TEST"), - ("MODEL.ROI_HEADS.NMS", "MODEL.ROI_HEADS.NMS_THRESH_TEST"), - ("MODEL.RETINANET.INFERENCE_SCORE_THRESHOLD", "MODEL.RETINANET.SCORE_THRESH_TEST"), - ("MODEL.RETINANET.INFERENCE_TOPK_CANDIDATES", "MODEL.RETINANET.TOPK_CANDIDATES_TEST"), - ("MODEL.RETINANET.INFERENCE_NMS_THRESHOLD", "MODEL.RETINANET.NMS_THRESH_TEST"), - ("TEST.DETECTIONS_PER_IMG", "TEST.DETECTIONS_PER_IMAGE"), - ("TEST.AUG_ON", "TEST.AUG.ENABLED"), - ("TEST.AUG_MIN_SIZES", "TEST.AUG.MIN_SIZES"), - ("TEST.AUG_MAX_SIZE", "TEST.AUG.MAX_SIZE"), - ("TEST.AUG_FLIP", "TEST.AUG.FLIP"), - ] - - @classmethod - def upgrade(cls, cfg: CN) -> None: - super().upgrade(cfg) - - if cfg.MODEL.META_ARCHITECTURE == "RetinaNet": - _rename( - cfg, "MODEL.RETINANET.ANCHOR_ASPECT_RATIOS", "MODEL.ANCHOR_GENERATOR.ASPECT_RATIOS" - ) - _rename(cfg, "MODEL.RETINANET.ANCHOR_SIZES", "MODEL.ANCHOR_GENERATOR.SIZES") - del cfg["MODEL"]["RPN"]["ANCHOR_SIZES"] - del cfg["MODEL"]["RPN"]["ANCHOR_ASPECT_RATIOS"] - else: - _rename(cfg, "MODEL.RPN.ANCHOR_ASPECT_RATIOS", "MODEL.ANCHOR_GENERATOR.ASPECT_RATIOS") - _rename(cfg, "MODEL.RPN.ANCHOR_SIZES", "MODEL.ANCHOR_GENERATOR.SIZES") - del cfg["MODEL"]["RETINANET"]["ANCHOR_SIZES"] - del cfg["MODEL"]["RETINANET"]["ANCHOR_ASPECT_RATIOS"] - del cfg["MODEL"]["RETINANET"]["ANCHOR_STRIDES"] - - @classmethod - def downgrade(cls, cfg: CN) -> None: - super().downgrade(cfg) - - _rename(cfg, "MODEL.ANCHOR_GENERATOR.ASPECT_RATIOS", "MODEL.RPN.ANCHOR_ASPECT_RATIOS") - _rename(cfg, "MODEL.ANCHOR_GENERATOR.SIZES", "MODEL.RPN.ANCHOR_SIZES") - cfg.MODEL.RETINANET.ANCHOR_ASPECT_RATIOS = cfg.MODEL.RPN.ANCHOR_ASPECT_RATIOS - cfg.MODEL.RETINANET.ANCHOR_SIZES = cfg.MODEL.RPN.ANCHOR_SIZES - cfg.MODEL.RETINANET.ANCHOR_STRIDES = [] # this is not used anywhere in any version diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/evaluation/sem_seg_evaluation.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/evaluation/sem_seg_evaluation.py deleted file mode 100644 index 1c2f3f5a659bc270d313efb053908d9b1e942f44..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/evaluation/sem_seg_evaluation.py +++ /dev/null @@ -1,265 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-import itertools -import json -import logging -import numpy as np -import os -from collections import OrderedDict -from typing import Optional, Union -import annotator.oneformer.pycocotools.mask as mask_util -import torch -from PIL import Image - -from annotator.oneformer.detectron2.data import DatasetCatalog, MetadataCatalog -from annotator.oneformer.detectron2.utils.comm import all_gather, is_main_process, synchronize -from annotator.oneformer.detectron2.utils.file_io import PathManager - -from .evaluator import DatasetEvaluator - -_CV2_IMPORTED = True -try: - import cv2 # noqa -except ImportError: - # OpenCV is an optional dependency at the moment - _CV2_IMPORTED = False - - -def load_image_into_numpy_array( - filename: str, - copy: bool = False, - dtype: Optional[Union[np.dtype, str]] = None, -) -> np.ndarray: - with PathManager.open(filename, "rb") as f: - array = np.array(Image.open(f), copy=copy, dtype=dtype) - return array - - -class SemSegEvaluator(DatasetEvaluator): - """ - Evaluate semantic segmentation metrics. - """ - - def __init__( - self, - dataset_name, - distributed=True, - output_dir=None, - *, - sem_seg_loading_fn=load_image_into_numpy_array, - num_classes=None, - ignore_label=None, - ): - """ - Args: - dataset_name (str): name of the dataset to be evaluated. - distributed (bool): if True, will collect results from all ranks for evaluation. - Otherwise, will evaluate the results in the current process. - output_dir (str): an output directory to dump results. - sem_seg_loading_fn: function to read sem seg file and load into numpy array. - Default provided, but projects can customize. - num_classes, ignore_label: deprecated argument - """ - self._logger = logging.getLogger(__name__) - if num_classes is not None: - self._logger.warn( - "SemSegEvaluator(num_classes) is deprecated! It should be obtained from metadata." - ) - if ignore_label is not None: - self._logger.warn( - "SemSegEvaluator(ignore_label) is deprecated! It should be obtained from metadata." - ) - self._dataset_name = dataset_name - self._distributed = distributed - self._output_dir = output_dir - - self._cpu_device = torch.device("cpu") - - self.input_file_to_gt_file = { - dataset_record["file_name"]: dataset_record["sem_seg_file_name"] - for dataset_record in DatasetCatalog.get(dataset_name) - } - - meta = MetadataCatalog.get(dataset_name) - # Dict that maps contiguous training ids to COCO category ids - try: - c2d = meta.stuff_dataset_id_to_contiguous_id - self._contiguous_id_to_dataset_id = {v: k for k, v in c2d.items()} - except AttributeError: - self._contiguous_id_to_dataset_id = None - self._class_names = meta.stuff_classes - self.sem_seg_loading_fn = sem_seg_loading_fn - self._num_classes = len(meta.stuff_classes) - if num_classes is not None: - assert self._num_classes == num_classes, f"{self._num_classes} != {num_classes}" - self._ignore_label = ignore_label if ignore_label is not None else meta.ignore_label - - # This is because cv2.erode did not work for int datatype. Only works for uint8. - self._compute_boundary_iou = True - if not _CV2_IMPORTED: - self._compute_boundary_iou = False - self._logger.warn( - """Boundary IoU calculation requires OpenCV. B-IoU metrics are - not going to be computed because OpenCV is not available to import.""" - ) - if self._num_classes >= np.iinfo(np.uint8).max: - self._compute_boundary_iou = False - self._logger.warn( - f"""SemSegEvaluator(num_classes) is more than supported value for Boundary IoU calculation! - B-IoU metrics are not going to be computed. 
Max allowed value (exclusive) - for num_classes for calculating Boundary IoU is {np.iinfo(np.uint8).max}. - The number of classes of dataset {self._dataset_name} is {self._num_classes}""" - ) - - def reset(self): - self._conf_matrix = np.zeros((self._num_classes + 1, self._num_classes + 1), dtype=np.int64) - self._b_conf_matrix = np.zeros( - (self._num_classes + 1, self._num_classes + 1), dtype=np.int64 - ) - self._predictions = [] - - def process(self, inputs, outputs): - """ - Args: - inputs: the inputs to a model. - It is a list of dicts. Each dict corresponds to an image and - contains keys like "height", "width", "file_name". - outputs: the outputs of a model. It is either list of semantic segmentation predictions - (Tensor [H, W]) or list of dicts with key "sem_seg" that contains semantic - segmentation prediction in the same format. - """ - for input, output in zip(inputs, outputs): - output = output["sem_seg"].argmax(dim=0).to(self._cpu_device) - pred = np.array(output, dtype=np.int) - gt_filename = self.input_file_to_gt_file[input["file_name"]] - gt = self.sem_seg_loading_fn(gt_filename, dtype=np.int) - - gt[gt == self._ignore_label] = self._num_classes - - self._conf_matrix += np.bincount( - (self._num_classes + 1) * pred.reshape(-1) + gt.reshape(-1), - minlength=self._conf_matrix.size, - ).reshape(self._conf_matrix.shape) - - if self._compute_boundary_iou: - b_gt = self._mask_to_boundary(gt.astype(np.uint8)) - b_pred = self._mask_to_boundary(pred.astype(np.uint8)) - - self._b_conf_matrix += np.bincount( - (self._num_classes + 1) * b_pred.reshape(-1) + b_gt.reshape(-1), - minlength=self._conf_matrix.size, - ).reshape(self._conf_matrix.shape) - - self._predictions.extend(self.encode_json_sem_seg(pred, input["file_name"])) - - def evaluate(self): - """ - Evaluates standard semantic segmentation metrics (http://cocodataset.org/#stuff-eval): - - * Mean intersection-over-union averaged across classes (mIoU) - * Frequency Weighted IoU (fwIoU) - * Mean pixel accuracy averaged across classes (mACC) - * Pixel Accuracy (pACC) - """ - if self._distributed: - synchronize() - conf_matrix_list = all_gather(self._conf_matrix) - b_conf_matrix_list = all_gather(self._b_conf_matrix) - self._predictions = all_gather(self._predictions) - self._predictions = list(itertools.chain(*self._predictions)) - if not is_main_process(): - return - - self._conf_matrix = np.zeros_like(self._conf_matrix) - for conf_matrix in conf_matrix_list: - self._conf_matrix += conf_matrix - - self._b_conf_matrix = np.zeros_like(self._b_conf_matrix) - for b_conf_matrix in b_conf_matrix_list: - self._b_conf_matrix += b_conf_matrix - - if self._output_dir: - PathManager.mkdirs(self._output_dir) - file_path = os.path.join(self._output_dir, "sem_seg_predictions.json") - with PathManager.open(file_path, "w") as f: - f.write(json.dumps(self._predictions)) - - acc = np.full(self._num_classes, np.nan, dtype=np.float) - iou = np.full(self._num_classes, np.nan, dtype=np.float) - tp = self._conf_matrix.diagonal()[:-1].astype(np.float) - pos_gt = np.sum(self._conf_matrix[:-1, :-1], axis=0).astype(np.float) - class_weights = pos_gt / np.sum(pos_gt) - pos_pred = np.sum(self._conf_matrix[:-1, :-1], axis=1).astype(np.float) - acc_valid = pos_gt > 0 - acc[acc_valid] = tp[acc_valid] / pos_gt[acc_valid] - union = pos_gt + pos_pred - tp - iou_valid = np.logical_and(acc_valid, union > 0) - iou[iou_valid] = tp[iou_valid] / union[iou_valid] - macc = np.sum(acc[acc_valid]) / np.sum(acc_valid) - miou = np.sum(iou[iou_valid]) / np.sum(iou_valid) - 
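# frequency-weighted IoU (fwIoU): each class's IoU weighted by that class's
# share of ground-truth pixels (class_weights computed above)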
fiou = np.sum(iou[iou_valid] * class_weights[iou_valid]) - pacc = np.sum(tp) / np.sum(pos_gt) - - if self._compute_boundary_iou: - b_iou = np.full(self._num_classes, np.nan, dtype=np.float) - b_tp = self._b_conf_matrix.diagonal()[:-1].astype(np.float) - b_pos_gt = np.sum(self._b_conf_matrix[:-1, :-1], axis=0).astype(np.float) - b_pos_pred = np.sum(self._b_conf_matrix[:-1, :-1], axis=1).astype(np.float) - b_union = b_pos_gt + b_pos_pred - b_tp - b_iou_valid = b_union > 0 - b_iou[b_iou_valid] = b_tp[b_iou_valid] / b_union[b_iou_valid] - - res = {} - res["mIoU"] = 100 * miou - res["fwIoU"] = 100 * fiou - for i, name in enumerate(self._class_names): - res[f"IoU-{name}"] = 100 * iou[i] - if self._compute_boundary_iou: - res[f"BoundaryIoU-{name}"] = 100 * b_iou[i] - res[f"min(IoU, B-Iou)-{name}"] = 100 * min(iou[i], b_iou[i]) - res["mACC"] = 100 * macc - res["pACC"] = 100 * pacc - for i, name in enumerate(self._class_names): - res[f"ACC-{name}"] = 100 * acc[i] - - if self._output_dir: - file_path = os.path.join(self._output_dir, "sem_seg_evaluation.pth") - with PathManager.open(file_path, "wb") as f: - torch.save(res, f) - results = OrderedDict({"sem_seg": res}) - self._logger.info(results) - return results - - def encode_json_sem_seg(self, sem_seg, input_file_name): - """ - Convert semantic segmentation to COCO stuff format with segments encoded as RLEs. - See http://cocodataset.org/#format-results - """ - json_list = [] - for label in np.unique(sem_seg): - if self._contiguous_id_to_dataset_id is not None: - assert ( - label in self._contiguous_id_to_dataset_id - ), "Label {} is not in the metadata info for {}".format(label, self._dataset_name) - dataset_id = self._contiguous_id_to_dataset_id[label] - else: - dataset_id = int(label) - mask = (sem_seg == label).astype(np.uint8) - mask_rle = mask_util.encode(np.array(mask[:, :, None], order="F"))[0] - mask_rle["counts"] = mask_rle["counts"].decode("utf-8") - json_list.append( - {"file_name": input_file_name, "category_id": dataset_id, "segmentation": mask_rle} - ) - return json_list - - def _mask_to_boundary(self, mask: np.ndarray, dilation_ratio=0.02): - assert mask.ndim == 2, "mask_to_boundary expects a 2-dimensional image" - h, w = mask.shape - diag_len = np.sqrt(h**2 + w**2) - dilation = max(1, int(round(dilation_ratio * diag_len))) - kernel = np.ones((3, 3), dtype=np.uint8) - - padded_mask = cv2.copyMakeBorder(mask, 1, 1, 1, 1, cv2.BORDER_CONSTANT, value=0) - eroded_mask_with_padding = cv2.erode(padded_mask, kernel, iterations=dilation) - eroded_mask = eroded_mask_with_padding[1:-1, 1:-1] - boundary = mask - eroded_mask - return boundary diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/utils/README.md b/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/utils/README.md deleted file mode 100644 index 9765b24a730b77556104187ac3ef5439ab0859fd..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/utils/README.md +++ /dev/null @@ -1,5 +0,0 @@ -# Utility functions - -This folder contain utility functions that are not used in the -core library, but are useful for building models or training -code using the config system. 
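The summary metrics listed in the SemSegEvaluator.evaluate() docstring above all fall out of the accumulated confusion matrix. A minimal standalone sketch with a made-up 3-class matrix (rows = prediction, columns = ground truth, ignore-label column already dropped), mirroring the arithmetic above:

import numpy as np

conf = np.array([[50.,  2.,  3.],   # predicted class 0
                 [ 4., 40.,  1.],   # predicted class 1
                 [ 6.,  8., 30.]])  # predicted class 2

tp = conf.diagonal()                 # true positives per class
pos_gt = conf.sum(axis=0)            # ground-truth pixels per class
pos_pred = conf.sum(axis=1)          # predicted pixels per class
union = pos_gt + pos_pred - tp

iou = tp / union
acc = tp / pos_gt
class_weights = pos_gt / pos_gt.sum()

print("mIoU :", iou.mean())                    # mean IoU over classes
print("fwIoU:", (iou * class_weights).sum())   # frequency-weighted IoU
print("mACC :", acc.mean())                    # mean per-class accuracy
print("pACC :", tp.sum() / pos_gt.sum())       # overall pixel accuracy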
diff --git a/spaces/Surn/UnlimitedMusicGen/audiocraft/utils/extend.py b/spaces/Surn/UnlimitedMusicGen/audiocraft/utils/extend.py deleted file mode 100644 index 5c919a5cb740e14ca8751d68a0ab16d9400d35d6..0000000000000000000000000000000000000000 --- a/spaces/Surn/UnlimitedMusicGen/audiocraft/utils/extend.py +++ /dev/null @@ -1,332 +0,0 @@ -from tabnanny import verbose -import torch -import math -from audiocraft.models import MusicGen -import numpy as np -from PIL import Image, ImageDraw, ImageFont, ImageColor -import string -import tempfile -import os -import textwrap -import requests -from io import BytesIO -from huggingface_hub import hf_hub_download -import librosa - - -INTERRUPTING = False - -def separate_audio_segments(audio, segment_duration=30, overlap=1): - sr, audio_data = audio[0], audio[1] - - segment_samples = sr * segment_duration - total_samples = max(min((len(audio_data) // segment_samples), 25), 0) - overlap_samples = sr * overlap - - segments = [] - start_sample = 0 - # handle the case where the audio is shorter than the segment duration - if total_samples == 0: - total_samples = 1 - segment_samples = len(audio_data) - overlap_samples = 0 - while total_samples >= segment_samples: - # Collect the segment - # the end sample is the start sample plus the segment samples, - # the start sample, after 0, is minus the overlap samples to account for the overlap - end_sample = start_sample + segment_samples - segment = audio_data[start_sample:end_sample] - segments.append((sr, segment)) - - start_sample += segment_samples - overlap_samples - total_samples -= segment_samples - - # Collect the final segment - if total_samples > 0: - segment = audio_data[-segment_samples:] - segments.append((sr, segment)) - print(f"separate_audio_segments: {len(segments)} segments of length {segment_samples // sr} seconds") - return segments - -def generate_music_segments(text, melody, seed, MODEL, duration:int=10, overlap:int=1, segment_duration:int=30, prompt_index:int=0, harmony_only:bool= False): - # generate audio segments - melody_segments = separate_audio_segments(melody, segment_duration, 0) - - # Create lists to store the melody tensors for each segment - melodys = [] - output_segments = [] - last_chunk = [] - text += ", seed=" + str(seed) - prompt_segment = None - # prevent hacking - duration = min(duration, 720) - overlap = min(overlap, 15) - - # Calculate the total number of segments - total_segments = max(math.ceil(duration / segment_duration),1) - #calculate duration loss from segment overlap - duration_loss = max(total_segments - 1,0) * math.ceil(overlap / 2) - #calc excess duration - excess_duration = segment_duration - (total_segments * segment_duration - duration) - print(f"total Segments to Generate: {total_segments} for {duration} seconds. Each segment is {segment_duration} seconds. Excess {excess_duration} Overlap Loss {duration_loss}") - duration += duration_loss - while excess_duration + duration_loss > segment_duration: - total_segments += 1 - #calculate duration loss from segment overlap - duration_loss += math.ceil(overlap / 2) - #calc excess duration - excess_duration = segment_duration - (total_segments * segment_duration - duration) - print(f"total Segments to Generate: {total_segments} for {duration} seconds. Each segment is {segment_duration} seconds. 
Excess {excess_duration} Overlap Loss {duration_loss}") - if excess_duration + duration_loss > segment_duration: - duration += duration_loss - duration_loss = 0 - total_segments = min(total_segments, (720 // segment_duration)) - - # If melody_segments is shorter than total_segments, repeat the segments until the total_segments is reached - if len(melody_segments) < total_segments: - #fix melody_segments - for i in range(total_segments - len(melody_segments)): - segment = melody_segments[i] - melody_segments.append(segment) - print(f"melody_segments: {len(melody_segments)} fixed") - - # Iterate over the segments to create list of Meldoy tensors - for segment_idx in range(total_segments): - if INTERRUPTING: - return [], duration - print(f"segment {segment_idx + 1} of {total_segments} \r") - - if harmony_only: - # REMOVE PERCUSION FROM MELODY - # Apply HPSS using librosa - verse_harmonic, verse_percussive = librosa.effects.hpss(melody_segments[segment_idx][1]) - # Convert the separated components back to torch.Tensor - #harmonic_tensor = torch.from_numpy(verse_harmonic) - #percussive_tensor = torch.from_numpy(verse_percussive) - sr, verse = melody_segments[segment_idx][0], torch.from_numpy(verse_harmonic).to(MODEL.device).float().t().unsqueeze(0) - else: - sr, verse = melody_segments[segment_idx][0], torch.from_numpy(melody_segments[segment_idx][1]).to(MODEL.device).float().t().unsqueeze(0) - - print(f"shape:{verse.shape} dim:{verse.dim()}") - if verse.dim() == 2: - verse = verse[None] - verse = verse[..., :int(sr * MODEL.lm.cfg.dataset.segment_duration)] - - # Append the segment to the melodys list - melodys.append(verse) - - torch.manual_seed(seed) - - # If user selects a prompt segment, generate a new prompt segment to use on all segments - #default to the first segment for prompt conditioning - prompt_verse = melodys[0] - if prompt_index > 0: - # Get a prompt segment from the selected verse, normally the first verse - prompt_verse = melodys[prompt_index if prompt_index <= (total_segments - 1) else (total_segments -1)] - - # set the prompt segment MODEL generation params - MODEL.set_generation_params( - use_sampling=True, - top_k=MODEL.generation_params["top_k"], - top_p=MODEL.generation_params["top_p"], - temperature=MODEL.generation_params["temp"], - cfg_coef=MODEL.generation_params["cfg_coef"], - duration=segment_duration, - two_step_cfg=False, - rep_penalty=0.5 - ) - # Generate a new prompt segment. 
This will be applied to all segments for consistency - print(f"Generating New Prompt Segment: {text} from verse {prompt_index}\r") - prompt_segment = MODEL.generate_with_all( - descriptions=[text], - melody_wavs=prompt_verse, - sample_rate=sr, - progress=False, - prompt=None, - ) - - for idx, verse in enumerate(melodys): - if INTERRUPTING: - return output_segments, duration - - print(f'Segment duration: {segment_duration}, duration: {duration}, overlap: {overlap} Overlap Loss: {duration_loss}') - # Compensate for the length of final segment - if ((idx + 1) == len(melodys)) or (duration < segment_duration): - mod_duration = max(min(duration, segment_duration),1) - print(f'Modify verse length, duration: {duration}, overlap: {overlap} Overlap Loss: {duration_loss} to mod duration: {mod_duration}') - MODEL.set_generation_params( - use_sampling=True, - top_k=MODEL.generation_params["top_k"], - top_p=MODEL.generation_params["top_p"], - temperature=MODEL.generation_params["temp"], - cfg_coef=MODEL.generation_params["cfg_coef"], - duration=mod_duration, - two_step_cfg=False, - rep_penalty=0.5 - ) - try: - # get last chunk - verse = verse[:, :, -mod_duration*MODEL.sample_rate:] - prompt_segment = prompt_segment[:, :, -mod_duration*MODEL.sample_rate:] - except: - # get first chunk - verse = verse[:, :, :mod_duration*MODEL.sample_rate] - prompt_segment = prompt_segment[:, :, :mod_duration*MODEL.sample_rate] - - - print(f"Generating New Melody Segment {idx + 1}: {text}\r") - output = MODEL.generate_with_all( - descriptions=[text], - melody_wavs=verse, - sample_rate=sr, - progress=False, - prompt=prompt_segment, - ) - # If user selects a prompt segment, use the prompt segment for all segments - # Otherwise, use the previous segment as the prompt - if prompt_index < 0: - prompt_segment = output - - # Append the generated output to the list of segments - #output_segments.append(output[:, :segment_duration]) - output_segments.append(output) - print(f"output_segments: {len(output_segments)}: shape: {output.shape} dim {output.dim()}") - #track duration - if duration > segment_duration: - duration -= segment_duration - return output_segments, excess_duration - -def save_image(image): - """ - Saves a PIL image to a temporary file and returns the file path. - - Parameters: - - image: PIL.Image - The PIL image object to be saved. - - Returns: - - str or None: The file path where the image was saved, - or None if there was an error saving the image. - - """ - temp_dir = tempfile.gettempdir() - temp_file = tempfile.NamedTemporaryFile(suffix=".png", dir=temp_dir, delete=False) - temp_file.close() - file_path = temp_file.name - - try: - image.save(file_path) - - except Exception as e: - print("Unable to save image:", str(e)) - return None - finally: - return file_path - -def hex_to_rgba(hex_color): - try: - # Convert hex color to RGBA tuple - rgba = ImageColor.getcolor(hex_color, "RGBA") - except ValueError: - # If the hex color is invalid, default to yellow - rgba = (255,255,0,255) - return rgba - -def load_font(font_name, font_size=16): - """ - Load a font using the provided font name and font size. - - Parameters: - font_name (str): The name of the font to load. Can be a font name recognized by the system, a URL to download the font file, - a local file path, or a Hugging Face model hub identifier. - font_size (int, optional): The size of the font. Default is 16. - - Returns: - ImageFont.FreeTypeFont: The loaded font object. 
- - Notes: - This function attempts to load the font using various methods until a suitable font is found. If the provided font_name - cannot be loaded, it falls back to a default font. - - The font_name can be one of the following: - - A font name recognized by the system, which can be loaded using ImageFont.truetype. - - A URL pointing to the font file, which is downloaded using requests and then loaded using ImageFont.truetype. - - A local file path to the font file, which is loaded using ImageFont.truetype. - - A Hugging Face model hub identifier, which downloads the font file from the Hugging Face model hub using hf_hub_download - and then loads it using ImageFont.truetype. - - Example: - font = load_font("Arial.ttf", font_size=20) - """ - font = None - if not "http" in font_name: - try: - font = ImageFont.truetype(font_name, font_size) - except (FileNotFoundError, OSError): - print("Font not found. Using Hugging Face download..\n") - - if font is None: - try: - font_path = ImageFont.truetype(hf_hub_download(repo_id=os.environ.get('SPACE_ID', ''), filename="assets/" + font_name, repo_type="space"), encoding="UTF-8") - font = ImageFont.truetype(font_path, font_size) - except (FileNotFoundError, OSError): - print("Font not found. Trying to download from local assets folder...\n") - if font is None: - try: - font = ImageFont.truetype("assets/" + font_name, font_size) - except (FileNotFoundError, OSError): - print("Font not found. Trying to download from URL...\n") - - if font is None: - try: - req = requests.get(font_name) - font = ImageFont.truetype(BytesIO(req.content), font_size) - except (FileNotFoundError, OSError): - print(f"Font not found: {font_name} Using default font\n") - if font: - print(f"Font loaded {font.getname()}") - else: - font = ImageFont.load_default() - return font - - -def add_settings_to_image(title: str = "title", description: str = "", width: int = 768, height: int = 512, background_path: str = "", font: str = "arial.ttf", font_color: str = "#ffffff"): - # Create a new RGBA image with the specified dimensions - image = Image.new("RGBA", (width, height), (255, 255, 255, 0)) - # If a background image is specified, open it and paste it onto the image - if background_path == "": - background = Image.new("RGBA", (width, height), (255, 255, 255, 255)) - else: - background = Image.open(background_path).convert("RGBA") - - #Convert font color to RGBA tuple - font_color = hex_to_rgba(font_color) - - # Calculate the center coordinates for placing the text - text_x = width // 2 - text_y = height // 2 - # Draw the title text at the center top - title_font = load_font(font, 26) # Replace with your desired font and size - - title_text = '\n'.join(textwrap.wrap(title, width // 12)) - title_x, title_y, title_text_width, title_text_height = title_font.getbbox(title_text) - title_x = max(text_x - (title_text_width // 2), title_x, 0) - title_y = text_y - (height // 2) + 10 # 10 pixels padding from the top - title_draw = ImageDraw.Draw(image) - title_draw.multiline_text((title_x, title_y), title, fill=font_color, font=title_font, align="center") - # Draw the description text two lines below the title - description_font = load_font(font, 16) # Replace with your desired font and size - description_text = '\n'.join(textwrap.wrap(description, width // 12)) - description_x, description_y, description_text_width, description_text_height = description_font.getbbox(description_text) - description_x = max(text_x - (description_text_width // 2), description_x, 0) - description_y = title_y + 
title_text_height + 20 # 20 pixels spacing between title and description - description_draw = ImageDraw.Draw(image) - description_draw.multiline_text((description_x, description_y), description_text, fill=font_color, font=description_font, align="center") - # Calculate the offset to center the image on the background - bg_w, bg_h = background.size - offset = ((bg_w - width) // 2, (bg_h - height) // 2) - # Paste the image onto the background - background.paste(image, offset, mask=image) - - # Save the image and return the file path - return save_image(background) \ No newline at end of file diff --git a/spaces/TH5314/newbing/src/components/theme-toggle.tsx b/spaces/TH5314/newbing/src/components/theme-toggle.tsx deleted file mode 100644 index 67d3f1a2c163ccbeb52c40a7e42f107190237154..0000000000000000000000000000000000000000 --- a/spaces/TH5314/newbing/src/components/theme-toggle.tsx +++ /dev/null @@ -1,31 +0,0 @@ -'use client' - -import * as React from 'react' -import { useTheme } from 'next-themes' - -import { Button } from '@/components/ui/button' -import { IconMoon, IconSun } from '@/components/ui/icons' - -export function ThemeToggle() { - const { setTheme, theme } = useTheme() - const [_, startTransition] = React.useTransition() - - return ( - - ) -} diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/tenacity/before.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/tenacity/before.py deleted file mode 100644 index cfd7dc72ee7fe9300948133cfeb660f610b90e4e..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/tenacity/before.py +++ /dev/null @@ -1,46 +0,0 @@ -# Copyright 2016 Julien Danjou -# Copyright 2016 Joshua Harlow -# Copyright 2013-2014 Ray Holder -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
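Stepping back to generate_music_segments above: the segment bookkeeping (total_segments, duration_loss, excess_duration) is easiest to follow with concrete numbers. A small sketch that replays that arithmetic for an assumed request of 75 seconds, 30-second segments, and 2 seconds of overlap:

import math

duration, segment_duration, overlap = 75, 30, 2   # assumed example values

total_segments = max(math.ceil(duration / segment_duration), 1)       # 3 segments
duration_loss = max(total_segments - 1, 0) * math.ceil(overlap / 2)   # 2 seconds lost to overlaps
excess_duration = segment_duration - (total_segments * segment_duration - duration)  # 15 seconds spare

print(total_segments, duration_loss, excess_duration)
# 3 segments cover the 75 + 2 = 77 seconds actually generated;
# the final segment only needs 15 of its 30 seconds.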
- -import typing - -from pip._vendor.tenacity import _utils - -if typing.TYPE_CHECKING: - import logging - - from pip._vendor.tenacity import RetryCallState - - -def before_nothing(retry_state: "RetryCallState") -> None: - """Before call strategy that does nothing.""" - - -def before_log(logger: "logging.Logger", log_level: int) -> typing.Callable[["RetryCallState"], None]: - """Before call strategy that logs to some logger the attempt.""" - - def log_it(retry_state: "RetryCallState") -> None: - if retry_state.fn is None: - # NOTE(sileht): can't really happen, but we must please mypy - fn_name = "" - else: - fn_name = _utils.get_callback_name(retry_state.fn) - logger.log( - log_level, - f"Starting call to '{fn_name}', " - f"this is the {_utils.to_ordinal(retry_state.attempt_number)} time calling it.", - ) - - return log_it diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pkg_resources/_vendor/importlib_resources/_adapters.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pkg_resources/_vendor/importlib_resources/_adapters.py deleted file mode 100644 index ea363d86a564b5450666aa00aecd46353326a75a..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pkg_resources/_vendor/importlib_resources/_adapters.py +++ /dev/null @@ -1,170 +0,0 @@ -from contextlib import suppress -from io import TextIOWrapper - -from . import abc - - -class SpecLoaderAdapter: - """ - Adapt a package spec to adapt the underlying loader. - """ - - def __init__(self, spec, adapter=lambda spec: spec.loader): - self.spec = spec - self.loader = adapter(spec) - - def __getattr__(self, name): - return getattr(self.spec, name) - - -class TraversableResourcesLoader: - """ - Adapt a loader to provide TraversableResources. - """ - - def __init__(self, spec): - self.spec = spec - - def get_resource_reader(self, name): - return CompatibilityFiles(self.spec)._native() - - -def _io_wrapper(file, mode='r', *args, **kwargs): - if mode == 'r': - return TextIOWrapper(file, *args, **kwargs) - elif mode == 'rb': - return file - raise ValueError( - "Invalid mode value '{}', only 'r' and 'rb' are supported".format(mode) - ) - - -class CompatibilityFiles: - """ - Adapter for an existing or non-existent resource reader - to provide a compatibility .files(). - """ - - class SpecPath(abc.Traversable): - """ - Path tied to a module spec. - Can be read and exposes the resource reader children. - """ - - def __init__(self, spec, reader): - self._spec = spec - self._reader = reader - - def iterdir(self): - if not self._reader: - return iter(()) - return iter( - CompatibilityFiles.ChildPath(self._reader, path) - for path in self._reader.contents() - ) - - def is_file(self): - return False - - is_dir = is_file - - def joinpath(self, other): - if not self._reader: - return CompatibilityFiles.OrphanPath(other) - return CompatibilityFiles.ChildPath(self._reader, other) - - @property - def name(self): - return self._spec.name - - def open(self, mode='r', *args, **kwargs): - return _io_wrapper(self._reader.open_resource(None), mode, *args, **kwargs) - - class ChildPath(abc.Traversable): - """ - Path tied to a resource reader child. - Can be read but doesn't expose any meaningful children. 
- """ - - def __init__(self, reader, name): - self._reader = reader - self._name = name - - def iterdir(self): - return iter(()) - - def is_file(self): - return self._reader.is_resource(self.name) - - def is_dir(self): - return not self.is_file() - - def joinpath(self, other): - return CompatibilityFiles.OrphanPath(self.name, other) - - @property - def name(self): - return self._name - - def open(self, mode='r', *args, **kwargs): - return _io_wrapper( - self._reader.open_resource(self.name), mode, *args, **kwargs - ) - - class OrphanPath(abc.Traversable): - """ - Orphan path, not tied to a module spec or resource reader. - Can't be read and doesn't expose any meaningful children. - """ - - def __init__(self, *path_parts): - if len(path_parts) < 1: - raise ValueError('Need at least one path part to construct a path') - self._path = path_parts - - def iterdir(self): - return iter(()) - - def is_file(self): - return False - - is_dir = is_file - - def joinpath(self, other): - return CompatibilityFiles.OrphanPath(*self._path, other) - - @property - def name(self): - return self._path[-1] - - def open(self, mode='r', *args, **kwargs): - raise FileNotFoundError("Can't open orphan path") - - def __init__(self, spec): - self.spec = spec - - @property - def _reader(self): - with suppress(AttributeError): - return self.spec.loader.get_resource_reader(self.spec.name) - - def _native(self): - """ - Return the native reader if it supports files(). - """ - reader = self._reader - return reader if hasattr(reader, 'files') else self - - def __getattr__(self, attr): - return getattr(self._reader, attr) - - def files(self): - return CompatibilityFiles.SpecPath(self.spec, self._reader) - - -def wrap_spec(package): - """ - Construct a package spec with traversable compatibility - on the spec/loader/reader. - """ - return SpecLoaderAdapter(package.__spec__, TraversableResourcesLoader) diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/dep_util.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/dep_util.py deleted file mode 100644 index 521eb716a5ebbcbc2c59654c4e71c3f0ff1abf26..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/dep_util.py +++ /dev/null @@ -1,25 +0,0 @@ -from distutils.dep_util import newer_group - - -# yes, this is was almost entirely copy-pasted from -# 'newer_pairwise()', this is just another convenience -# function. -def newer_pairwise_group(sources_groups, targets): - """Walk both arguments in parallel, testing if each source group is newer - than its corresponding target. Returns a pair of lists (sources_groups, - targets) where sources is newer than target, according to the semantics - of 'newer_group()'. 
- """ - if len(sources_groups) != len(targets): - raise ValueError( - "'sources_group' and 'targets' must be the same length") - - # build a pair of lists (sources_groups, targets) where source is newer - n_sources = [] - n_targets = [] - for i in range(len(sources_groups)): - if newer_group(sources_groups[i], targets[i]): - n_sources.append(sources_groups[i]) - n_targets.append(targets[i]) - - return n_sources, n_targets diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/structures/keypoints.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/structures/keypoints.py deleted file mode 100644 index d0ee8724ac42087e4ec770a3dfb8e040a62b4c15..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/structures/keypoints.py +++ /dev/null @@ -1,239 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import numpy as np -from typing import Any, List, Tuple, Union -import torch -from torch.nn import functional as F - - -class Keypoints: - """ - Stores keypoint **annotation** data. GT Instances have a `gt_keypoints` property - containing the x,y location and visibility flag of each keypoint. This tensor has shape - (N, K, 3) where N is the number of instances and K is the number of keypoints per instance. - - The visibility flag follows the COCO format and must be one of three integers: - - * v=0: not labeled (in which case x=y=0) - * v=1: labeled but not visible - * v=2: labeled and visible - """ - - def __init__(self, keypoints: Union[torch.Tensor, np.ndarray, List[List[float]]]): - """ - Arguments: - keypoints: A Tensor, numpy array, or list of the x, y, and visibility of each keypoint. - The shape should be (N, K, 3) where N is the number of - instances, and K is the number of keypoints per instance. - """ - device = keypoints.device if isinstance(keypoints, torch.Tensor) else torch.device("cpu") - keypoints = torch.as_tensor(keypoints, dtype=torch.float32, device=device) - assert keypoints.dim() == 3 and keypoints.shape[2] == 3, keypoints.shape - self.tensor = keypoints - - def __len__(self) -> int: - return self.tensor.size(0) - - def to(self, *args: Any, **kwargs: Any) -> "Keypoints": - return type(self)(self.tensor.to(*args, **kwargs)) - - @property - def device(self) -> torch.device: - return self.tensor.device - - def to_heatmap(self, boxes: torch.Tensor, heatmap_size: int) -> torch.Tensor: - """ - Convert keypoint annotations to a heatmap of one-hot labels for training, - as described in :paper:`Mask R-CNN`. - - Arguments: - boxes: Nx4 tensor, the boxes to draw the keypoints to - - Returns: - heatmaps: - A tensor of shape (N, K), each element is integer spatial label - in the range [0, heatmap_size**2 - 1] for each keypoint in the input. - valid: - A tensor of shape (N, K) containing whether each keypoint is in the roi or not. - """ - return _keypoints_to_heatmap(self.tensor, boxes, heatmap_size) - - def __getitem__(self, item: Union[int, slice, torch.BoolTensor]) -> "Keypoints": - """ - Create a new `Keypoints` by indexing on this `Keypoints`. - - The following usage are allowed: - - 1. `new_kpts = kpts[3]`: return a `Keypoints` which contains only one instance. - 2. `new_kpts = kpts[2:10]`: return a slice of key points. - 3. `new_kpts = kpts[vector]`, where vector is a torch.ByteTensor - with `length = len(kpts)`. Nonzero elements in the vector will be selected. 
- - Note that the returned Keypoints might share storage with this Keypoints, - subject to Pytorch's indexing semantics. - """ - if isinstance(item, int): - return Keypoints([self.tensor[item]]) - return Keypoints(self.tensor[item]) - - def __repr__(self) -> str: - s = self.__class__.__name__ + "(" - s += "num_instances={})".format(len(self.tensor)) - return s - - @staticmethod - def cat(keypoints_list: List["Keypoints"]) -> "Keypoints": - """ - Concatenates a list of Keypoints into a single Keypoints - - Arguments: - keypoints_list (list[Keypoints]) - - Returns: - Keypoints: the concatenated Keypoints - """ - assert isinstance(keypoints_list, (list, tuple)) - assert len(keypoints_list) > 0 - assert all(isinstance(keypoints, Keypoints) for keypoints in keypoints_list) - - cat_kpts = type(keypoints_list[0])( - torch.cat([kpts.tensor for kpts in keypoints_list], dim=0) - ) - return cat_kpts - - -# TODO make this nicer, this is a direct translation from C2 (but removing the inner loop) -def _keypoints_to_heatmap( - keypoints: torch.Tensor, rois: torch.Tensor, heatmap_size: int -) -> Tuple[torch.Tensor, torch.Tensor]: - """ - Encode keypoint locations into a target heatmap for use in SoftmaxWithLoss across space. - - Maps keypoints from the half-open interval [x1, x2) on continuous image coordinates to the - closed interval [0, heatmap_size - 1] on discrete image coordinates. We use the - continuous-discrete conversion from Heckbert 1990 ("What is the coordinate of a pixel?"): - d = floor(c) and c = d + 0.5, where d is a discrete coordinate and c is a continuous coordinate. - - Arguments: - keypoints: tensor of keypoint locations in of shape (N, K, 3). - rois: Nx4 tensor of rois in xyxy format - heatmap_size: integer side length of square heatmap. - - Returns: - heatmaps: A tensor of shape (N, K) containing an integer spatial label - in the range [0, heatmap_size**2 - 1] for each keypoint in the input. - valid: A tensor of shape (N, K) containing whether each keypoint is in - the roi or not. - """ - - if rois.numel() == 0: - return rois.new().long(), rois.new().long() - offset_x = rois[:, 0] - offset_y = rois[:, 1] - scale_x = heatmap_size / (rois[:, 2] - rois[:, 0]) - scale_y = heatmap_size / (rois[:, 3] - rois[:, 1]) - - offset_x = offset_x[:, None] - offset_y = offset_y[:, None] - scale_x = scale_x[:, None] - scale_y = scale_y[:, None] - - x = keypoints[..., 0] - y = keypoints[..., 1] - - x_boundary_inds = x == rois[:, 2][:, None] - y_boundary_inds = y == rois[:, 3][:, None] - - x = (x - offset_x) * scale_x - x = x.floor().long() - y = (y - offset_y) * scale_y - y = y.floor().long() - - x[x_boundary_inds] = heatmap_size - 1 - y[y_boundary_inds] = heatmap_size - 1 - - valid_loc = (x >= 0) & (y >= 0) & (x < heatmap_size) & (y < heatmap_size) - vis = keypoints[..., 2] > 0 - valid = (valid_loc & vis).long() - - lin_ind = y * heatmap_size + x - heatmaps = lin_ind * valid - - return heatmaps, valid - - -@torch.jit.script_if_tracing -def heatmaps_to_keypoints(maps: torch.Tensor, rois: torch.Tensor) -> torch.Tensor: - """ - Extract predicted keypoint locations from heatmaps. - - Args: - maps (Tensor): (#ROIs, #keypoints, POOL_H, POOL_W). The predicted heatmap of logits for - each ROI and each keypoint. - rois (Tensor): (#ROIs, 4). The box of each ROI. - - Returns: - Tensor of shape (#ROIs, #keypoints, 4) with the last dimension corresponding to - (x, y, logit, score) for each keypoint. 
- - When converting discrete pixel indices in an NxN image to a continuous keypoint coordinate, - we maintain consistency with :meth:`Keypoints.to_heatmap` by using the conversion from - Heckbert 1990: c = d + 0.5, where d is a discrete coordinate and c is a continuous coordinate. - """ - # The decorator use of torch.no_grad() was not supported by torchscript. - # https://github.com/pytorch/pytorch/issues/44768 - maps = maps.detach() - rois = rois.detach() - - offset_x = rois[:, 0] - offset_y = rois[:, 1] - - widths = (rois[:, 2] - rois[:, 0]).clamp(min=1) - heights = (rois[:, 3] - rois[:, 1]).clamp(min=1) - widths_ceil = widths.ceil() - heights_ceil = heights.ceil() - - num_rois, num_keypoints = maps.shape[:2] - xy_preds = maps.new_zeros(rois.shape[0], num_keypoints, 4) - - width_corrections = widths / widths_ceil - height_corrections = heights / heights_ceil - - keypoints_idx = torch.arange(num_keypoints, device=maps.device) - - for i in range(num_rois): - outsize = (int(heights_ceil[i]), int(widths_ceil[i])) - roi_map = F.interpolate( - maps[[i]], size=outsize, mode="bicubic", align_corners=False - ).squeeze( - 0 - ) # #keypoints x H x W - - # softmax over the spatial region - max_score, _ = roi_map.view(num_keypoints, -1).max(1) - max_score = max_score.view(num_keypoints, 1, 1) - tmp_full_resolution = (roi_map - max_score).exp_() - tmp_pool_resolution = (maps[i] - max_score).exp_() - # Produce scores over the region H x W, but normalize with POOL_H x POOL_W, - # so that the scores of objects of different absolute sizes will be more comparable - roi_map_scores = tmp_full_resolution / tmp_pool_resolution.sum((1, 2), keepdim=True) - - w = roi_map.shape[2] - pos = roi_map.view(num_keypoints, -1).argmax(1) - - x_int = pos % w - y_int = (pos - x_int) // w - - assert ( - roi_map_scores[keypoints_idx, y_int, x_int] - == roi_map_scores.view(num_keypoints, -1).max(1)[0] - ).all() - - x = (x_int.float() + 0.5) * width_corrections[i] - y = (y_int.float() + 0.5) * height_corrections[i] - - xy_preds[i, :, 0] = x + offset_x[i] - xy_preds[i, :, 1] = y + offset_y[i] - xy_preds[i, :, 2] = roi_map[keypoints_idx, y_int, x_int] - xy_preds[i, :, 3] = roi_map_scores[keypoints_idx, y_int, x_int] - - return xy_preds diff --git a/spaces/Ukrania/RVC-Models/lib/infer_pack/transforms.py b/spaces/Ukrania/RVC-Models/lib/infer_pack/transforms.py deleted file mode 100644 index a11f799e023864ff7082c1f49c0cc18351a13b47..0000000000000000000000000000000000000000 --- a/spaces/Ukrania/RVC-Models/lib/infer_pack/transforms.py +++ /dev/null @@ -1,209 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = {"tails": tails, "tail_bound": tail_bound} - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - 
min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum(inputs[..., None] >= bin_locations, dim=-1) - 1 - - -def unconstrained_rational_quadratic_spline( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails="linear", - tail_bound=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == "linear": - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError("{} tails are not implemented.".format(tails)) - - ( - outputs[inside_interval_mask], - logabsdet[inside_interval_mask], - ) = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, - right=tail_bound, - bottom=-tail_bound, - top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - ) - - return outputs, logabsdet - - -def rational_quadratic_spline( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0.0, - right=1.0, - bottom=0.0, - top=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError("Input to a transform is not within its domain") - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError("Minimal bin width too large for the number of bins") - if min_bin_height * num_bins > 1.0: - raise ValueError("Minimal bin height too large for the number of bins") - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode="constant", value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode="constant", value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - 
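# the remaining per-bin parameters are gathered the same way for each input:
# bin width and height, slope delta, and the derivatives at the bin edges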
input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (inputs - input_cumheights) * ( - input_derivatives + input_derivatives_plus_one - 2 * input_delta - ) + input_heights * (input_delta - input_derivatives) - b = input_heights * input_derivatives - (inputs - input_cumheights) * ( - input_derivatives + input_derivatives_plus_one - 2 * input_delta - ) - c = -input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ( - (input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta - ) - derivative_numerator = input_delta.pow(2) * ( - input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2) - ) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * ( - input_delta * theta.pow(2) + input_derivatives * theta_one_minus_theta - ) - denominator = input_delta + ( - (input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta - ) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * ( - input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2) - ) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/VoiceHero69/changer/setup_tools/magicinstaller/requirements/audiolm_package.py b/spaces/VoiceHero69/changer/setup_tools/magicinstaller/requirements/audiolm_package.py deleted file mode 100644 index 46500efedb4fa81311452b346f78e0fc16918e3c..0000000000000000000000000000000000000000 --- a/spaces/VoiceHero69/changer/setup_tools/magicinstaller/requirements/audiolm_package.py +++ /dev/null @@ -1,20 +0,0 @@ -from setup_tools.magicinstaller.requirement import Requirement, SimpleRequirement - - -class AudioLM(Requirement): - def is_right_version(self): - return self.get_package_version('audiolm-pytorch') == '1.1.4' - - def is_installed(self): - return self.install_check('audiolm-pytorch') - - def install(self) -> tuple[int, str, str]: - return self.install_pip('audiolm-pytorch==1.1.4', 'audiolm') - - -class JobLib(SimpleRequirement): - package_name = 'joblib' - - -class FairSeq(SimpleRequirement): - package_name = 'fairseq' diff --git a/spaces/WinterGYC/Baichuan-13B-Chat-Int8-Docker/Dockerfile b/spaces/WinterGYC/Baichuan-13B-Chat-Int8-Docker/Dockerfile deleted file mode 100644 index ff2442a71e35022846dfa023c513814234caac72..0000000000000000000000000000000000000000 --- a/spaces/WinterGYC/Baichuan-13B-Chat-Int8-Docker/Dockerfile +++ /dev/null @@ -1,34 +0,0 @@ -# read the doc: https://huggingface.co/docs/hub/spaces-sdks-docker -# you will also find guides on how best to write your Dockerfile - -FROM 
nvidia/cuda:12.2.0-devel-ubuntu20.04 - -#set up environment -RUN apt-get update && apt-get install --no-install-recommends --no-install-suggests -y curl -RUN apt-get install unzip -RUN apt-get -y install python3 -RUN apt-get -y install python3-pip - -WORKDIR /code - -COPY ./requirements.txt /code/requirements.txt - -RUN pip3 install --no-cache-dir --upgrade -r /code/requirements.txt - -# Set up a new user named "user" with user ID 1000 -RUN useradd -m -u 1000 user - -# Switch to the "user" user -USER user - -# Set home to the user's home directory -ENV HOME=/home/user \ - PATH=/home/user/.local/bin:$PATH - -# Set the working directory to the user's home directory -WORKDIR $HOME/app - -# Copy the current directory contents into the container at $HOME/app setting the owner to the user -COPY --chown=user . $HOME/app - -CMD ["streamlit", "run", "app.py", "--server.port", "7860", "--server.address", "0.0.0.0"] diff --git a/spaces/Xule/ChuanhuChatGPT/chatgpt - macOS.command b/spaces/Xule/ChuanhuChatGPT/chatgpt - macOS.command deleted file mode 100644 index fa015edca9e6916f24394813ce8ba77d2072e296..0000000000000000000000000000000000000000 --- a/spaces/Xule/ChuanhuChatGPT/chatgpt - macOS.command +++ /dev/null @@ -1,7 +0,0 @@ -#!/bin/bash -echo Opening ChuanhuChatGPT... -cd "$(dirname "${BASH_SOURCE[0]}")" -nohup python3 ChuanhuChatbot.py >/dev/null 2>&1 & -sleep 5 -open http://127.0.0.1:7860 -echo Finished opening ChuanhuChatGPT (http://127.0.0.1:7860/). If you kill ChuanhuChatbot, Use "pkill -f 'ChuanhuChatbot'" command in terminal. \ No newline at end of file diff --git a/spaces/Yan233th/so-vits-svc-models/modules/__init__.py b/spaces/Yan233th/so-vits-svc-models/modules/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/YeOldHermit/Linaqruf-anything-v3.0/README.md b/spaces/YeOldHermit/Linaqruf-anything-v3.0/README.md deleted file mode 100644 index 6609c87f8080511e1245d7f8da8ff0039b2a8fb1..0000000000000000000000000000000000000000 --- a/spaces/YeOldHermit/Linaqruf-anything-v3.0/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Linaqruf Anything V3.0 -emoji: 💩 -colorFrom: pink -colorTo: yellow -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Yuliang/ICON/lib/common/__init__.py b/spaces/Yuliang/ICON/lib/common/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Zannriell/TextChatBot/Dockerfile b/spaces/Zannriell/TextChatBot/Dockerfile deleted file mode 100644 index 3a4dc66fdb50519fca2a6eaf64cbe0ea05b09a3f..0000000000000000000000000000000000000000 --- a/spaces/Zannriell/TextChatBot/Dockerfile +++ /dev/null @@ -1,13 +0,0 @@ -FROM python:3.9 - -WORKDIR /code - -COPY ./requirements.txt /code/requirements.txt - -RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt - -COPY . . 
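# Hugging Face Spaces route traffic to port 7860 by default, so the app listens there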
- -EXPOSE 7860 - -CMD ["shiny", "run", "app.py", "--host", "0.0.0.0", "--port", "7860"] \ No newline at end of file diff --git a/spaces/abby711/FaceRestoration/gfpgan/archs/gfpganv1_arch.py b/spaces/abby711/FaceRestoration/gfpgan/archs/gfpganv1_arch.py deleted file mode 100644 index e092b4f7633dece505e5cd3bac4a482df3746654..0000000000000000000000000000000000000000 --- a/spaces/abby711/FaceRestoration/gfpgan/archs/gfpganv1_arch.py +++ /dev/null @@ -1,439 +0,0 @@ -import math -import random -import torch -from basicsr.archs.stylegan2_arch import (ConvLayer, EqualConv2d, EqualLinear, ResBlock, ScaledLeakyReLU, - StyleGAN2Generator) -from basicsr.ops.fused_act import FusedLeakyReLU -from basicsr.utils.registry import ARCH_REGISTRY -from torch import nn -from torch.nn import functional as F - - -class StyleGAN2GeneratorSFT(StyleGAN2Generator): - """StyleGAN2 Generator with SFT modulation (Spatial Feature Transform). - - Args: - out_size (int): The spatial size of outputs. - num_style_feat (int): Channel number of style features. Default: 512. - num_mlp (int): Layer number of MLP style layers. Default: 8. - channel_multiplier (int): Channel multiplier for large networks of StyleGAN2. Default: 2. - resample_kernel (list[int]): A list indicating the 1D resample kernel magnitude. A cross production will be - applied to extent 1D resample kernel to 2D resample kernel. Default: (1, 3, 3, 1). - lr_mlp (float): Learning rate multiplier for mlp layers. Default: 0.01. - narrow (float): The narrow ratio for channels. Default: 1. - sft_half (bool): Whether to apply SFT on half of the input channels. Default: False. - """ - - def __init__(self, - out_size, - num_style_feat=512, - num_mlp=8, - channel_multiplier=2, - resample_kernel=(1, 3, 3, 1), - lr_mlp=0.01, - narrow=1, - sft_half=False): - super(StyleGAN2GeneratorSFT, self).__init__( - out_size, - num_style_feat=num_style_feat, - num_mlp=num_mlp, - channel_multiplier=channel_multiplier, - resample_kernel=resample_kernel, - lr_mlp=lr_mlp, - narrow=narrow) - self.sft_half = sft_half - - def forward(self, - styles, - conditions, - input_is_latent=False, - noise=None, - randomize_noise=True, - truncation=1, - truncation_latent=None, - inject_index=None, - return_latents=False): - """Forward function for StyleGAN2GeneratorSFT. - - Args: - styles (list[Tensor]): Sample codes of styles. - conditions (list[Tensor]): SFT conditions to generators. - input_is_latent (bool): Whether input is latent style. Default: False. - noise (Tensor | None): Input noise or None. Default: None. - randomize_noise (bool): Randomize noise, used when 'noise' is False. Default: True. - truncation (float): The truncation ratio. Default: 1. - truncation_latent (Tensor | None): The truncation latent tensor. Default: None. - inject_index (int | None): The injection index for mixing noise. Default: None. - return_latents (bool): Whether to return style latents. Default: False. 
- """ - # style codes -> latents with Style MLP layer - if not input_is_latent: - styles = [self.style_mlp(s) for s in styles] - # noises - if noise is None: - if randomize_noise: - noise = [None] * self.num_layers # for each style conv layer - else: # use the stored noise - noise = [getattr(self.noises, f'noise{i}') for i in range(self.num_layers)] - # style truncation - if truncation < 1: - style_truncation = [] - for style in styles: - style_truncation.append(truncation_latent + truncation * (style - truncation_latent)) - styles = style_truncation - # get style latents with injection - if len(styles) == 1: - inject_index = self.num_latent - - if styles[0].ndim < 3: - # repeat latent code for all the layers - latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - else: # used for encoder with different latent code for each layer - latent = styles[0] - elif len(styles) == 2: # mixing noises - if inject_index is None: - inject_index = random.randint(1, self.num_latent - 1) - latent1 = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - latent2 = styles[1].unsqueeze(1).repeat(1, self.num_latent - inject_index, 1) - latent = torch.cat([latent1, latent2], 1) - - # main generation - out = self.constant_input(latent.shape[0]) - out = self.style_conv1(out, latent[:, 0], noise=noise[0]) - skip = self.to_rgb1(out, latent[:, 1]) - - i = 1 - for conv1, conv2, noise1, noise2, to_rgb in zip(self.style_convs[::2], self.style_convs[1::2], noise[1::2], - noise[2::2], self.to_rgbs): - out = conv1(out, latent[:, i], noise=noise1) - - # the conditions may have fewer levels - if i < len(conditions): - # SFT part to combine the conditions - if self.sft_half: # only apply SFT to half of the channels - out_same, out_sft = torch.split(out, int(out.size(1) // 2), dim=1) - out_sft = out_sft * conditions[i - 1] + conditions[i] - out = torch.cat([out_same, out_sft], dim=1) - else: # apply SFT to all the channels - out = out * conditions[i - 1] + conditions[i] - - out = conv2(out, latent[:, i + 1], noise=noise2) - skip = to_rgb(out, latent[:, i + 2], skip) # feature back to the rgb space - i += 2 - - image = skip - - if return_latents: - return image, latent - else: - return image, None - - -class ConvUpLayer(nn.Module): - """Convolutional upsampling layer. It uses bilinear upsampler + Conv. - - Args: - in_channels (int): Channel number of the input. - out_channels (int): Channel number of the output. - kernel_size (int): Size of the convolving kernel. - stride (int): Stride of the convolution. Default: 1 - padding (int): Zero-padding added to both sides of the input. Default: 0. - bias (bool): If ``True``, adds a learnable bias to the output. Default: ``True``. - bias_init_val (float): Bias initialized value. Default: 0. - activate (bool): Whether use activateion. Default: True. - """ - - def __init__(self, - in_channels, - out_channels, - kernel_size, - stride=1, - padding=0, - bias=True, - bias_init_val=0, - activate=True): - super(ConvUpLayer, self).__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.stride = stride - self.padding = padding - # self.scale is used to scale the convolution weights, which is related to the common initializations. 
- self.scale = 1 / math.sqrt(in_channels * kernel_size**2) - - self.weight = nn.Parameter(torch.randn(out_channels, in_channels, kernel_size, kernel_size)) - - if bias and not activate: - self.bias = nn.Parameter(torch.zeros(out_channels).fill_(bias_init_val)) - else: - self.register_parameter('bias', None) - - # activation - if activate: - if bias: - self.activation = FusedLeakyReLU(out_channels) - else: - self.activation = ScaledLeakyReLU(0.2) - else: - self.activation = None - - def forward(self, x): - # bilinear upsample - out = F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=False) - # conv - out = F.conv2d( - out, - self.weight * self.scale, - bias=self.bias, - stride=self.stride, - padding=self.padding, - ) - # activation - if self.activation is not None: - out = self.activation(out) - return out - - -class ResUpBlock(nn.Module): - """Residual block with upsampling. - - Args: - in_channels (int): Channel number of the input. - out_channels (int): Channel number of the output. - """ - - def __init__(self, in_channels, out_channels): - super(ResUpBlock, self).__init__() - - self.conv1 = ConvLayer(in_channels, in_channels, 3, bias=True, activate=True) - self.conv2 = ConvUpLayer(in_channels, out_channels, 3, stride=1, padding=1, bias=True, activate=True) - self.skip = ConvUpLayer(in_channels, out_channels, 1, bias=False, activate=False) - - def forward(self, x): - out = self.conv1(x) - out = self.conv2(out) - skip = self.skip(x) - out = (out + skip) / math.sqrt(2) - return out - - -@ARCH_REGISTRY.register() -class GFPGANv1(nn.Module): - """The GFPGAN architecture: Unet + StyleGAN2 decoder with SFT. - - Ref: GFP-GAN: Towards Real-World Blind Face Restoration with Generative Facial Prior. - - Args: - out_size (int): The spatial size of outputs. - num_style_feat (int): Channel number of style features. Default: 512. - channel_multiplier (int): Channel multiplier for large networks of StyleGAN2. Default: 2. - resample_kernel (list[int]): A list indicating the 1D resample kernel magnitude. A cross production will be - applied to extent 1D resample kernel to 2D resample kernel. Default: (1, 3, 3, 1). - decoder_load_path (str): The path to the pre-trained decoder model (usually, the StyleGAN2). Default: None. - fix_decoder (bool): Whether to fix the decoder. Default: True. - - num_mlp (int): Layer number of MLP style layers. Default: 8. - lr_mlp (float): Learning rate multiplier for mlp layers. Default: 0.01. - input_is_latent (bool): Whether input is latent style. Default: False. - different_w (bool): Whether to use different latent w for different layers. Default: False. - narrow (float): The narrow ratio for channels. Default: 1. - sft_half (bool): Whether to apply SFT on half of the input channels. Default: False. 
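    Note:
        The SFT conditions produced by the U-Net encoder modulate the StyleGAN2 decoder features
        as ``feat * scale + shift``; when ``sft_half`` is True the modulation is applied to only
        half of the channels and the remaining half is passed through unchanged.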
- """ - - def __init__( - self, - out_size, - num_style_feat=512, - channel_multiplier=1, - resample_kernel=(1, 3, 3, 1), - decoder_load_path=None, - fix_decoder=True, - # for stylegan decoder - num_mlp=8, - lr_mlp=0.01, - input_is_latent=False, - different_w=False, - narrow=1, - sft_half=False): - - super(GFPGANv1, self).__init__() - self.input_is_latent = input_is_latent - self.different_w = different_w - self.num_style_feat = num_style_feat - - unet_narrow = narrow * 0.5 # by default, use a half of input channels - channels = { - '4': int(512 * unet_narrow), - '8': int(512 * unet_narrow), - '16': int(512 * unet_narrow), - '32': int(512 * unet_narrow), - '64': int(256 * channel_multiplier * unet_narrow), - '128': int(128 * channel_multiplier * unet_narrow), - '256': int(64 * channel_multiplier * unet_narrow), - '512': int(32 * channel_multiplier * unet_narrow), - '1024': int(16 * channel_multiplier * unet_narrow) - } - - self.log_size = int(math.log(out_size, 2)) - first_out_size = 2**(int(math.log(out_size, 2))) - - self.conv_body_first = ConvLayer(3, channels[f'{first_out_size}'], 1, bias=True, activate=True) - - # downsample - in_channels = channels[f'{first_out_size}'] - self.conv_body_down = nn.ModuleList() - for i in range(self.log_size, 2, -1): - out_channels = channels[f'{2**(i - 1)}'] - self.conv_body_down.append(ResBlock(in_channels, out_channels, resample_kernel)) - in_channels = out_channels - - self.final_conv = ConvLayer(in_channels, channels['4'], 3, bias=True, activate=True) - - # upsample - in_channels = channels['4'] - self.conv_body_up = nn.ModuleList() - for i in range(3, self.log_size + 1): - out_channels = channels[f'{2**i}'] - self.conv_body_up.append(ResUpBlock(in_channels, out_channels)) - in_channels = out_channels - - # to RGB - self.toRGB = nn.ModuleList() - for i in range(3, self.log_size + 1): - self.toRGB.append(EqualConv2d(channels[f'{2**i}'], 3, 1, stride=1, padding=0, bias=True, bias_init_val=0)) - - if different_w: - linear_out_channel = (int(math.log(out_size, 2)) * 2 - 2) * num_style_feat - else: - linear_out_channel = num_style_feat - - self.final_linear = EqualLinear( - channels['4'] * 4 * 4, linear_out_channel, bias=True, bias_init_val=0, lr_mul=1, activation=None) - - # the decoder: stylegan2 generator with SFT modulations - self.stylegan_decoder = StyleGAN2GeneratorSFT( - out_size=out_size, - num_style_feat=num_style_feat, - num_mlp=num_mlp, - channel_multiplier=channel_multiplier, - resample_kernel=resample_kernel, - lr_mlp=lr_mlp, - narrow=narrow, - sft_half=sft_half) - - # load pre-trained stylegan2 model if necessary - if decoder_load_path: - self.stylegan_decoder.load_state_dict( - torch.load(decoder_load_path, map_location=lambda storage, loc: storage)['params_ema']) - # fix decoder without updating params - if fix_decoder: - for _, param in self.stylegan_decoder.named_parameters(): - param.requires_grad = False - - # for SFT modulations (scale and shift) - self.condition_scale = nn.ModuleList() - self.condition_shift = nn.ModuleList() - for i in range(3, self.log_size + 1): - out_channels = channels[f'{2**i}'] - if sft_half: - sft_out_channels = out_channels - else: - sft_out_channels = out_channels * 2 - self.condition_scale.append( - nn.Sequential( - EqualConv2d(out_channels, out_channels, 3, stride=1, padding=1, bias=True, bias_init_val=0), - ScaledLeakyReLU(0.2), - EqualConv2d(out_channels, sft_out_channels, 3, stride=1, padding=1, bias=True, bias_init_val=1))) - self.condition_shift.append( - nn.Sequential( - EqualConv2d(out_channels, 
out_channels, 3, stride=1, padding=1, bias=True, bias_init_val=0), - ScaledLeakyReLU(0.2), - EqualConv2d(out_channels, sft_out_channels, 3, stride=1, padding=1, bias=True, bias_init_val=0))) - - def forward(self, x, return_latents=False, return_rgb=True, randomize_noise=True): - """Forward function for GFPGANv1. - - Args: - x (Tensor): Input images. - return_latents (bool): Whether to return style latents. Default: False. - return_rgb (bool): Whether return intermediate rgb images. Default: True. - randomize_noise (bool): Randomize noise, used when 'noise' is False. Default: True. - """ - conditions = [] - unet_skips = [] - out_rgbs = [] - - # encoder - feat = self.conv_body_first(x) - for i in range(self.log_size - 2): - feat = self.conv_body_down[i](feat) - unet_skips.insert(0, feat) - - feat = self.final_conv(feat) - - # style code - style_code = self.final_linear(feat.view(feat.size(0), -1)) - if self.different_w: - style_code = style_code.view(style_code.size(0), -1, self.num_style_feat) - - # decode - for i in range(self.log_size - 2): - # add unet skip - feat = feat + unet_skips[i] - # ResUpLayer - feat = self.conv_body_up[i](feat) - # generate scale and shift for SFT layers - scale = self.condition_scale[i](feat) - conditions.append(scale.clone()) - shift = self.condition_shift[i](feat) - conditions.append(shift.clone()) - # generate rgb images - if return_rgb: - out_rgbs.append(self.toRGB[i](feat)) - - # decoder - image, _ = self.stylegan_decoder([style_code], - conditions, - return_latents=return_latents, - input_is_latent=self.input_is_latent, - randomize_noise=randomize_noise) - - return image, out_rgbs - - -@ARCH_REGISTRY.register() -class FacialComponentDiscriminator(nn.Module): - """Facial component (eyes, mouth, noise) discriminator used in GFPGAN. - """ - - def __init__(self): - super(FacialComponentDiscriminator, self).__init__() - # It now uses a VGG-style architectrue with fixed model size - self.conv1 = ConvLayer(3, 64, 3, downsample=False, resample_kernel=(1, 3, 3, 1), bias=True, activate=True) - self.conv2 = ConvLayer(64, 128, 3, downsample=True, resample_kernel=(1, 3, 3, 1), bias=True, activate=True) - self.conv3 = ConvLayer(128, 128, 3, downsample=False, resample_kernel=(1, 3, 3, 1), bias=True, activate=True) - self.conv4 = ConvLayer(128, 256, 3, downsample=True, resample_kernel=(1, 3, 3, 1), bias=True, activate=True) - self.conv5 = ConvLayer(256, 256, 3, downsample=False, resample_kernel=(1, 3, 3, 1), bias=True, activate=True) - self.final_conv = ConvLayer(256, 1, 3, bias=True, activate=False) - - def forward(self, x, return_feats=False): - """Forward function for FacialComponentDiscriminator. - - Args: - x (Tensor): Input images. - return_feats (bool): Whether to return intermediate features. Default: False. 
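        Returns:
            Tuple[Tensor, list[Tensor] | None]: The discriminator score map and, when
                ``return_feats`` is True, a list of intermediate feature maps (otherwise None).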
- """ - feat = self.conv1(x) - feat = self.conv3(self.conv2(feat)) - rlt_feats = [] - if return_feats: - rlt_feats.append(feat.clone()) - feat = self.conv5(self.conv4(feat)) - if return_feats: - rlt_feats.append(feat.clone()) - out = self.final_conv(feat) - - if return_feats: - return out, rlt_feats - else: - return out, None diff --git a/spaces/abdulmatinomotoso/Plant_leaf_disease_classificaton/app.py b/spaces/abdulmatinomotoso/Plant_leaf_disease_classificaton/app.py deleted file mode 100644 index 2617697928a00063ea9a88a1477cce6fa40defff..0000000000000000000000000000000000000000 --- a/spaces/abdulmatinomotoso/Plant_leaf_disease_classificaton/app.py +++ /dev/null @@ -1,115 +0,0 @@ -import streamlit as st -from PIL import Image -import numpy as np -import tensorflow as tf -from tensorflow import keras -import matplotlib.pyplot as plt -import tensorflow_hub as hub - -hide_streamlit_style = """ - - """ - -st.markdown(hide_streamlit_style, unsafe_allow_html = True) - -st.title('Plant Disease Prediction') -st.write("This model is capable of predicting 38 different classes of plant diseases") - -def main() : - file_uploaded = st.file_uploader('Choose an image...', type = 'jpg') - if file_uploaded is not None : - image = Image.open(file_uploaded) - st.write("Uploaded Image.") - figure = plt.figure() - plt.imshow(image) - plt.axis('off') - st.pyplot(figure) - result, confidence = predict_class(image) - st.write('Prediction : {}'.format(result)) - st.write('Confidence : {}%'.format(confidence)) - -def predict_class(image) : - with st.spinner('Loading Model...'): - classifier_model = keras.models.load_model(r'final1_model.h5', compile = False) - - shape = ((255,255,3)) - model = keras.Sequential([hub.KerasLayer(classifier_model, input_shape = shape)]) # ye bhi kaam kar raha he - test_image = image.resize((255, 255)) - test_image = keras.preprocessing.image.img_to_array(test_image) - test_image /= 255.0 - test_image = np.expand_dims(test_image, axis = 0) - class_name = ["Apple___Apple_scab","Apple___Black_rot", - "Apple___Cedar_apple_rust","Apple___healthy", - "Blueberry___healthy", - "Cherry_(including_sour)___Powdery_mildew", - "Cherry___healthy", - "Corn_(maize)___Cercospora_leaf_spot Gray_leaf_spot", - "Corn_(maize)___Common_rust_", - "Corn_(maize)___Northern_Leaf_Blight", - "Corn_(maize)___healthy","Grape___Black_rot", - "Grape___Esca_(Black_Measles)", - "Grape___Leaf_blight_(Isariopsis_Leaf_Spot)", - "Grape___healthy", - "Orange___Haunglongbing_(Citrus_greening)", - "Peach___Bacterial_spot", - "Peach___healthy", - "Pepper__bell___Bacterial_spot", - "Pepper,_bell___healthy", - "Potato___Early_blight", - "Potato___Late_blight", - "Potato___healthy", - "Raspberry___healthy", - "Soybean___healthy", - "Squash___Powdery_mildew", - "Strawberry___Leaf_scorch", - "Strawberry___healthy", - "Tomato___Bacterial_spot", - "Tomato___Early_blight", - "Tomato___Late_blight", - "Tomato___Leaf_Mold", - "Tomato___Septoria_leaf_spot", - "Tomato___Spider_mites Two-spotted_spider_mite", - "Tomato___Target_Spot", - "Tomato___Tomato_Yellow_Leaf_Curl_Virus", - "Tomato___Tomato_mosaic_virus", - "Tomato___healthy"] - prediction = model.predict_generator(test_image) - confidence = round(100 * (np.max(prediction[0])), 2) - final_pred = class_name[np.argmax(prediction)] - return final_pred, confidence - -footer = """ - -""" -st.markdown(footer, unsafe_allow_html = True) -if __name__ == "__main__": - main() \ No newline at end of file diff --git 
a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/backbones/resnext.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/backbones/resnext.py deleted file mode 100644 index 6dbcbd516fd308b1d703eecb83ab275f6b159516..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/backbones/resnext.py +++ /dev/null @@ -1,153 +0,0 @@ -import math - -from mmcv.cnn import build_conv_layer, build_norm_layer - -from ..builder import BACKBONES -from ..utils import ResLayer -from .resnet import Bottleneck as _Bottleneck -from .resnet import ResNet - - -class Bottleneck(_Bottleneck): - expansion = 4 - - def __init__(self, - inplanes, - planes, - groups=1, - base_width=4, - base_channels=64, - **kwargs): - """Bottleneck block for ResNeXt. - - If style is "pytorch", the stride-two layer is the 3x3 conv layer, if - it is "caffe", the stride-two layer is the first 1x1 conv layer. - """ - super(Bottleneck, self).__init__(inplanes, planes, **kwargs) - - if groups == 1: - width = self.planes - else: - width = math.floor(self.planes * - (base_width / base_channels)) * groups - - self.norm1_name, norm1 = build_norm_layer( - self.norm_cfg, width, postfix=1) - self.norm2_name, norm2 = build_norm_layer( - self.norm_cfg, width, postfix=2) - self.norm3_name, norm3 = build_norm_layer( - self.norm_cfg, self.planes * self.expansion, postfix=3) - - self.conv1 = build_conv_layer( - self.conv_cfg, - self.inplanes, - width, - kernel_size=1, - stride=self.conv1_stride, - bias=False) - self.add_module(self.norm1_name, norm1) - fallback_on_stride = False - self.with_modulated_dcn = False - if self.with_dcn: - fallback_on_stride = self.dcn.pop('fallback_on_stride', False) - if not self.with_dcn or fallback_on_stride: - self.conv2 = build_conv_layer( - self.conv_cfg, - width, - width, - kernel_size=3, - stride=self.conv2_stride, - padding=self.dilation, - dilation=self.dilation, - groups=groups, - bias=False) - else: - assert self.conv_cfg is None, 'conv_cfg must be None for DCN' - self.conv2 = build_conv_layer( - self.dcn, - width, - width, - kernel_size=3, - stride=self.conv2_stride, - padding=self.dilation, - dilation=self.dilation, - groups=groups, - bias=False) - - self.add_module(self.norm2_name, norm2) - self.conv3 = build_conv_layer( - self.conv_cfg, - width, - self.planes * self.expansion, - kernel_size=1, - bias=False) - self.add_module(self.norm3_name, norm3) - - if self.with_plugins: - self._del_block_plugins(self.after_conv1_plugin_names + - self.after_conv2_plugin_names + - self.after_conv3_plugin_names) - self.after_conv1_plugin_names = self.make_block_plugins( - width, self.after_conv1_plugins) - self.after_conv2_plugin_names = self.make_block_plugins( - width, self.after_conv2_plugins) - self.after_conv3_plugin_names = self.make_block_plugins( - self.planes * self.expansion, self.after_conv3_plugins) - - def _del_block_plugins(self, plugin_names): - """delete plugins for block if exist. - - Args: - plugin_names (list[str]): List of plugins name to delete. - """ - assert isinstance(plugin_names, list) - for plugin_name in plugin_names: - del self._modules[plugin_name] - - -@BACKBONES.register_module() -class ResNeXt(ResNet): - """ResNeXt backbone. - - Args: - depth (int): Depth of resnet, from {18, 34, 50, 101, 152}. - in_channels (int): Number of input image channels. Default: 3. - num_stages (int): Resnet stages. Default: 4. - groups (int): Group of resnext. - base_width (int): Base width of resnext. 
- strides (Sequence[int]): Strides of the first block of each stage. - dilations (Sequence[int]): Dilation of each stage. - out_indices (Sequence[int]): Output from which stages. - style (str): `pytorch` or `caffe`. If set to "pytorch", the stride-two - layer is the 3x3 conv layer, otherwise the stride-two layer is - the first 1x1 conv layer. - frozen_stages (int): Stages to be frozen (all param fixed). -1 means - not freezing any parameters. - norm_cfg (dict): dictionary to construct and config norm layer. - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. - zero_init_residual (bool): whether to use zero init for last norm layer - in resblocks to let them behave as identity. - """ - - arch_settings = { - 50: (Bottleneck, (3, 4, 6, 3)), - 101: (Bottleneck, (3, 4, 23, 3)), - 152: (Bottleneck, (3, 8, 36, 3)) - } - - def __init__(self, groups=1, base_width=4, **kwargs): - self.groups = groups - self.base_width = base_width - super(ResNeXt, self).__init__(**kwargs) - - def make_res_layer(self, **kwargs): - """Pack all blocks in a stage into a ``ResLayer``""" - return ResLayer( - groups=self.groups, - base_width=self.base_width, - base_channels=self.base_channels, - **kwargs) diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/datasets/pipelines/compose.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/datasets/pipelines/compose.py deleted file mode 100644 index cbfcbb925c6d4ebf849328b9f94ef6fc24359bf5..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/datasets/pipelines/compose.py +++ /dev/null @@ -1,51 +0,0 @@ -import collections - -from annotator.uniformer.mmcv.utils import build_from_cfg - -from ..builder import PIPELINES - - -@PIPELINES.register_module() -class Compose(object): - """Compose multiple transforms sequentially. - - Args: - transforms (Sequence[dict | callable]): Sequence of transform object or - config dict to be composed. - """ - - def __init__(self, transforms): - assert isinstance(transforms, collections.abc.Sequence) - self.transforms = [] - for transform in transforms: - if isinstance(transform, dict): - transform = build_from_cfg(transform, PIPELINES) - self.transforms.append(transform) - elif callable(transform): - self.transforms.append(transform) - else: - raise TypeError('transform must be callable or a dict') - - def __call__(self, data): - """Call function to apply transforms sequentially. - - Args: - data (dict): A result dict contains the data to transform. - - Returns: - dict: Transformed data. 
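        Note:
            If any transform returns None, the pipeline short-circuits and returns None.

        Example (illustrative only; the transform configs below are typical entries assumed to be
        registered in ``PIPELINES``, not taken from this file):
            >>> pipeline = Compose([dict(type='LoadImageFromFile'),
            ...                     dict(type='Resize', img_scale=(512, 512), keep_ratio=True)])
            >>> results = pipeline(dict(img_info=dict(filename='demo.png'), img_prefix=None))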
- """ - - for t in self.transforms: - data = t(data) - if data is None: - return None - return data - - def __repr__(self): - format_string = self.__class__.__name__ + '(' - for t in self.transforms: - format_string += '\n' - format_string += f' {t}' - format_string += '\n)' - return format_string diff --git a/spaces/aditi2222/paragus_paraphrase_demo/README.md b/spaces/aditi2222/paragus_paraphrase_demo/README.md deleted file mode 100644 index 460b4dd1de74f3830add0239eaf6d2e43cb9753c..0000000000000000000000000000000000000000 --- a/spaces/aditi2222/paragus_paraphrase_demo/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Paragus_paraphrase_demo -emoji: 💩 -colorFrom: green -colorTo: purple -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/ai4bharat/IndicNER/app.py b/spaces/ai4bharat/IndicNER/app.py deleted file mode 100644 index f7cb8626d73b5a6a622ab75cd67346209d06b708..0000000000000000000000000000000000000000 --- a/spaces/ai4bharat/IndicNER/app.py +++ /dev/null @@ -1,45 +0,0 @@ -import gradio as gr -import torch -from transformers import AutoTokenizer, AutoModelForTokenClassification - -tokenizer = AutoTokenizer.from_pretrained("ai4bharat/IndicNER") - -model = AutoModelForTokenClassification.from_pretrained("ai4bharat/IndicNER") - - -def get_ner(sentence): - tok_sentence = tokenizer(sentence, return_tensors='pt') - - with torch.no_grad(): - logits = model(**tok_sentence).logits.argmax(-1) - predicted_tokens_classes = [ - model.config.id2label[t.item()] for t in logits[0]] - - predicted_labels = [] - - previous_token_id = 0 - word_ids = tok_sentence.word_ids() - for word_index in range(len(word_ids)): - if word_ids[word_index] == None: - previous_token_id = word_ids[word_index] - elif word_ids[word_index] == previous_token_id: - previous_token_id = word_ids[word_index] - else: - predicted_labels.append(predicted_tokens_classes[word_index]) - previous_token_id = word_ids[word_index] - - ner_output = [] - for index in range(len(sentence.split(' '))): - ner_output.append( - (sentence.split(' ')[index], predicted_labels[index])) - return ner_output - - -iface = gr.Interface(get_ner, - gr.Textbox(placeholder="Enter sentence here..."), - ["highlight"], description='The 11 languages covered by IndicNER are: Assamese, Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Oriya, Punjabi, Tamil, Telugu.', - examples=['लगातार हमलावर हो रहे शिवपाल और राजभर को सपा की दो टूक, चिट्ठी जारी कर कहा- जहां जाना चाहें जा सकते हैं', 'ಶರಣ್ ರ ನೀವು ನೋಡಲೇಬೇಕಾದ ಟಾಪ್ 5 ಕಾಮಿಡಿ ಚಲನಚಿತ್ರಗಳು'], title='IndicNER', - article='IndicNER is a model trained to complete the task of identifying named entities from sentences in Indian languages. 
Our model is specifically fine-tuned to the 11 Indian languages mentioned above over millions of sentences. The model is then benchmarked over a human annotated testset and multiple other publicly available Indian NER datasets.' - ) - -iface.launch(enable_queue=True) diff --git a/spaces/akhaliq/Analog-Diffusion/README.md b/spaces/akhaliq/Analog-Diffusion/README.md deleted file mode 100644 index 63ce305bd21e7e98c0788140fec000c763bf50ef..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Analog-Diffusion/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Analog Diffusion -emoji: 💻 -colorFrom: gray -colorTo: purple -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/akhaliq/JoJoGAN/e4e/models/stylegan2/op/upfirdn2d.py b/spaces/akhaliq/JoJoGAN/e4e/models/stylegan2/op/upfirdn2d.py deleted file mode 100644 index 7bc5a1e331c2bbb1893ac748cfd0f144ff0651b4..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/JoJoGAN/e4e/models/stylegan2/op/upfirdn2d.py +++ /dev/null @@ -1,184 +0,0 @@ -import os - -import torch -from torch.autograd import Function -from torch.utils.cpp_extension import load - -module_path = os.path.dirname(__file__) -upfirdn2d_op = load( - 'upfirdn2d', - sources=[ - os.path.join(module_path, 'upfirdn2d.cpp'), - os.path.join(module_path, 'upfirdn2d_kernel.cu'), - ], -) - - -class UpFirDn2dBackward(Function): - @staticmethod - def forward( - ctx, grad_output, kernel, grad_kernel, up, down, pad, g_pad, in_size, out_size - ): - up_x, up_y = up - down_x, down_y = down - g_pad_x0, g_pad_x1, g_pad_y0, g_pad_y1 = g_pad - - grad_output = grad_output.reshape(-1, out_size[0], out_size[1], 1) - - grad_input = upfirdn2d_op.upfirdn2d( - grad_output, - grad_kernel, - down_x, - down_y, - up_x, - up_y, - g_pad_x0, - g_pad_x1, - g_pad_y0, - g_pad_y1, - ) - grad_input = grad_input.view(in_size[0], in_size[1], in_size[2], in_size[3]) - - ctx.save_for_backward(kernel) - - pad_x0, pad_x1, pad_y0, pad_y1 = pad - - ctx.up_x = up_x - ctx.up_y = up_y - ctx.down_x = down_x - ctx.down_y = down_y - ctx.pad_x0 = pad_x0 - ctx.pad_x1 = pad_x1 - ctx.pad_y0 = pad_y0 - ctx.pad_y1 = pad_y1 - ctx.in_size = in_size - ctx.out_size = out_size - - return grad_input - - @staticmethod - def backward(ctx, gradgrad_input): - kernel, = ctx.saved_tensors - - gradgrad_input = gradgrad_input.reshape(-1, ctx.in_size[2], ctx.in_size[3], 1) - - gradgrad_out = upfirdn2d_op.upfirdn2d( - gradgrad_input, - kernel, - ctx.up_x, - ctx.up_y, - ctx.down_x, - ctx.down_y, - ctx.pad_x0, - ctx.pad_x1, - ctx.pad_y0, - ctx.pad_y1, - ) - # gradgrad_out = gradgrad_out.view(ctx.in_size[0], ctx.out_size[0], ctx.out_size[1], ctx.in_size[3]) - gradgrad_out = gradgrad_out.view( - ctx.in_size[0], ctx.in_size[1], ctx.out_size[0], ctx.out_size[1] - ) - - return gradgrad_out, None, None, None, None, None, None, None, None - - -class UpFirDn2d(Function): - @staticmethod - def forward(ctx, input, kernel, up, down, pad): - up_x, up_y = up - down_x, down_y = down - pad_x0, pad_x1, pad_y0, pad_y1 = pad - - kernel_h, kernel_w = kernel.shape - batch, channel, in_h, in_w = input.shape - ctx.in_size = input.shape - - input = input.reshape(-1, in_h, in_w, 1) - - ctx.save_for_backward(kernel, torch.flip(kernel, [0, 1])) - - out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h) // down_y + 1 - out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w) // down_x + 1 - ctx.out_size = (out_h, out_w) - - ctx.up = (up_x, up_y) 
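        # ctx carries the resampling geometry to UpFirDn2dBackward above, which realizes the
        # gradient by re-running upfirdn2d with the up/down factors swapped, the flipped kernel,
        # and the g_pad padding computed below.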
- ctx.down = (down_x, down_y) - ctx.pad = (pad_x0, pad_x1, pad_y0, pad_y1) - - g_pad_x0 = kernel_w - pad_x0 - 1 - g_pad_y0 = kernel_h - pad_y0 - 1 - g_pad_x1 = in_w * up_x - out_w * down_x + pad_x0 - up_x + 1 - g_pad_y1 = in_h * up_y - out_h * down_y + pad_y0 - up_y + 1 - - ctx.g_pad = (g_pad_x0, g_pad_x1, g_pad_y0, g_pad_y1) - - out = upfirdn2d_op.upfirdn2d( - input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1 - ) - # out = out.view(major, out_h, out_w, minor) - out = out.view(-1, channel, out_h, out_w) - - return out - - @staticmethod - def backward(ctx, grad_output): - kernel, grad_kernel = ctx.saved_tensors - - grad_input = UpFirDn2dBackward.apply( - grad_output, - kernel, - grad_kernel, - ctx.up, - ctx.down, - ctx.pad, - ctx.g_pad, - ctx.in_size, - ctx.out_size, - ) - - return grad_input, None, None, None, None - - -def upfirdn2d(input, kernel, up=1, down=1, pad=(0, 0)): - out = UpFirDn2d.apply( - input, kernel, (up, up), (down, down), (pad[0], pad[1], pad[0], pad[1]) - ) - - return out - - -def upfirdn2d_native( - input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1 -): - _, in_h, in_w, minor = input.shape - kernel_h, kernel_w = kernel.shape - - out = input.view(-1, in_h, 1, in_w, 1, minor) - out = F.pad(out, [0, 0, 0, up_x - 1, 0, 0, 0, up_y - 1]) - out = out.view(-1, in_h * up_y, in_w * up_x, minor) - - out = F.pad( - out, [0, 0, max(pad_x0, 0), max(pad_x1, 0), max(pad_y0, 0), max(pad_y1, 0)] - ) - out = out[ - :, - max(-pad_y0, 0): out.shape[1] - max(-pad_y1, 0), - max(-pad_x0, 0): out.shape[2] - max(-pad_x1, 0), - :, - ] - - out = out.permute(0, 3, 1, 2) - out = out.reshape( - [-1, 1, in_h * up_y + pad_y0 + pad_y1, in_w * up_x + pad_x0 + pad_x1] - ) - w = torch.flip(kernel, [0, 1]).view(1, 1, kernel_h, kernel_w) - out = F.conv2d(out, w) - out = out.reshape( - -1, - minor, - in_h * up_y + pad_y0 + pad_y1 - kernel_h + 1, - in_w * up_x + pad_x0 + pad_x1 - kernel_w + 1, - ) - out = out.permute(0, 2, 3, 1) - - return out[:, ::down_y, ::down_x, :] diff --git a/spaces/akhaliq/SummerTime/model/third_party/HMNet/ThirdParty/ROUGE/ROUGE-1.5.5/ROUGE-1.5.5.pl b/spaces/akhaliq/SummerTime/model/third_party/HMNet/ThirdParty/ROUGE/ROUGE-1.5.5/ROUGE-1.5.5.pl deleted file mode 100644 index 974c667f8a308ce418f9206a8ff76c2f977bc367..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/SummerTime/model/third_party/HMNet/ThirdParty/ROUGE/ROUGE-1.5.5/ROUGE-1.5.5.pl +++ /dev/null @@ -1,3300 +0,0 @@ -#!/usr/bin/perl -w -# Add current dir to include -use File::Basename; -use lib dirname (__FILE__); - -# Version: ROUGE v1.5.5 -# Date: 05/26/2005,05/19/2005,04/26/2005,04/03/2005,10/28/2004,10/25/2004,10/21/2004 -# Author: Chin-Yew Lin -# Description: Given an evaluation description file, for example: test.xml, -# this script computes the averages of the average ROUGE scores for -# the evaluation pairs listed in the ROUGE evaluation configuration file. -# For more information, please see: -# http://www.isi.edu/~cyl/ROUGE -# For more information about Basic Elements, please see: -# http://www.isi.edu/~cyl/BE -# Revision Note: -# 1.5.5 -# (1) Correct stemming on multi-token BE heads and modifiers. -# Previously, only single token heads and modifiers were assumed. -# (2) Correct the resampling routine which ignores the last evaluation -# item in the evaluation list. Therefore, the average scores reported -# by ROUGE is only based on the first N-1 evaluation items. -# Thanks Barry Schiffman at Columbia University to report this bug. 
-# This bug only affects ROUGE-1.5.X. For pre-1.5 ROUGE, it only affects -# the computation of confidence interval (CI) estimation, i.e. CI is only -# estimated by the first N-1 evaluation items, but it *does not* affect -# average scores. -# (3) Change read_text and read_text_LCS functions to read exact words or -# bytes required by users. Previous versions carry out whitespace -# compression and other string clear up actions before enforce the length -# limit. -# 1.5.4.1 -# (1) Minor description change about "-t 0" option. -# 1.5.4 -# (1) Add easy evalution mode for single reference evaluations with -z -# option. -# 1.5.3 -# (1) Add option to compute ROUGE score based on SIMPLE BE format. Given -# a set of peer and model summary file in BE format with appropriate -# options, ROUGE will compute matching scores based on BE lexical -# matches. -# There are 6 options: -# 1. H : Head only match. This is similar to unigram match but -# only BE Head is used in matching. BEs generated by -# Minipar-based breaker do not include head-only BEs, -# therefore, the score will always be zero. Use HM or HMR -# optiions instead. -# 2. HM : Head and modifier match. This is similar to bigram or -# skip bigram but it's head-modifier bigram match based on -# parse result. Only BE triples with non-NIL modifier are -# included in the matching. -# 3. HMR : Head, modifier, and relation match. This is similar to -# trigram match but it's head-modifier-relation trigram -# match based on parse result. Only BE triples with non-NIL -# relation are included in the matching. -# 4. HM1 : This is combination of H and HM. It is similar to unigram + -# bigram or skip bigram with unigram match but it's -# head-modifier bigram match based on parse result. -# In this case, the modifier field in a BE can be "NIL" -# 5. HMR1 : This is combination of HM and HMR. It is similar to -# trigram match but it's head-modifier-relation trigram -# match based on parse result. In this case, the relation -# field of the BE can be "NIL". -# 6. HMR2 : This is combination of H, HM and HMR. It is similar to -# trigram match but it's head-modifier-relation trigram -# match based on parse result. In this case, the modifier and -# relation fields of the BE can both be "NIL". -# 1.5.2 -# (1) Add option to compute ROUGE score by token using the whole corpus -# as average unit instead of individual sentences. Previous versions of -# ROUGE uses sentence (or unit) boundary to break counting unit and takes -# the average score from the counting unit as the final score. -# Using the whole corpus as one single counting unit can potentially -# improve the reliablity of the final score that treats each token as -# equally important; while the previous approach considers each sentence as -# equally important that ignores the length effect of each individual -# sentences (i.e. long sentences contribute equal weight to the final -# score as short sentences.) -# +v1.2 provide a choice of these two counting modes that users can -# choose the one that fits their scenarios. -# 1.5.1 -# (1) Add precision oriented measure and f-measure to deal with different lengths -# in candidates and references. Importance between recall and precision can -# be controled by 'alpha' parameter: -# alpha -> 0: recall is more important -# alpha -> 1: precision is more important -# Following Chapter 7 in C.J. van Rijsbergen's "Information Retrieval". 
-# http://www.dcs.gla.ac.uk/Keith/Chapter.7/Ch.7.html -# F = 1/(alpha * (1/P) + (1 - alpha) * (1/R)) ;;; weighted harmonic mean -# 1.4.2 -# (1) Enforce length limit at the time when summary text is read. Previously (before -# and including v1.4.1), length limit was enforced at tokenization time. -# 1.4.1 -# (1) Fix potential over counting in ROUGE-L and ROUGE-W -# In previous version (i.e. 1.4 and order), LCS hit is computed -# by summing union hit over all model sentences. Each model sentence -# is compared with all peer sentences and mark the union LCS. The -# length of the union LCS is the hit of that model sentence. The -# final hit is then sum over all model union LCS hits. This potentially -# would over count a peer sentence which already been marked as contributed -# to some other model sentence. Therefore, double counting is resulted. -# This is seen in evalution where ROUGE-L score is higher than ROUGE-1 and -# this is not correct. -# ROUGEeval-1.4.1.pl fixes this by add a clip function to prevent -# double counting. -# 1.4 -# (1) Remove internal Jackknifing procedure: -# Now the ROUGE script will use all the references listed in the -# section in each section and no -# automatic Jackknifing is performed. Please see RELEASE-NOTE.txt -# for more details. -# 1.3 -# (1) Add skip bigram -# (2) Add an option to specify the number of sampling point (default is 1000) -# 1.2.3 -# (1) Correct the enviroment variable option: -e. Now users can specify evironment -# variable ROUGE_EVAL_HOME using the "-e" option; previously this option is -# not active. Thanks Zhouyan Li of Concordia University, Canada pointing this -# out. -# 1.2.2 -# (1) Correct confidence interval calculation for median, maximum, and minimum. -# Line 390. -# 1.2.1 -# (1) Add sentence per line format input format. See files in Verify-SPL for examples. -# (2) Streamline command line arguments. -# (3) Use bootstrap resampling to estimate confidence intervals instead of using t-test -# or z-test which assume a normal distribution. -# (4) Add LCS (longest common subsequence) evaluation method. -# (5) Add WLCS (weighted longest common subsequence) evaluation method. -# (6) Add length cutoff in bytes. -# (7) Add an option to specify the longest ngram to compute. The default is 4. -# 1.2 -# (1) Change zero condition check in subroutine &computeNGramScores when -# computing $gram1Score from -# if($totalGram2Count!=0) to -# if($totalGram1Count!=0) -# Thanks Ken Litkowski for this bug report. -# This original script will set gram1Score to zero if there is no -# bigram matches. This should rarely has significant affect the final score -# since (a) there are bigram matches most of time; (b) the computation -# of gram1Score is using Jackknifing procedure. However, this definitely -# did not compute the correct $gram1Score when there is no bigram matches. -# Therefore, users of version 1.1 should definitely upgrade to newer -# version of the script that does not contain this bug. -# Note: To use this script, two additional data files are needed: -# (1) smart_common_words.txt - contains stopword list from SMART IR engine -# (2) WordNet-2.0.exc.db - WordNet 2.0 exception inflexion database -# These two files have to be put in a directory pointed by the environment -# variable: "ROUGE_EVAL_HOME". -# If environment variable ROUGE_EVAL_HOME does not exist, this script will -# will assume it can find these two database files in the current directory. 
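# Worked example of the alpha-weighted F-measure defined above (numbers are illustrative only):
#   with R = 0.50, P = 0.25 and the default alpha = 0.5,
#   F = 1 / (0.5*(1/0.25) + 0.5*(1/0.50)) = 1 / (2 + 1) = 0.333...
#   alpha -> 1 reduces F to precision (F = P); alpha -> 0 reduces F to recall (F = R).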
-# COPYRIGHT (C) UNIVERSITY OF SOUTHERN CALIFORNIA, 2002,2003,2004 -# University of Southern California -# Information Sciences Institute -# 4676 Admiralty Way -# Marina Del Rey, California 90292-6695 -# -# This software was partially developed under SPAWAR Grant No. -# N66001-00-1-8916 , and the Government holds license rights under -# DAR 7-104.9(a)(c)(1). It is -# transmitted outside of the University of Southern California only under -# written license agreements or software exchange agreements, and its use -# is limited by these agreements. At no time shall any recipient use -# this software in any manner which conflicts or interferes with the -# governmental license rights or other provisions of the governing -# agreement under which it is obtained. It is supplied "AS IS," without -# any warranties of any kind. It is furnished only on the basis that any -# party who receives it indemnifies and holds harmless the parties who -# furnish and originate it against any claims, demands or liabilities -# connected with using it, furnishing it to others or providing it to a -# third party. THIS NOTICE MUST NOT BE REMOVED FROM THE SOFTWARE, -# AND IN THE EVENT THAT THE SOFTWARE IS DIVIDED, IT SHOULD BE -# ATTACHED TO EVERY PART. -# -# Contributor to its design is Chin-Yew Lin. - -use XML::DOM; -use DB_File; -use Getopt::Std; -#------------------------------------------------------------------------------------- -use vars qw($opt_a $opt_b $opt_c $opt_d $opt_e $opt_f $opt_h $opt_H $opt_m $opt_n $opt_p $opt_s $opt_t $opt_l $opt_v $opt_w $opt_2 $opt_u $opt_x $opt_U $opt_3 $opt_M $opt_z); -my $usageFull="$0\n [-a (evaluate all systems)] - [-c cf] - [-d (print per evaluation scores)] - [-e ROUGE_EVAL_HOME] - [-h (usage)] - [-H (detailed usage)] - [-b n-bytes|-l n-words] - [-m (use Porter stemmer)] - [-n max-ngram] - [-s (remove stopwords)] - [-r number-of-samples (for resampling)] - [-2 max-gap-length (if < 0 then no gap length limit)] - [-3 (for scoring based on BE)] - [-u (include unigram in skip-bigram) default no)] - [-U (same as -u but also compute regular skip-bigram)] - [-w weight (weighting factor for WLCS)] - [-v (verbose)] - [-x (do not calculate ROUGE-L)] - [-f A|B (scoring formula)] - [-p alpha (0 <= alpha <=1)] - [-t 0|1|2 (count by token instead of sentence)] - [-z ] - []\n -". - "ROUGE-eval-config-file: Specify the evaluation setup. Three files come with the ROUGE evaluation package, i.e.\n". - " ROUGE-test.xml, verify.xml, and verify-spl.xml are good examples.\n". - "systemID: Specify which system in the ROUGE-eval-config-file to perform the evaluation.\n". - " If '-a' option is used, then all systems are evaluated and users do not need to\n". - " provide this argument.\n". - "Default:\n". - " When running ROUGE without supplying any options (except -a), the following defaults are used:\n". - " (1) ROUGE-L is computed;\n". - " (2) 95% confidence interval;\n". - " (3) No stemming;\n". - " (4) Stopwords are inlcuded in the calculations;\n". - " (5) ROUGE looks for its data directory first through the ROUGE_EVAL_HOME environment variable. If\n". - " it is not set, the current directory is used.\n". - " (6) Use model average scoring formula.\n". - " (7) Assign equal importance of ROUGE recall and precision in computing ROUGE f-measure, i.e. alpha=0.5.\n". - " (8) Compute average ROUGE by averaging sentence (unit) ROUGE scores.\n". - "Options:\n". - " -2: Compute skip bigram (ROGUE-S) co-occurrence, also specify the maximum gap length between two words (skip-bigram)\n". 
- " -u: Compute skip bigram as -2 but include unigram, i.e. treat unigram as \"start-sentence-symbol unigram\"; -2 has to be specified.\n". - " -3: Compute BE score. Currently only SIMPLE BE triple format is supported.\n". - " H -> head only scoring (does not applied to Minipar-based BEs).\n". - " HM -> head and modifier pair scoring.\n". - " HMR -> head, modifier and relation triple scoring.\n". - " HM1 -> H and HM scoring (same as HM for Minipar-based BEs).\n". - " HMR1 -> HM and HMR scoring (same as HMR for Minipar-based BEs).\n". - " HMR2 -> H, HM and HMR scoring (same as HMR for Minipar-based BEs).\n". - " -a: Evaluate all systems specified in the ROUGE-eval-config-file.\n". - " -c: Specify CF\% (0 <= CF <= 100) confidence interval to compute. The default is 95\% (i.e. CF=95).\n". - " -d: Print per evaluation average score for each system.\n". - " -e: Specify ROUGE_EVAL_HOME directory where the ROUGE data files can be found.\n". - " This will overwrite the ROUGE_EVAL_HOME specified in the environment variable.\n". - " -f: Select scoring formula: 'A' => model average; 'B' => best model\n". - " -h: Print usage information.\n". - " -H: Print detailed usage information.\n". - " -b: Only use the first n bytes in the system/peer summary for the evaluation.\n". - " -l: Only use the first n words in the system/peer summary for the evaluation.\n". - " -m: Stem both model and system summaries using Porter stemmer before computing various statistics.\n". - " -n: Compute ROUGE-N up to max-ngram length will be computed.\n". - " -p: Relative importance of recall and precision ROUGE scores. Alpha -> 1 favors precision, Alpha -> 0 favors recall.\n". - " -s: Remove stopwords in model and system summaries before computing various statistics.\n". - " -t: Compute average ROUGE by averaging over the whole test corpus instead of sentences (units).\n". - " 0: use sentence as counting unit, 1: use token as couting unit, 2: same as 1 but output raw counts\n". - " instead of precision, recall, and f-measure scores. 2 is useful when computation of the final,\n". - " precision, recall, and f-measure scores will be conducted later.\n". - " -r: Specify the number of sampling point in bootstrap resampling (default is 1000).\n". - " Smaller number will speed up the evaluation but less reliable confidence interval.\n". - " -w: Compute ROUGE-W that gives consecutive matches of length L in an LCS a weight of 'L^weight' instead of just 'L' as in LCS.\n". - " Typically this is set to 1.2 or other number greater than 1.\n". - " -v: Print debugging information for diagnositic purpose.\n". - " -x: Do not calculate ROUGE-L.\n". 
- " -z: ROUGE-eval-config-file is a list of peer-model pair per line in the specified format (SEE|SPL|ISI|SIMPLE).\n"; - -my $usage="$0\n [-a (evaluate all systems)] - [-c cf] - [-d (print per evaluation scores)] - [-e ROUGE_EVAL_HOME] - [-h (usage)] - [-H (detailed usage)] - [-b n-bytes|-l n-words] - [-m (use Porter stemmer)] - [-n max-ngram] - [-s (remove stopwords)] - [-r number-of-samples (for resampling)] - [-2 max-gap-length (if < 0 then no gap length limit)] - [-3 (for scoring based on BE)] - [-u (include unigram in skip-bigram) default no)] - [-U (same as -u but also compute regular skip-bigram)] - [-w weight (weighting factor for WLCS)] - [-v (verbose)] - [-x (do not calculate ROUGE-L)] - [-f A|B (scoring formula)] - [-p alpha (0 <= alpha <=1)] - [-t 0|1|2 (count by token instead of sentence)] - [-z ] - [] -"; -getopts('ahHb:c:de:f:l:mMn:p:st:r:2:3:w:uUvxz:'); -my $systemID; - -die $usageFull if defined($opt_H); -die $usage if defined($opt_h)||@ARGV==0; -die "Please specify the ROUGE configuration file or use option '-h' for help\n" if(@ARGV==0); -if(@ARGV==1&&defined($opt_z)) { - $systemID="X"; # default system ID -} -elsif(@ARGV==1&&!defined($opt_a)) { - die "Please specify a system ID to evaluate or use option '-a' to evaluate all systems. For more information, use option '-h'.\n"; -} -elsif(@ARGV==2) { - $systemID=$ARGV[1]; -} -if(defined($opt_e)) { - $stopwords="$opt_e/smart_common_words.txt"; - $wordnetDB="$opt_e/WordNet-2.0.exc.db"; -} -else { - if(exists($ENV{"ROUGE_EVAL_HOME"})) { - $stopwords="$ENV{\"ROUGE_EVAL_HOME\"}/smart_common_words.txt"; - $wordnetDB="$ENV{\"ROUGE_EVAL_HOME\"}/WordNet-2.0.exc.db"; - } - elsif(exists($ENV{"RED_EVAL_HOME"})) { - $stopwords="$ENV{\"RED_EVAL_HOME\"}/smart_common_words.txt"; - $wordnetDB="$ENV{\"RED_EVAL_HOME\"}/WordNet-2.0.exc.db"; - } - else { - # if no environment variable exists then assume data files are in the current directory - $stopwords="smart_common_words.txt"; - $wordnetDB="WordNet-2.0.exc.db"; - } -} - -if(defined($opt_s)) { - $useStopwords=0; # do not use stop words -} -else { - $useStopwords=1; # use stop words -} - -if(defined($opt_l)&&defined($opt_b)) { - die "Please specify length limit in words or bytes but not both.\n"; -} - -if(defined($opt_l)) { - $lengthLimit=$opt_l; - $byteLimit=0; # no byte limit -} -elsif(defined($opt_b)) { - $lengthLimit=0; # no length limit in words - $byteLimit=$opt_b; -} -else { - $byteLimit=0; # no byte limit - $lengthLimit=0; # no length limit -} - -unless(defined($opt_c)) { - $opt_c=95; -} -else { - if($opt_c<0||$opt_c>100) { - die "Confidence interval should be within 0 and 100. 
Use option -h for more details.\n"; - } -} - -if(defined($opt_w)) { - if($opt_w>0) { - $weightFactor=$opt_w; - } - else { - die "ROUGE-W weight factor must greater than 0.\n"; - } -} -#unless(defined($opt_n)) { -# $opt_n=4; # default maximum ngram is 4 -#} -if(defined($opt_v)) { - $debug=1; -} -else { - $debug=0; -} - -if(defined($opt_r)) { - $numOfResamples=$opt_r; -} -else { - $numOfResamples=1000; -} - -if(defined($opt_2)) { - $skipDistance=$opt_2; -} - -if(defined($opt_3)) { - $BEMode=$opt_3; -} - -if(defined($opt_f)) { - $scoreMode=$opt_f; -} -else { - $scoreMode="A"; # default: use model average scoring formula -} - -if(defined($opt_p)) { - $alpha=$opt_p; - if($alpha<0|| - $alpha>1) { - die "Relative importance of ROUGE recall and precision has to be between 0 and 1 inclusively.\n"; - } -} -else { - $alpha=0.5; # default is equal importance of ROUGE recall and precision -} - -if(defined($opt_t)) { - # make $opt_t as undef when appropriate option is given - # when $opt_t is undef, sentence level average will be used - if($opt_t==0) { - $opt_t=undef; - } - elsif($opt_t!=1&& - $opt_t!=2) { - $opt_t=undef; # other than 1 or 2, let $opt_t to be undef - } -} - -if(defined($opt_z)) { - # If opt_z is specified, the user has to specify a system ID that - # is used for identification therefore -a option is not allowed. - # Here we make it undef. - $opt_a=undef; -} -#------------------------------------------------------------------------------------- -# Setup ROUGE scoring parameters -%ROUGEParam=(); # ROUGE scoring parameter -if(defined($lengthLimit)) { - $ROUGEParam{"LENGTH"}=$lengthLimit; -} -else { - $ROUGEParam{"LENGTH"}=undef; -} -if(defined($byteLimit)) { - $ROUGEParam{"BYTE"}=$byteLimit; -} -else { - $ROUGEParam{"BYTE"}=undef; -} -if(defined($opt_n)) { # ngram size - $ROUGEParam{"NSIZE"}=$opt_n; -} -else { - $ROUGEParam{"NSIZE"}=undef; -} -if(defined($weightFactor)) { - $ROUGEParam{"WEIGHT"}=$weightFactor; -} -else { - $ROUGEParam{"WEIGHT"}=undef; -} -if(defined($skipDistance)) { - $ROUGEParam{"SD"}=$skipDistance; -} -else { - $ROUGEParam{"SD"}=undef; -} -if(defined($scoreMode)) { - $ROUGEParam{"SM"}=$scoreMode; -} -else { - $ROUGEParam{"SM"}=undef; -} -if(defined($alpha)) { - $ROUGEParam{"ALPHA"}=$alpha; -} -else { - $ROUGEParam{"ALPHA"}=undef; -} -if(defined($opt_t)) { - $ROUGEParam{"AVERAGE"}=$opt_t; -} -else { - $ROUGEParam{"AVERAGE"}=undef; -} -if(defined($opt_3)) { - $ROUGEParam{"BEMODE"}=$opt_3; -} -else { - $ROUGEParam{"BEMODE"}=undef; -} -#------------------------------------------------------------------------------------- -# load stopwords -%stopwords=(); -open(STOP,$stopwords)||die "Cannot open $stopwords\n"; -while(defined($line=)) { - chomp($line); - $stopwords{$line}=1; -} -close(STOP); -# load WordNet database -if(-e "$wordnetDB") { - tie %exceptiondb,'DB_File',"$wordnetDB",O_RDONLY,0440,$DB_HASH or - die "Cannot open exception db file for reading: $wordnetDB\n"; -} -else { - die "Cannot open exception db file for reading: $wordnetDB\n"; -} -#------------------------------------------------------------------------------------- -# Initialize Porter Stemmer -&initialise(); -#------------------------------------------------------------------------------------- -# Read and parse the document -my $parser = new XML::DOM::Parser; -my $doc; -unless(defined($opt_z)) { - $doc=$parser->parsefile($ARGV[0]); -} -else { - open($doc,$ARGV[0])||die "Cannot open $ARGV[0]\n"; -} -%ROUGEEvals=(); -@ROUGEEvalIDs=(); -%ROUGEPeerIDTable=(); -@allPeerIDs=(); -%knownMissing=(); # remember 
missing submission already known -if(defined($doc)) { - # read evaluation description file - &readEvals(\%ROUGEEvals,\@ROUGEEvalIDs,\%ROUGEPeerIDTable,$doc,undef); - # print evaluation configuration - if(defined($opt_z)) { - if(defined($ARGV[1])) { - $systemID=$ARGV[1]; - } - else { - $systemID="X"; # default system ID in BE file list evaluation mode - } - push(@allPeerIDs,$systemID); - } - else { - unless(defined($opt_a)) { - $systemID=$ARGV[1]; - push(@allPeerIDs,$systemID); - } - else { - # run evaluation for each peer listed in the description file - @allPeerIDs=sort (keys %ROUGEPeerIDTable); - } - } - foreach $peerID (@allPeerIDs) { - %testIDs=(); - # print "\@PEER($peerID)--------------------------------------------------\n"; - if(defined($opt_n)) { - # evaluate a specific peer - # compute ROUGE score up to $opt_n-gram - for($n=1;$n<=$opt_n;$n++) { - my (%ROUGEScores,%ROUGEAverages); - - %ROUGEScores=(); - foreach $e (@ROUGEEvalIDs) { - if($debug) { - print "\@Eval ($e)\n"; - } - $ROUGEParam{"NSIZE"}=$n; - &computeROUGEX("N",\%ROUGEScores,$e,$ROUGEEvals{$e},$peerID,\%ROUGEParam); - } - # compute averages - %ROUGEAverages=(); - &computeAverages(\%ROUGEScores,\%ROUGEAverages,$opt_t); - &printResults($peerID,\%ROUGEAverages,\%ROUGEScores,"ROUGE-$n",$opt_c,$opt_t,$opt_d); - } - } - unless(defined($opt_x)||defined($opt_3)) { - #----------------------------------------------- - # compute LCS score - %ROUGEScores=(); - foreach $e (@ROUGEEvalIDs) { - &computeROUGEX("L",\%ROUGEScores,$e,$ROUGEEvals{$e},$peerID,\%ROUGEParam); - } - # compute averages - %ROUGEAverages=(); - &computeAverages(\%ROUGEScores,\%ROUGEAverages,$opt_t); - &printResults($peerID,\%ROUGEAverages,\%ROUGEScores,"ROUGE-L",$opt_c,$opt_t,$opt_d); - } - if(defined($opt_w)) { - #----------------------------------------------- - # compute WLCS score - %ROUGEScores=(); - foreach $e (@ROUGEEvalIDs) { - &computeROUGEX("W",\%ROUGEScores,$e,$ROUGEEvals{$e},$peerID,\%ROUGEParam); - } - # compute averages - %ROUGEAverages=(); - &computeAverages(\%ROUGEScores,\%ROUGEAverages,$opt_t); - &printResults($peerID,\%ROUGEAverages,\%ROUGEScores,"ROUGE-W-$weightFactor",$opt_c,$opt_t,$opt_d); - } - if(defined($opt_2)) { - #----------------------------------------------- - # compute skip bigram score - %ROUGEScores=(); - foreach $e (@ROUGEEvalIDs) { - &computeROUGEX("S",\%ROUGEScores,$e,$ROUGEEvals{$e},$peerID,\%ROUGEParam); - } - # compute averages - %ROUGEAverages=(); - &computeAverages(\%ROUGEScores,\%ROUGEAverages,$opt_t); - if($skipDistance>=0) { - if(defined($opt_u)) { - &printResults($peerID,\%ROUGEAverages,\%ROUGEScores,"ROUGE-SU$skipDistance",$opt_c,$opt_t,$opt_d); - } - elsif(defined($opt_U)) { - # print regular skip bigram results - &printResults($peerID,\%ROUGEAverages,\%ROUGEScores,"ROUGE-S$skipDistance",$opt_c,$opt_t,$opt_d); - #----------------------------------------------- - # compute skip bigram with unigram extension score - $opt_u=1; - %ROUGEScores=(); - foreach $e (@ROUGEEvalIDs) { - &computeROUGEX("S",\%ROUGEScores,$e,$ROUGEEvals{$e},$peerID,\%ROUGEParam); - } - $opt_u=undef; - # compute averages - %ROUGEAverages=(); - &computeAverages(\%ROUGEScores,\%ROUGEAverages,$opt_t); - &printResults($peerID,\%ROUGEAverages,\%ROUGEScores,"ROUGE-SU$skipDistance",$opt_c,$opt_t,$opt_d); - } - else { - &printResults($peerID,\%ROUGEAverages,\%ROUGEScores,"ROUGE-S$skipDistance",$opt_c,$opt_t,$opt_d); - } - } - else { - if(defined($opt_u)) { - &printResults($peerID,\%ROUGEAverages,\%ROUGEScores,"ROUGE-SU*",$opt_c,$opt_t,$opt_d); - } - else { - 
&printResults($peerID,\%ROUGEAverages,\%ROUGEScores,"ROUGE-S*",$opt_c,$opt_t,$opt_d); - if(defined($opt_U)) { - #----------------------------------------------- - # compute skip bigram with unigram extension score - $opt_u=1; - %ROUGEScores=(); - foreach $e (@ROUGEEvalIDs) { - &computeROUGEX("S",\%ROUGEScores,$e,$ROUGEEvals{$e},$peerID,\%ROUGEParam); - } - $opt_u=undef; - # compute averages - %ROUGEAverages=(); - &computeAverages(\%ROUGEScores,\%ROUGEAverages,$opt_t); - &printResults($peerID,\%ROUGEAverages,\%ROUGEScores,"ROUGE-SU*",$opt_c,$opt_t,$opt_d); - } - } - } - } - if(defined($opt_3)) { - #----------------------------------------------- - # compute Basic Element triple score - %ROUGEScores=(); - foreach $e (@ROUGEEvalIDs) { - &computeROUGEX("BE",\%ROUGEScores,$e,$ROUGEEvals{$e},$peerID,\%ROUGEParam); - } - # compute averages - %ROUGEAverages=(); - &computeAverages(\%ROUGEScores,\%ROUGEAverages,$opt_t); - &printResults($peerID,\%ROUGEAverages,\%ROUGEScores,"ROUGE-BE-$BEMode",$opt_c,$opt_t,$opt_d); - } - } -} -else { - die "Document undefined\n"; -} -if(defined($opt_z)) { - close($doc); -} -untie %exceptiondb; - -sub printResults { - my $peerID=shift; - my $ROUGEAverages=shift; - my $ROUGEScores=shift; - my $methodTag=shift; - my $opt_c=shift; - my $opt_t=shift; - my $opt_d=shift; - - print "---------------------------------------------\n"; - if(!defined($opt_t)||$opt_t==1) { - print "$peerID $methodTag Average_R: $ROUGEAverages->{'AvgR'} "; - print "($opt_c\%-conf.int. $ROUGEAverages->{'CIAvgL_R'} - $ROUGEAverages->{'CIAvgU_R'})\n"; - print "$peerID $methodTag Average_P: $ROUGEAverages->{'AvgP'} "; - print "($opt_c\%-conf.int. $ROUGEAverages->{'CIAvgL_P'} - $ROUGEAverages->{'CIAvgU_P'})\n"; - print "$peerID $methodTag Average_F: $ROUGEAverages->{'AvgF'} "; - print "($opt_c\%-conf.int. $ROUGEAverages->{'CIAvgL_F'} - $ROUGEAverages->{'CIAvgU_F'})\n"; - } - else { - print "$peerID $methodTag M_count: "; - print int($ROUGEAverages->{'M_cnt'}); - print " P_count: "; - print int($ROUGEAverages->{'P_cnt'}); - print " H_count: "; - print int($ROUGEAverages->{'H_cnt'}); - print "\n"; - } - if(defined($opt_d)) { - print ".............................................\n"; - &printPerEvalData($ROUGEScores,"$peerID $methodTag Eval"); - } -} - -sub bootstrapResampling { - my $scores=shift; - my $instances=shift; - my $seed=shift; - my $opt_t=shift; - my $sample; - my ($i,$ridx); - - # Use $seed to seed the random number generator to make sure - # we have the same random sequence every time, therefore a - # consistent estimation of confidence interval in different runs. - # This is not necessary. To ensure a consistent result in reporting - # results using ROUGE, this is implemented. 
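  # Each resampling pass below draws scalar(@$instances) evaluation items with replacement and
  # accumulates their scores; depending on -t it reports either the per-item average or the
  # corpus-level R/P/F computed from the accumulated counts. computeAverages then sorts these
  # per-pass values and reads the requested confidence bounds off that empirical distribution.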
- srand($seed); - for($i=0;$i<@{$instances};$i++) { - # generate a random index - $ridx=int(rand(@{$instances})); - unless(defined($sample)) { - # setup the resampling array - $sample=[]; - push(@$sample,$scores->{$instances->[$ridx]}[0]); - push(@$sample,$scores->{$instances->[$ridx]}[1]); - push(@$sample,$scores->{$instances->[$ridx]}[2]); - } - else { - # update the resampling array - $sample->[0]+=$scores->{$instances->[$ridx]}[0]; - $sample->[1]+=$scores->{$instances->[$ridx]}[1]; - $sample->[2]+=$scores->{$instances->[$ridx]}[2]; - } - } - # compute the average result for this resampling procedure - unless(defined($opt_t)) { - # per instance or sentence average - if(@{$instances}>0) { - $sample->[0]/=@{$instances}; - $sample->[1]/=@{$instances}; - $sample->[2]/=@{$instances}; - } - else { - $sample->[0]=0; - $sample->[1]=0; - $sample->[2]=0; - } - } - else { - if($opt_t==1) { - # per token or corpus level average - # output recall, precision, and f-measure score - my ($tmpR,$tmpP,$tmpF); - if($sample->[0]>0) { - $tmpR=$sample->[2]/$sample->[0]; # recall - } - else { - $tmpR=0; - } - if($sample->[1]>0) { - $tmpP=$sample->[2]/$sample->[1]; # precision - } - else { - $tmpP=0; - } - if((1-$alpha)*$tmpP+$alpha*$tmpR>0) { - $tmpF=($tmpR*$tmpP)/((1-$alpha)*$tmpP+$alpha*$tmpR); # f-measure - } - else { - $tmpF=0; - } - $sample->[0]=$tmpR; - $sample->[1]=$tmpP; - $sample->[2]=$tmpF; - } - else { - # $opt_t!=1 => output raw model token count, peer token count, and hit count - # do nothing, just return $sample - } - } - return $sample; -} - -sub by_value { - $a<=>$b; -} - -sub printPerEvalData { - my $ROUGEScores=shift; - my $tag=shift; # tag to identify each evaluation - my (@instances,$i,$j); - - @instances=sort by_evalID (keys %$ROUGEScores); - foreach $i (@instances) { - # print average per evaluation score - print "$tag $i R:$ROUGEScores->{$i}[0] P:$ROUGEScores->{$i}[1] F:$ROUGEScores->{$i}[2]\n"; - } -} - -sub by_evalID { - my ($a1,$b1); - - if($a=~/^([0-9]+)/o) { - $a1=$1; - } - if($b=~/^([0-9]+)/o) { - $b1=$1; - } - if(defined($a1)&&defined($b1)) { - return $a1<=>$b1; - } - else { - return $a cmp $b; - } -} - -sub computeAverages { - my $ROUGEScores=shift; - my $ROUGEAverages=shift; - my $opt_t=shift; - my ($avgAvgROUGE_R,$resampleAvgROUGE_R); - my ($avgAvgROUGE_P,$resampleAvgROUGE_P); - my ($avgAvgROUGE_F,$resampleAvgROUGE_F); - my ($ciU,$ciL); - my (@instances,$i,$j,@rankedArray_R,@rankedArray_P,@RankedArray_F); - - @instances=sort (keys %$ROUGEScores); - $avgAvgROUGE_R=0; - $avgAvgROUGE_P=0; - $avgAvgROUGE_F=0; - $resampleAvgROUGE_R=0; - $resampleAvgROUGE_P=0; - $resampleAvgROUGE_F=0; - # compute totals - foreach $i (@instances) { - $avgAvgROUGE_R+=$ROUGEScores->{$i}[0]; # recall ; or model token count - $avgAvgROUGE_P+=$ROUGEScores->{$i}[1]; # precision ; or peer token count - $avgAvgROUGE_F+=$ROUGEScores->{$i}[2]; # f1-measure ; or match token count (hit) - } - # compute averages - unless(defined($opt_t)) { - # per sentence average - if((scalar @instances)>0) { - $avgAvgROUGE_R=sprintf("%7.5f",$avgAvgROUGE_R/(scalar @instances)); - $avgAvgROUGE_P=sprintf("%7.5f",$avgAvgROUGE_P/(scalar @instances)); - $avgAvgROUGE_F=sprintf("%7.5f",$avgAvgROUGE_F/(scalar @instances)); - } - else { - $avgAvgROUGE_R=sprintf("%7.5f",0); - $avgAvgROUGE_P=sprintf("%7.5f",0); - $avgAvgROUGE_F=sprintf("%7.5f",0); - } - } - else { - if($opt_t==1) { - # per token average on corpus level - my ($tmpR,$tmpP,$tmpF); - if($avgAvgROUGE_R>0) { - $tmpR=$avgAvgROUGE_F/$avgAvgROUGE_R; - } - else { - $tmpR=0; - } - 
if($avgAvgROUGE_P>0) { - $tmpP=$avgAvgROUGE_F/$avgAvgROUGE_P; - } - else { - $tmpP=0; - } - if((1-$alpha)*$tmpP+$alpha*$tmpR>0) { - $tmpF=($tmpR+$tmpP)/((1-$alpha)*$tmpP+$alpha*$tmpR); - } - else { - $tmpF=0; - } - $avgAvgROUGE_R=sprintf("%7.5f",$tmpR); - $avgAvgROUGE_P=sprintf("%7.5f",$tmpP); - $avgAvgROUGE_F=sprintf("%7.5f",$tmpF); - } - } - if(!defined($opt_t)||$opt_t==1) { - # compute confidence intervals using bootstrap resampling - @ResamplingArray=(); - for($i=0;$i<$numOfResamples;$i++) { - my $sample; - - $sample=&bootstrapResampling($ROUGEScores,\@instances,$i,$opt_t); - # sample contains average sum of the sample - if(@ResamplingArray==0) { - # setup the resampling array for Avg - my $s; - - $s=[]; - push(@$s,$sample->[0]); - push(@ResamplingArray,$s); - $s=[]; - push(@$s,$sample->[1]); - push(@ResamplingArray,$s); - $s=[]; - push(@$s,$sample->[2]); - push(@ResamplingArray,$s); - } - else { - $rsa=$ResamplingArray[0]; - push(@{$rsa},$sample->[0]); - $rsa=$ResamplingArray[1]; - push(@{$rsa},$sample->[1]); - $rsa=$ResamplingArray[2]; - push(@{$rsa},$sample->[2]); - } - } - # sort resampling results - { - # recall - @rankedArray_R=sort by_value (@{$ResamplingArray[0]}); - $ResamplingArray[0]=\@rankedArray_R; - for($x=0;$x<=$#rankedArray_R;$x++) { - $resampleAvgROUGE_R+=$rankedArray_R[$x]; - # print "*R ($x): $rankedArray_R[$x]\n"; - } - $resampleAvgROUGE_R=sprintf("%7.5f",$resampleAvgROUGE_R/(scalar @rankedArray_R)); - # precision - @rankedArray_P=sort by_value (@{$ResamplingArray[1]}); - $ResamplingArray[1]=\@rankedArray_P; - for($x=0;$x<=$#rankedArray_P;$x++) { - $resampleAvgROUGE_P+=$rankedArray_P[$x]; - # print "*P ($x): $rankedArray_P[$x]\n"; - } - $resampleAvgROUGE_P=sprintf("%7.5f",$resampleAvgROUGE_P/(scalar @rankedArray_P)); - # f1-measure - @rankedArray_F=sort by_value (@{$ResamplingArray[2]}); - $ResamplingArray[2]=\@rankedArray_F; - for($x=0;$x<=$#rankedArray_F;$x++) { - $resampleAvgROUGE_F+=$rankedArray_F[$x]; - # print "*F ($x): $rankedArray_F[$x]\n"; - } - $resampleAvgROUGE_F=sprintf("%7.5f",$resampleAvgROUGE_F/(scalar @rankedArray_F)); - } - # $ciU=999-int((100-$opt_c)*10/2); # upper bound index - # $ciL=int((100-$opt_c)*10/2); # lower bound index - $delta=$numOfResamples*((100-$opt_c)/2.0)/100.0; - $ciUa=int($numOfResamples-$delta-1); # upper confidence interval lower index - $ciUb=$ciUa+1; # upper confidence interval upper index - $ciLa=int($delta); # lower confidence interval lower index - $ciLb=$ciLa+1; # lower confidence interval upper index - $ciR=$numOfResamples-$delta-1-$ciUa; # ratio bewteen lower and upper indexes - # $ROUGEAverages->{"AvgR"}=$avgAvgROUGE_R; - #------- - # recall - $ROUGEAverages->{"AvgR"}=$resampleAvgROUGE_R; - # find condifence intervals; take maximum distance from the mean - $ROUGEAverages->{"CIAvgL_R"}=sprintf("%7.5f",$ResamplingArray[0][$ciLa]+ - ($ResamplingArray[0][$ciLb]-$ResamplingArray[0][$ciLa])*$ciR); - $ROUGEAverages->{"CIAvgU_R"}=sprintf("%7.5f",$ResamplingArray[0][$ciUa]+ - ($ResamplingArray[0][$ciUb]-$ResamplingArray[0][$ciUa])*$ciR); - #------- - # precision - $ROUGEAverages->{"AvgP"}=$resampleAvgROUGE_P; - # find condifence intervals; take maximum distance from the mean - $ROUGEAverages->{"CIAvgL_P"}=sprintf("%7.5f",$ResamplingArray[1][$ciLa]+ - ($ResamplingArray[1][$ciLb]-$ResamplingArray[1][$ciLa])*$ciR); - $ROUGEAverages->{"CIAvgU_P"}=sprintf("%7.5f",$ResamplingArray[1][$ciUa]+ - ($ResamplingArray[1][$ciUb]-$ResamplingArray[1][$ciUa])*$ciR); - #------- - # f1-measure - $ROUGEAverages->{"AvgF"}=$resampleAvgROUGE_F; - # 
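- # How the CIAvg bounds are read off: with the resample means sorted,
- # $delta = $numOfResamples * ((100 - $opt_c) / 2) / 100 positions are cut
- # from each tail, interpolating linearly between neighbouring ranks via
- # $ciR.  With 1000 resamples, for example, and -c 95, $delta = 25, so the
- # lower and upper bounds are the sorted values at ranks 25 and 974.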
find condifence intervals; take maximum distance from the mean - $ROUGEAverages->{"CIAvgL_F"}=sprintf("%7.5f",$ResamplingArray[2][$ciLa]+ - ($ResamplingArray[2][$ciLb]-$ResamplingArray[2][$ciLa])*$ciR); - $ROUGEAverages->{"CIAvgU_F"}=sprintf("%7.5f",$ResamplingArray[2][$ciUa]+ - ($ResamplingArray[2][$ciUb]-$ResamplingArray[2][$ciUa])*$ciR); - $ROUGEAverages->{"M_cnt"}=$avgAvgROUGE_R; # model token count - $ROUGEAverages->{"P_cnt"}=$avgAvgROUGE_P; # peer token count - $ROUGEAverages->{"H_cnt"}=$avgAvgROUGE_F; # hit token count - } - else { - # $opt_t==2 => output raw count instead of precision, recall, and f-measure values - # in this option, no resampling is necessary, just output the raw counts - $ROUGEAverages->{"M_cnt"}=$avgAvgROUGE_R; # model token count - $ROUGEAverages->{"P_cnt"}=$avgAvgROUGE_P; # peer token count - $ROUGEAverages->{"H_cnt"}=$avgAvgROUGE_F; # hit token count - } -} - -sub computeROUGEX { - my $metric=shift; # which ROUGE metric to compute? - my $ROUGEScores=shift; - my $evalID=shift; - my $ROUGEEval=shift; # one particular evaluation pair - my $peerID=shift; # a specific peer ID - my $ROUGEParam=shift; # ROUGE scoring parameters - my $lengthLimit; # lenght limit in words - my $byteLimit; # length limit in bytes - my $NSIZE; # ngram size for ROUGE-N - my $weightFactor; # weight factor for ROUGE-W - my $skipDistance; # skip distance for ROUGE-S - my $scoreMode; # scoring mode: A = model average; B = best model - my $alpha; # relative importance between recall and precision - my $opt_t; # ROUGE score counting mode - my $BEMode; # Basic Element scoring mode - my ($c,$cx,@modelPaths,$modelIDs,$modelRoot,$inputFormat); - - $lengthLimit=$ROUGEParam->{"LENGTH"}; - $byteLimit=$ROUGEParam->{"BYTE"}; - $NSIZE=$ROUGEParam->{"NSIZE"}; - $weightFactor=$ROUGEParam->{"WEIGHT"}; - $skipDistance=$ROUGEParam->{"SD"}; - $scoreMode=$ROUGEParam->{"SM"}; - $alpha=$ROUGEParam->{"ALPHA"}; - $opt_t=$ROUGEParam->{"AVERAGE"}; - $BEMode=$ROUGEParam->{"BEMODE"}; - - # Check to see if this evaluation trial contains this $peerID. - # Sometimes not every peer provides response for each - # evaluation trial. - unless(exists($ROUGEEval->{"Ps"}{$peerID})) { - unless(exists($knownMissing{$evalID})) { - $knownMissing{$evalID}={}; - } - unless(exists($knownMissing{$evalID}{$peerID})) { - print STDERR "\*ROUGE Warning: test instance for peer $peerID does not exist for evaluation $evalID\n"; - $knownMissing{$evalID}{$peerID}=1; - } - return; - } - unless(defined($opt_z)) { - $peerPath=$ROUGEEval->{"PR"}."/".$ROUGEEval->{"Ps"}{$peerID}; - } - else { - # if opt_z is set then peerPath is read from a file list that - # includes the path to the peer. - $peerPath=$ROUGEEval->{"Ps"}{$peerID}; - } - if(defined($ROUGEEval->{"MR"})) { - $modelRoot=$ROUGEEval->{"MR"}; - } - else { - # if opt_z is set then modelPath is read from a file list that - # includes the path to the model. - $modelRoot=""; - } - $modelIDs=$ROUGEEval->{"MIDList"}; - $inputFormat=$ROUGEEval->{"IF"}; - # construct combined model - @modelPaths=(); # reset model paths - for($cx=0;$cx<=$#{$modelIDs};$cx++) { - my $modelID; - $modelID=$modelIDs->[$cx]; - unless(defined($opt_z)) { - $modelPath="$modelRoot/$ROUGEEval->{\"Ms\"}{$modelID}"; # get full model path - } - else { - # if opt_z is set then modelPath is read from a file list that - # includes the full path to the model. 
- $modelPath="$ROUGEEval->{\"Ms\"}{$modelID}"; # get full model path - } - if(-e "$modelPath") { - # print "*$modelPath\n"; - } - else { - die "Cannot find model summary: $modelPath\n"; - } - push(@modelPaths,$modelPath); - } - #--------------------------------------------------------------- - # evaluate peer - { - my (@results); - my ($testID,$avgROUGE,$avgROUGE_P,$avgROUGE_F); - @results=(); - if($metric eq "N") { - &computeNGramScore(\@modelPaths,$peerPath,\@results,$NSIZE,$lengthLimit,$byteLimit,$inputFormat,$scoreMode,$alpha); - } - elsif($metric eq "L") { - &computeLCSScore(\@modelPaths,$peerPath,\@results,$lengthLimit,$byteLimit,$inputFormat,$scoreMode,$alpha); - } - elsif($metric eq "W") { - &computeWLCSScore(\@modelPaths,$peerPath,\@results,$lengthLimit,$byteLimit,$inputFormat,$weightFactor,$scoreMode,$alpha); - } - elsif($metric eq "S") { - &computeSkipBigramScore(\@modelPaths,$peerPath,\@results,$skipDistance,$lengthLimit,$byteLimit,$inputFormat,$scoreMode,$alpha); - } - elsif($metric eq "BE") { - &computeBEScore(\@modelPaths,$peerPath,\@results,$BEMode,$lengthLimit,$byteLimit,$inputFormat,$scoreMode,$alpha); - } - else { - die "Unknown ROUGE metric ID: $metric, has to be N, L, W, or S\n"; - - } - unless(defined($opt_t)) { - # sentence level average - $avgROUGE=sprintf("%7.5f",$results[2]); - $avgROUGE_P=sprintf("%7.5f",$results[4]); - $avgROUGE_F=sprintf("%7.5f",$results[5]); - } - else { - # corpus level per token average - $avgROUGE=$results[0]; # total model token count - $avgROUGE_P=$results[3]; # total peer token count - $avgROUGE_F=$results[1]; # total match count between model and peer, i.e. hit - } - # record ROUGE scores for the current test - $testID="$evalID\.$peerID"; - if($debug) { - print "$testID\n"; - } - unless(exists($testIDs{$testID})) { - $testIDs{$testID}=1; - } - unless(exists($ROUGEScores->{$testID})) { - $ROUGEScores->{$testID}=[]; - push(@{$ROUGEScores->{$testID}},$avgROUGE); # average ; or model token count - push(@{$ROUGEScores->{$testID}},$avgROUGE_P); # average ; or peer token count - push(@{$ROUGEScores->{$testID}},$avgROUGE_F); # average ; or match token count (hit) - } - } -} - -# 10/21/2004 add selection of scoring mode -# A: average over all models -# B: take only the best score -sub computeNGramScore { - my $modelPaths=shift; - my $peerPath=shift; - my $results=shift; - my $NSIZE=shift; - my $lengthLimit=shift; - my $byteLimit=shift; - my $inputFormat=shift; - my $scoreMode=shift; - my $alpha=shift; - my ($modelPath,$modelText,$peerText,$text,@tokens); - my (%model_grams,%peer_grams); - my ($gramHit,$gramScore,$gramScoreBest); - my ($totalGramHit,$totalGramCount); - my ($gramScoreP,$gramScoreF,$totalGramCountP); - - #------------------------------------------------ - # read model file and create model n-gram maps - $totalGramHit=0; - $totalGramCount=0; - $gramScoreBest=-1; - $gramScoreP=0; # precision - $gramScoreF=0; # f-measure - $totalGramCountP=0; - #------------------------------------------------ - # read peer file and create model n-gram maps - %peer_grams=(); - $peerText=""; - &readText($peerPath,\$peerText,$inputFormat,$lengthLimit,$byteLimit); - &createNGram($peerText,\%peer_grams,$NSIZE); - if($debug) { - print "***P $peerPath\n"; - if(defined($peerText)) { - print "$peerText\n"; - print join("|",%peer_grams),"\n"; - } - else { - print "---empty text---\n"; - } - } - foreach $modelPath (@$modelPaths) { - %model_grams=(); - $modelText=""; - &readText($modelPath,\$modelText,$inputFormat,$lengthLimit,$byteLimit); - 
&createNGram($modelText,\%model_grams,$NSIZE); - if($debug) { - if(defined($modelText)) { - print "$modelText\n"; - print join("|",%model_grams),"\n"; - } - else { - print "---empty text---\n"; - } - } - #------------------------------------------------ - # compute ngram score - &ngramScore(\%model_grams,\%peer_grams,\$gramHit,\$gramScore); - # collect hit and count for each models - # This will effectively clip hit for each model; therefore would not give extra - # credit to reducdant information contained in the peer summary. - if($scoreMode eq "A") { - $totalGramHit+=$gramHit; - $totalGramCount+=$model_grams{"_cn_"}; - $totalGramCountP+=$peer_grams{"_cn_"}; - } - elsif($scoreMode eq "B") { - if($gramScore>$gramScoreBest) { - # only take a better score (i.e. better match) - $gramScoreBest=$gramScore; - $totalGramHit=$gramHit; - $totalGramCount=$model_grams{"_cn_"}; - $totalGramCountP=$peer_grams{"_cn_"}; - } - } - else { - # use average mode - $totalGramHit+=$gramHit; - $totalGramCount+=$model_grams{"_cn_"}; - $totalGramCountP+=$peer_grams{"_cn_"}; - } - if($debug) { - print "***M $modelPath\n"; - } - } - # prepare score result for return - # unigram - push(@$results,$totalGramCount); # total number of ngrams in models - push(@$results,$totalGramHit); - if($totalGramCount!=0) { - $gramScore=sprintf("%7.5f",$totalGramHit/$totalGramCount); - } - else { - $gramScore=sprintf("%7.5f",0); - } - push(@$results,$gramScore); - push(@$results,$totalGramCountP); # total number of ngrams in peers - if($totalGramCountP!=0) { - $gramScoreP=sprintf("%7.5f",$totalGramHit/$totalGramCountP); - } - else { - $gramScoreP=sprintf("%7.5f",0); - } - push(@$results,$gramScoreP); # precision score - if((1-$alpha)*$gramScoreP+$alpha*$gramScore>0) { - $gramScoreF=sprintf("%7.5f",($gramScoreP*$gramScore)/((1-$alpha)*$gramScoreP+$alpha*$gramScore)); - } - else { - $gramScoreF=sprintf("%7.5f",0); - } - push(@$results,$gramScoreF); # f1-measure score - if($debug) { - print "total $NSIZE-gram model count: $totalGramCount\n"; - print "total $NSIZE-gram peer count: $totalGramCountP\n"; - print "total $NSIZE-gram hit: $totalGramHit\n"; - print "total ROUGE-$NSIZE\-R: $gramScore\n"; - print "total ROUGE-$NSIZE\-P: $gramScoreP\n"; - print "total ROUGE-$NSIZE\-F: $gramScoreF\n"; - } -} - -sub computeSkipBigramScore { - my $modelPaths=shift; - my $peerPath=shift; - my $results=shift; - my $skipDistance=shift; - my $lengthLimit=shift; - my $byteLimit=shift; - my $inputFormat=shift; - my $scoreMode=shift; - my $alpha=shift; - my ($modelPath,$modelText,$peerText,$text,@tokens); - my (%model_grams,%peer_grams); - my ($gramHit,$gramScore,$gramScoreBest); - my ($totalGramHitm,$totalGramCount); - my ($gramScoreP,$gramScoreF,$totalGramCountP); - - #------------------------------------------------ - # read model file and create model n-gram maps - $totalGramHit=0; - $totalGramCount=0; - $gramScoreBest=-1; - $gramScoreP=0; # precision - $gramScoreF=0; # f-measure - $totalGramCountP=0; - #------------------------------------------------ - # read peer file and create model n-gram maps - %peer_grams=(); - $peerText=""; - &readText($peerPath,\$peerText,$inputFormat,$lengthLimit,$byteLimit); - &createSkipBigram($peerText,\%peer_grams,$skipDistance); - if($debug) { - print "***P $peerPath\n"; - if(defined($peerText)) { - print "$peerText\n"; - print join("|",%peer_grams),"\n"; - } - else { - print "---empty text---\n"; - } - } - foreach $modelPath (@$modelPaths) { - %model_grams=(); - $modelText=""; - 
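- # Skip-bigram refresher for the scoring below: createSkipBigram() emits
- # every ordered word pair (w_i, w_j) with j > i and at most $skipDistance
- # words between them (a negative $skipDistance means unlimited), plus plain
- # unigrams when the -U/-u extension is active (ROUGE-SU).  Example with
- # unlimited skip: "police killed the gunman" yields the six pairs
- # "police killed", "police the", "police gunman", "killed the",
- # "killed gunman", "the gunman".  Matching then uses the same clipped-count
- # recall/precision as ROUGE-N.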
&readText($modelPath,\$modelText,$inputFormat,$lengthLimit,$byteLimit); - if(defined($opt_M)) { # only apply stemming on models - $opt_m=1; - } - &createSkipBigram($modelText,\%model_grams,$skipDistance); - if(defined($opt_M)) { # only apply stemming on models - $opt_m=undef; - } - if($debug) { - if(defined($modelText)) { - print "$modelText\n"; - print join("|",%model_grams),"\n"; - } - else { - print "---empty text---\n"; - } - } - #------------------------------------------------ - # compute ngram score - &skipBigramScore(\%model_grams,\%peer_grams,\$gramHit,\$gramScore); - # collect hit and count for each models - # This will effectively clip hit for each model; therefore would not give extra - # credit to reducdant information contained in the peer summary. - if($scoreMode eq "A") { - $totalGramHit+=$gramHit; - $totalGramCount+=$model_grams{"_cn_"}; - $totalGramCountP+=$peer_grams{"_cn_"}; - } - elsif($scoreMode eq "B") { - if($gramScore>$gramScoreBest) { - # only take a better score (i.e. better match) - $gramScoreBest=$gramScore; - $totalGramHit=$gramHit; - $totalGramCount=$model_grams{"_cn_"}; - $totalGramCountP=$peer_grams{"_cn_"}; - } - } - else { - # use average mode - $totalGramHit+=$gramHit; - $totalGramCount+=$model_grams{"_cn_"}; - $totalGramCountP+=$peer_grams{"_cn_"}; - } - if($debug) { - print "***M $modelPath\n"; - } - } - # prepare score result for return - # unigram - push(@$results,$totalGramCount); # total number of ngrams - push(@$results,$totalGramHit); - if($totalGramCount!=0) { - $gramScore=sprintf("%7.5f",$totalGramHit/$totalGramCount); - } - else { - $gramScore=sprintf("%7.5f",0); - } - push(@$results,$gramScore); - push(@$results,$totalGramCountP); # total number of ngrams in peers - if($totalGramCountP!=0) { - $gramScoreP=sprintf("%7.5f",$totalGramHit/$totalGramCountP); - } - else { - $gramScoreP=sprintf("%7.5f",0); - } - push(@$results,$gramScoreP); # precision score - if((1-$alpha)*$gramScoreP+$alpha*$gramScore>0) { - $gramScoreF=sprintf("%7.5f",($gramScoreP*$gramScore)/((1-$alpha)*$gramScoreP+$alpha*$gramScore)); - } - else { - $gramScoreF=sprintf("%7.5f",0); - } - push(@$results,$gramScoreF); # f1-measure score - if($debug) { - print "total ROUGE-S$skipDistance model count: $totalGramCount\n"; - print "total ROUGE-S$skipDistance peer count: $totalGramCountP\n"; - print "total ROUGE-S$skipDistance hit: $totalGramHit\n"; - print "total ROUGE-S$skipDistance\-R: $gramScore\n"; - print "total ROUGE-S$skipDistance\-P: $gramScore\n"; - print "total ROUGE-S$skipDistance\-F: $gramScore\n"; - } -} - -sub computeLCSScore { - my $modelPaths=shift; - my $peerPath=shift; - my $results=shift; - my $lengthLimit=shift; - my $byteLimit=shift; - my $inputFormat=shift; - my $scoreMode=shift; - my $alpha=shift; - my ($modelPath,@modelText,@peerText,$text,@tokens); - my (@modelTokens,@peerTokens); - my ($lcsHit,$lcsScore,$lcsBase,$lcsScoreBest); - my ($totalLCSHitm,$totalLCSCount); - my (%peer_1grams,%tmp_peer_1grams,%model_1grams,$peerText1,$modelText1); - my ($lcsScoreP,$lcsScoreF,$totalLCSCountP); - - #------------------------------------------------ - $totalLCSHit=0; - $totalLCSCount=0; - $lcsScoreBest=-1; - $lcsScoreP=0; - $lcsScoreF=0; - $totalLCSCountP=0; - #------------------------------------------------ - # read peer file and create peer n-gram maps - @peerTokens=(); - @peerText=(); - &readText_LCS($peerPath,\@peerText,$inputFormat,$lengthLimit,$byteLimit); - &tokenizeText_LCS(\@peerText,\@peerTokens); - #------------------------------------------------ - # create 
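- # ROUGE-L notes for the block below: each model sentence is compared with
- # every peer sentence and the model tokens lying on some longest common
- # subsequence are marked (union LCS).  A marked token is only counted as a
- # hit while both its model and peer unigram budgets are positive; the
- # budgets are decremented on every hit, which clips double counting and
- # keeps ROUGE-L no higher than ROUGE-1.  Then R = hits / total model
- # tokens, P = hits / total peer tokens, and F combines them with the usual
- # alpha-weighted formula.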
unigram for clipping - %peer_1grams=(); - &readText($peerPath,\$peerText1,$inputFormat,$lengthLimit,$byteLimit); - &createNGram($peerText1,\%peer_1grams,1); - if($debug) { - my $i; - print "***P $peerPath\n"; - print join("\n",@peerText),"\n"; - for($i=0;$i<=$#peerText;$i++) { - print $i,": ",join("|",@{$peerTokens[$i]}),"\n"; - } - } - foreach $modelPath (@$modelPaths) { - %tmp_peer_1grams=%peer_1grams; # renew peer unigram hash, so the peer count can be reset to the orignal number - @modelTokens=(); - @modelText=(); - &readText_LCS($modelPath,\@modelText,$inputFormat,$lengthLimit,$byteLimit); - if(defined($opt_M)) { - $opt_m=1; - &tokenizeText_LCS(\@modelText,\@modelTokens); - $opt_m=undef; - } - else { - &tokenizeText_LCS(\@modelText,\@modelTokens); - } - #------------------------------------------------ - # create unigram for clipping - %model_1grams=(); - &readText($modelPath,\$modelText1,$inputFormat,$lengthLimit,$byteLimit); - if(defined($opt_M)) { # only apply stemming on models - $opt_m=1; - } - &createNGram($modelText1,\%model_1grams,1); - if(defined($opt_M)) { # only apply stemming on models - $opt_m=undef; - } - #------------------------------------------------ - # compute LCS score - &lcs(\@modelTokens,\@peerTokens,\$lcsHit,\$lcsScore,\$lcsBase,\%model_1grams,\%tmp_peer_1grams); - # collect hit and count for each models - # This will effectively clip hit for each model; therefore would not give extra - # credit to reductant information contained in the peer summary. - # Previous method that lumps model text together and inflates the peer summary - # the number of references time would reward redundant information - if($scoreMode eq "A") { - $totalLCSHit+=$lcsHit; - $totalLCSCount+=$lcsBase; - $totalLCSCountP+=$peer_1grams{"_cn_"}; - } - elsif($scoreMode eq "B") { - if($lcsScore>$lcsScoreBest) { - # only take a better score (i.e. 
better match) - $lcsScoreBest=$lcsScore; - $totalLCSHit=$lcsHit; - $totalLCSCount=$lcsBase; - $totalLCSCountP=$peer_1grams{"_cn_"}; - } - } - else { - # use average mode - $totalLCSHit+=$lcsHit; - $totalLCSCount+=$lcsBase; - $totalLCSCountP+=$peer_1grams{"_cn_"}; - } - if($debug) { - my $i; - print "***M $modelPath\n"; - print join("\n",@modelText),"\n"; - for($i=0;$i<=$#modelText;$i++) { - print $i,": ",join("|",@{$modelTokens[$i]}),"\n"; - } - } - } - # prepare score result for return - push(@$results,$totalLCSCount); # total number of ngrams - push(@$results,$totalLCSHit); - if($totalLCSCount!=0) { - $lcsScore=sprintf("%7.5f",$totalLCSHit/$totalLCSCount); - } - else { - $lcsScore=sprintf("%7.5f",0); - } - push(@$results,$lcsScore); - push(@$results,$totalLCSCountP); # total number of token in peers - if($totalLCSCountP!=0) { - $lcsScoreP=sprintf("%7.5f",$totalLCSHit/$totalLCSCountP); - } - else { - $lcsScoreP=sprintf("%7.5f",0); - } - push(@$results,$lcsScoreP); - if((1-$alpha)*$lcsScoreP+$alpha*$lcsScore>0) { - $lcsScoreF=sprintf("%7.5f",($lcsScoreP*$lcsScore)/((1-$alpha)*$lcsScoreP+$alpha*$lcsScore)); - } - else { - $lcsScoreF=sprintf("%7.5f",0); - } - push(@$results,$lcsScoreF); - if($debug) { - print "total ROUGE-L model count: $totalLCSCount\n"; - print "total ROUGE-L peer count: $totalLCSCountP\n"; - print "total ROUGE-L hit: $totalLCSHit\n"; - print "total ROUGE-L-R score: $lcsScore\n"; - print "total ROUGE-L-P: $lcsScoreP\n"; - print "total ROUGE-L-F: $lcsScoreF\n"; - } -} - -sub computeWLCSScore { - my $modelPaths=shift; - my $peerPath=shift; - my $results=shift; - my $lengthLimit=shift; - my $byteLimit=shift; - my $inputFormat=shift; - my $weightFactor=shift; - my $scoreMode=shift; - my $alpha=shift; - my ($modelPath,@modelText,@peerText,$text,@tokens); - my (@modelTokens,@peerTokens); - my ($lcsHit,$lcsScore,$lcsBase,$lcsScoreBest); - my ($totalLCSHitm,$totalLCSCount); - my (%peer_1grams,%tmp_peer_1grams,%model_1grams,$peerText1,$modelText1); - my ($lcsScoreP,$lcsScoreF,$totalLCSCountP); - - #------------------------------------------------ - # read model file and create model n-gram maps - $totalLCSHit=0; - $totalLCSCount=0; - $lcsScoreBest=-1; - $lcsScoreP=0; - $lcsScoreF=0; - $totalLCSCountP=0; - #------------------------------------------------ - # read peer file and create model n-gram maps - @peerTokens=(); - @peerText=(); - &readText_LCS($peerPath,\@peerText,$inputFormat,$lengthLimit,$byteLimit); - &tokenizeText_LCS(\@peerText,\@peerTokens); - #------------------------------------------------ - # create unigram for clipping - %peer_1grams=(); - &readText($peerPath,\$peerText1,$inputFormat,$lengthLimit,$byteLimit); - &createNGram($peerText1,\%peer_1grams,1); - if($debug) { - my $i; - print "***P $peerPath\n"; - print join("\n",@peerText),"\n"; - for($i=0;$i<=$#peerText;$i++) { - print $i,": ",join("|",@{$peerTokens[$i]}),"\n"; - } - } - foreach $modelPath (@$modelPaths) { - %tmp_peer_1grams=%peer_1grams; # renew peer unigram hash, so the peer count can be reset to the orignal number - @modelTokens=(); - @modelText=(); - &readText_LCS($modelPath,\@modelText,$inputFormat,$lengthLimit,$byteLimit); - &tokenizeText_LCS(\@modelText,\@modelTokens); - #------------------------------------------------ - # create unigram for clipping - %model_1grams=(); - &readText($modelPath,\$modelText1,$inputFormat,$lengthLimit,$byteLimit); - if(defined($opt_M)) { # only apply stemming on models - $opt_m=1; - } - &createNGram($modelText1,\%model_1grams,1); - if(defined($opt_M)) { # only apply 
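- # ROUGE-W notes for the weighted-LCS call below: a run of k consecutive
- # matches is credited wlcsWeight(k) = k ** $weightFactor rather than k, so
- # long in-sequence runs earn more than scattered single matches.  The raw
- # hit/base ratio is mapped back through wlcsWeightInverse(x) =
- # x ** (1 / $weightFactor) so the final score stays on the same 0..1 scale
- # as ROUGE-L.  With $weightFactor = 2, one run of 4 matches scores 16 while
- # four isolated matches score only 4.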
stemming on models - $opt_m=undef; - } - #------------------------------------------------ - # compute WLCS score - &wlcs(\@modelTokens,\@peerTokens,\$lcsHit,\$lcsScore,\$lcsBase,$weightFactor,\%model_1grams,\%tmp_peer_1grams); - # collect hit and count for each models - # This will effectively clip hit for each model; therefore would not give extra - # credit to reductant information contained in the peer summary. - # Previous method that lumps model text together and inflates the peer summary - # the number of references time would reward redundant information - if($scoreMode eq "A") { - $totalLCSHit+=$lcsHit; - $totalLCSCount+=&wlcsWeight($lcsBase,$weightFactor); - $totalLCSCountP+=&wlcsWeight($peer_1grams{"_cn_"},$weightFactor); - } - elsif($scoreMode eq "B") { - if($lcsScore>$lcsScoreBest) { - # only take a better score (i.e. better match) - $lcsScoreBest=$lcsScore; - $totalLCSHit=$lcsHit; - $totalLCSCount=&wlcsWeight($lcsBase,$weightFactor); - $totalLCSCountP=&wlcsWeight($peer_1grams{"_cn_"},$weightFactor); - } - } - else { - # use average mode - $totalLCSHit+=$lcsHit; - $totalLCSCount+=&wlcsWeight($lcsBase,$weightFactor); - $totalLCSCountP+=&wlcsWeight($peer_1grams{"_cn_"},$weightFactor); - } - if($debug) { - my $i; - print "***M $modelPath\n"; - print join("\n",@modelText),"\n"; - for($i=0;$i<=$#modelText;$i++) { - print $i,": ",join("|",@{$modelTokens[$i]}),"\n"; - } - } - } - # prepare score result for return - push(@$results,$totalLCSCount); # total number of ngrams - push(@$results,$totalLCSHit); - if($totalLCSCount!=0) { - $lcsScore=sprintf("%7.5f",&wlcsWeightInverse($totalLCSHit/$totalLCSCount,$weightFactor)); - } - else { - $lcsScore=sprintf("%7.5f",0); - } - push(@$results,$lcsScore); - push(@$results,$totalLCSCountP); # total number of token in peers - if($totalLCSCountP!=0) { - $lcsScoreP=sprintf("%7.5f",&wlcsWeightInverse($totalLCSHit/$totalLCSCountP,$weightFactor)); - } - else { - $lcsScoreP=sprintf("%7.5f",0); - } - push(@$results,$lcsScoreP); - if((1-$alpha)*$lcsScoreP+$alpha*$lcsScore>0) { - $lcsScoreF=sprintf("%7.5f",($lcsScoreP*$lcsScore)/((1-$alpha)*$lcsScoreP+$alpha*$lcsScore)); - } - else { - $lcsScoreF=sprintf("%7.5f",0); - } - push(@$results,$lcsScoreF); - if($debug) { - print "total ROUGE-W-$weightFactor model count: $totalLCSCount\n"; - print "total ROUGE-W-$weightFactor peer count: $totalLCSCountP\n"; - print "total ROUGE-W-$weightFactor hit: $totalLCSHit\n"; - print "total ROUGE-W-$weightFactor-R score: $lcsScore\n"; - print "total ROUGE-W-$weightFactor-P score: $lcsScoreP\n"; - print "total ROUGE-W-$weightFactor-F score: $lcsScoreF\n"; - } -} - -sub computeBEScore { - my $modelPaths=shift; - my $peerPath=shift; - my $results=shift; - my $BEMode=shift; - my $lengthLimit=shift; - my $byteLimit=shift; - my $inputFormat=shift; - my $scoreMode=shift; - my $alpha=shift; - my ($modelPath,@modelBEList,@peerBEList,$text,@tokens); - my (%model_BEs,%peer_BEs); - my ($BEHit,$BEScore,$BEScoreBest); - my ($totalBEHit,$totalBECount); - my ($BEScoreP,$BEScoreF,$totalBECountP); - - #------------------------------------------------ - # read model file and create model BE maps - $totalBEHit=0; - $totalBECount=0; - $BEScoreBest=-1; - $BEScoreP=0; # precision - $BEScoreF=0; # f-measure - $totalBECountP=0; - #------------------------------------------------ - # read peer file and create model n-BE maps - %peer_BEs=(); - @peerBEList=(); - &readBE($peerPath,\@peerBEList,$inputFormat); - &createBE(\@peerBEList,\%peer_BEs,$BEMode); - if($debug) { - print "***P $peerPath\n"; - 
if(scalar @peerBEList > 0) { -# print join("\n",@peerBEList); -# print "\n"; - print join("#",%peer_BEs),"\n"; - } - else { - print "---empty text---\n"; - } - } - foreach $modelPath (@$modelPaths) { - %model_BEs=(); - @modelBEList=(); - &readBE($modelPath,\@modelBEList,$inputFormat); - if(defined($opt_M)) { # only apply stemming on models - $opt_m=1; - } - &createBE(\@modelBEList,\%model_BEs,$BEMode); - if(defined($opt_M)) { # only apply stemming on models - $opt_m=undef; - } - if($debug) { - if(scalar @modelBEList > 0) { -# print join("\n",@modelBEList); -# print "\n"; - print join("#",%model_BEs),"\n"; - } - else { - print "---empty text---\n"; - } - } - #------------------------------------------------ - # compute BE score - &getBEScore(\%model_BEs,\%peer_BEs,\$BEHit,\$BEScore); - # collect hit and count for each models - # This will effectively clip hit for each model; therefore would not give extra - # credit to reducdant information contained in the peer summary. - if($scoreMode eq "A") { - $totalBEHit+=$BEHit; - $totalBECount+=$model_BEs{"_cn_"}; - $totalBECountP+=$peer_BEs{"_cn_"}; - } - elsif($scoreMode eq "B") { - if($BEScore>$BEScoreBest) { - # only take a better score (i.e. better match) - $BEScoreBest=$BEScore; - $totalBEHit=$BEHit; - $totalBECount=$model_BEs{"_cn_"}; - $totalBECountP=$peer_BEs{"_cn_"}; - } - } - else { - # use average mode - $totalBEHit+=$BEHit; - $totalBECount+=$model_BEs{"_cn_"}; - $totalBECountP+=$peer_BEs{"_cn_"}; - } - if($debug) { - print "***M $modelPath\n"; - } - } - # prepare score result for return - # uniBE - push(@$results,$totalBECount); # total number of nbes in models - push(@$results,$totalBEHit); - if($totalBECount!=0) { - $BEScore=sprintf("%7.5f",$totalBEHit/$totalBECount); - } - else { - $BEScore=sprintf("%7.5f",0); - } - push(@$results,$BEScore); - push(@$results,$totalBECountP); # total number of nBEs in peers - if($totalBECountP!=0) { - $BEScoreP=sprintf("%7.5f",$totalBEHit/$totalBECountP); - } - else { - $BEScoreP=sprintf("%7.5f",0); - } - push(@$results,$BEScoreP); # precision score - if((1-$alpha)*$BEScoreP+$alpha*$BEScore>0) { - $BEScoreF=sprintf("%7.5f",($BEScoreP*$BEScore)/((1-$alpha)*$BEScoreP+$alpha*$BEScore)); - } - else { - $BEScoreF=sprintf("%7.5f",0); - } - push(@$results,$BEScoreF); # f1-measure score - if($debug) { - print "total BE-$BEMode model count: $totalBECount\n"; - print "total BE-$BEMode peer count: $totalBECountP\n"; - print "total BE-$BEMode hit: $totalBEHit\n"; - print "total ROUGE-BE-$BEMode\-R: $BEScore\n"; - print "total ROUGE-BE-$BEMode\-P: $BEScoreP\n"; - print "total ROUGE-BE-$BEMode\-F: $BEScoreF\n"; - } -} - -sub readTextOld { - my $inPath=shift; - my $tokenizedText=shift; - my $type=shift; - my $lengthLimit=shift; - my $byteLimit=shift; - my ($text,$bsize,$wsize,@words,$done); - - $$tokenizedText=undef; - $bsize=0; - $wsize=0; - $done=0; - open(TEXT,$inPath)||die "Cannot open $inPath\n"; - if($type=~/^SEE$/oi) { - while(defined($line=)) { # SEE abstract format - if($line=~/^\[([0-9]+)\]<\/a>\s+([^<]+)/o) { - $text=$3; - $text=~tr/A-Z/a-z/; - &checkSummarySize($tokenizedText,\$text,\$wsize,\$bsize,\$done,$lengthLimit,$byteLimit); - } - } - } - elsif($type=~/^ISI$/oi) { # ISI standard sentence by sentence format - while(defined($line=)) { - if($line=~/^([^<]+)<\/S>/o) { - $text=$1; - $text=~tr/A-Z/a-z/; - &checkSummarySize($tokenizedText,\$text,\$wsize,\$bsize,\$done,$lengthLimit,$byteLimit); - } - } - } - elsif($type=~/^SPL$/oi) { # SPL one Sentence Per Line format - while(defined($line=)) { - 
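- # Input formats handled by the read* routines here: "SEE" (HTML abstracts
- # where each sentence follows a numbered [n] anchor, cf. the </a> in the
- # regex above), "ISI" (one sentence per <S>...</S> element) and "SPL"
- # (plain text, one sentence per line).  Text is lower-cased, and the
- # -l / -b options truncate the summary to the first $lengthLimit words or
- # $byteLimit bytes, cutting inside the sentence that crosses the limit.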
chomp($line); - $line=~s/^\s+//; - $line=~s/\s+$//; - if(defined($line)&&length($line)>0) { - $text=$line; - $text=~tr/A-Z/a-z/; - &checkSummarySize($tokenizedText,\$text,\$wsize,\$bsize,\$done,$lengthLimit,$byteLimit); - } - } - } - else { - close(TEXT); - die "Unknown input format: $type\n"; - } - close(TEXT); - if(defined($$tokenizedText)) { - $$tokenizedText=~s/\-/ \- /g; - $$tokenizedText=~s/[^A-Za-z0-9\-]/ /g; - $$tokenizedText=~s/^\s+//; - $$tokenizedText=~s/\s+$//; - $$tokenizedText=~s/\s+/ /g; - } - else { - print STDERR "readText: $inPath -> empty text\n"; - } - # print "($$tokenizedText)\n\n"; -} - -# enforce length cutoff at the file level -# convert different input format into SPL format then put them into -# tokenizedText -sub readText { - my $inPath=shift; - my $tokenizedText=shift; - my $type=shift; - my $lengthLimit=shift; - my $byteLimit=shift; - my ($text,$bsize,$wsize,@words,$done,@sntList); - - $$tokenizedText=undef; - $bsize=0; - $wsize=0; - $done=0; - @sntList=(); - open(TEXT,$inPath)||die "Cannot open $inPath\n"; - if($type=~/^SEE$/oi) { - while(defined($line=)) { # SEE abstract format - if($line=~/^\[([0-9]+)\]<\/a>\s+([^<]+)/o|| - $line=~/^\[([0-9]+)\]<\/a>\s+([^<]+)/o) { - $text=$2; - $text=~tr/A-Z/a-z/; - push(@sntList,$text); - } - } - } - elsif($type=~/^ISI$/oi) { # ISI standard sentence by sentence format - while(defined($line=)) { - if($line=~/^([^<]+)<\/S>/o) { - $text=$1; - $text=~tr/A-Z/a-z/; - push(@sntList,$text); - } - } - } - elsif($type=~/^SPL$/oi) { # SPL one Sentence Per Line format - while(defined($line=)) { - chomp($line); - if(defined($line)&&length($line)>0) { - $text=$line; - $text=~tr/A-Z/a-z/; - push(@sntList,$text); - } - } - } - else { - close(TEXT); - die "Unknown input format: $type\n"; - } - close(TEXT); - if($lengthLimit==0&&$byteLimit==0) { - $$tokenizedText=join(" ",@sntList); - } - elsif($lengthLimit!=0) { - my ($tmpText); - $tmpText=""; - $tmpTextLen=0; - foreach $s (@sntList) { - my ($sLen,@tokens); - @tokens=split(/\s+/,$s); - $sLen=scalar @tokens; - if($tmpTextLen+$sLen<$lengthLimit) { - if($tmpTextLen!=0) { - $tmpText.=" $s"; - } - else { - $tmpText.="$s"; - } - $tmpTextLen+=$sLen; - } - else { - if($tmpTextLen>0) { - $tmpText.=" "; - } - $tmpText.=join(" ",@tokens[0..$lengthLimit-$tmpTextLen-1]); - last; - } - } - if(length($tmpText)>0) { - $$tokenizedText=$tmpText; - } - } - elsif($byteLimit!=0) { - my ($tmpText); - $tmpText=""; - $tmpTextLen=0; - foreach $s (@sntList) { - my ($sLen); - $sLen=length($s); - if($tmpTextLen+$sLen<$byteLimit) { - if($tmpTextLen!=0) { - $tmpText.=" $s"; - } - else { - $tmpText.="$s"; - } - $tmpTextLen+=$sLen; - } - else { - if($tmpTextLen>0) { - $tmpText.=" "; - } - $tmpText.=substr($s,0,$byteLimit-$tmpTextLen); - last; - } - } - if(length($tmpText)>0) { - $$tokenizedText=$tmpText; - } - } - if(defined($$tokenizedText)) { - $$tokenizedText=~s/\-/ \- /g; - $$tokenizedText=~s/[^A-Za-z0-9\-]/ /g; - $$tokenizedText=~s/^\s+//; - $$tokenizedText=~s/\s+$//; - $$tokenizedText=~s/\s+/ /g; - } - else { - print STDERR "readText: $inPath -> empty text\n"; - } - # print "($$tokenizedText)\n\n"; -} - -sub readBE { - my $inPath=shift; - my $BEList=shift; - my $type=shift; - my ($line); - - open(TEXT,$inPath)||die "Cannot open $inPath\n"; - if(defined($opt_v)) { - print STDERR "$inPath\n"; - } - if($type=~/^SIMPLE$/oi) { - while(defined($line=)) { # Simple BE triple format - chomp($line); - push(@{$BEList},$line); - } - } - elsif($type=~/^ISI$/oi) { # ISI standard BE format - while(defined($line=)) { - # place 
holder - } - } - else { - close(TEXT); - die "Unknown input format: $type\n"; - } - close(TEXT); - if(scalar @{$BEList} ==0) { - print STDERR "readBE: $inPath -> empty text\n"; - } -} - -sub checkSummarySize { - my $tokenizedText=shift; - my $text=shift; - my $wsize=shift; - my $bsize=shift; - my $done=shift; - my $lenghtLimit=shift; - my $byteLimit=shift; - my (@words); - - @words=split(/\s+/,$$text); - if(($lengthLimit==0&&$byteLimit==0)|| - ($lengthLimit!=0&&(scalar @words)+$$wsize<=$lengthLimit)|| - ($byteLimit!=0&&length($$text)+$$bsize<=$byteLimit)) { - if(defined($$tokenizedText)) { - $$tokenizedText.=" $$text"; - } - else { - $$tokenizedText=$$text; - } - $$bsize+=length($$text); - $$wsize+=(scalar @words); - } - elsif($lengthLimit!=0&&(scalar @words)+$$wsize>$lengthLimit) { - if($$done==0) { - if(defined($$tokenizedText)) { - $$tokenizedText.=" "; - $$tokenizedText.=join(" ",@words[0..$lengthLimit-$$wsize-1]); - } - else { - $$tokenizedText=join(" ",@words[0..$lengthLimit-$$wsize-1]); - } - $$done=1; - } - } - elsif($byteLimit!=0&&length($$text)+$$bsize>$byteLimit) { - if($$done==0) { - if(defined($$tokenizedText)) { - $$tokenizedText.=" "; - $$tokenizedText.=substr($$text,0,$byteLimit-$$bsize); - } - else { - $$tokenizedText=substr($$text,0,$byteLimit-$$bsize); - - } - $$done=1; - } - } -} - -# LCS computing is based on unit and cannot lump all the text together -# as in computing ngram co-occurrences -sub readText_LCS { - my $inPath=shift; - my $tokenizedText=shift; - my $type=shift; - my $lengthLimit=shift; - my $byteLimit=shift; - my ($text,$t,$bsize,$wsize,$done,@sntList); - - @{$tokenizedText}=(); - $bsize=0; - $wsize=0; - $done=0; - @sntList=(); - open(TEXT,$inPath)||die "Cannot open $inPath\n"; - if($type=~/^SEE$/oi) { - while(defined($line=)) { # SEE abstract format - if($line=~/^\[([0-9]+)\]<\/a>\s+([^<]+)/o|| - $line=~/^\[([0-9]+)\]<\/a>\s+([^<]+)/o) { - $text=$2; - $text=~tr/A-Z/a-z/; - push(@sntList,$text); - } - } - } - elsif($type=~/^ISI$/oi) { # ISI standard sentence by sentence format - while(defined($line=)) { - if($line=~/^([^<]+)<\/S>/o) { - $text=$1; - $text=~tr/A-Z/a-z/; - push(@sntList,$text); - } - } - } - elsif($type=~/^SPL$/oi) { # SPL one Sentence Per Line format - while(defined($line=)) { - chomp($line); - if(defined($line)&&length($line)>0) { - $text=$line; - $text=~tr/A-Z/a-z/; - push(@sntList,$text); - } - } - } - else { - close(TEXT); - die "Unknown input format: $type\n"; - } - close(TEXT); - if($lengthLimit==0&&$byteLimit==0) { - @{$tokenizedText}=@sntList; - } - elsif($lengthLimit!=0) { - my ($tmpText); - $tmpText=""; - $tmpTextLen=0; - foreach $s (@sntList) { - my ($sLen,@tokens); - @tokens=split(/\s+/,$s); - $sLen=scalar @tokens; - if($tmpTextLen+$sLen<$lengthLimit) { - $tmpTextLen+=$sLen; - push(@{$tokenizedText},$s); - } - else { - push(@{$tokenizedText},join(" ",@tokens[0..$lengthLimit-$tmpTextLen-1])); - last; - } - } - } - elsif($byteLimit!=0) { - my ($tmpText); - $tmpText=""; - $tmpTextLen=0; - foreach $s (@sntList) { - my ($sLen); - $sLen=length($s); - if($tmpTextLen+$sLen<$byteLimit) { - push(@{$tokenizedText},$s); - } - else { - push(@{$tokenizedText},substr($s,0,$byteLimit-$tmpTextLen)); - last; - } - } - } - if(defined(@{$tokenizedText}>0)) { - for($t=0;$t<@{$tokenizedText};$t++) { - $tokenizedText->[$t]=~s/\-/ \- /g; - $tokenizedText->[$t]=~s/[^A-Za-z0-9\-]/ /g; - $tokenizedText->[$t]=~s/^\s+//; - $tokenizedText->[$t]=~s/\s+$//; - $tokenizedText->[$t]=~s/\s+/ /g; - } - } - else { - print STDERR "readText_LCS: $inPath -> empty 
text\n"; - } -} - -# LCS computing is based on unit and cannot lump all the text together -# as in computing ngram co-occurrences -sub readText_LCS_old { - my $inPath=shift; - my $tokenizedText=shift; - my $type=shift; - my $lengthLimit=shift; - my $byteLimit=shift; - my ($text,$t,$bsize,$wsize,$done); - - @{$tokenizedText}=(); - $bsize=0; - $wsize=0; - $done=0; - open(TEXT,$inPath)||die "Cannot open $inPath\n"; - if($type=~/^SEE$/oi) { - while(defined($line=)) { # SEE abstract format - if($line=~/^\[([0-9]+)\]<\/a>\s+([^<]+)/o) { - $text=$3; - $text=~tr/A-Z/a-z/; - &checkSummarySize_LCS($tokenizedText,\$text,\$wsize,\$bsize,\$done,$lengthLimit,$byteLimit); - } - } - } - elsif($type=~/^ISI$/oi) { # ISI standard sentence by sentence format - while(defined($line=)) { - if($line=~/^([^<]+)<\/S>/o) { - $text=$1; - $text=~tr/A-Z/a-z/; - &checkSummarySize_LCS($tokenizedText,\$text,\$wsize,\$bsize,\$done,$lengthLimit,$byteLimit); - } - } - } - elsif($type=~/^SPL$/oi) { # SPL one Sentence Per Line format - while(defined($line=)) { - chomp($line); - $line=~s/^\s+//; - $line=~s/\s+$//; - if(defined($line)&&length($line)>0) { - $text=$line; - $text=~tr/A-Z/a-z/; - &checkSummarySize_LCS($tokenizedText,\$text,\$wsize,\$bsize,\$done,$lengthLimit,$byteLimit); - } - } - } - else { - close(TEXT); - die "Unknown input format: $type\n"; - } - close(TEXT); - if(defined(@{$tokenizedText}>0)) { - for($t=0;$t<@{$tokenizedText};$t++) { - $tokenizedText->[$t]=~s/\-/ \- /g; - $tokenizedText->[$t]=~s/[^A-Za-z0-9\-]/ /g; - $tokenizedText->[$t]=~s/^\s+//; - $tokenizedText->[$t]=~s/\s+$//; - $tokenizedText->[$t]=~s/\s+/ /g; - } - } - else { - print STDERR "readText_LCS: $inPath -> empty text\n"; - } -} - -sub checkSummarySize_LCS { - my $tokenizedText=shift; - my $text=shift; - my $wsize=shift; - my $bsize=shift; - my $done=shift; - my $lenghtLimit=shift; - my $byteLimit=shift; - my (@words); - - @words=split(/\s+/,$$text); - if(($lengthLimit==0&&$byteLimit==0)|| - ($lengthLimit!=0&&(scalar @words)+$$wsize<=$lengthLimit)|| - ($byteLimit!=0&&length($$text)+$$bsize<=$byteLimit)) { - push(@{$tokenizedText},$$text); - $$bsize+=length($$text); - $$wsize+=(scalar @words); - } - elsif($lengthLimit!=0&&(scalar @words)+$$wsize>$lengthLimit) { - if($$done==0) { - push(@{$tokenizedText},$$text); - $$done=1; - } - } - elsif($byteLimit!=0&&length($$text)+$$bsize>$byteLimit) { - if($$done==0) { - push(@{$tokenizedText},$$text); - $$done=1; - } - } -} - -sub ngramScore { - my $model_grams=shift; - my $peer_grams=shift; - my $hit=shift; - my $score=shift; - my ($s,$t,@tokens); - - $$hit=0; - @tokens=keys (%$model_grams); - foreach $t (@tokens) { - if($t ne "_cn_") { - my $h; - $h=0; - if(exists($peer_grams->{$t})) { - $h=$peer_grams->{$t}<=$model_grams->{$t}? - $peer_grams->{$t}:$model_grams->{$t}; # clip - $$hit+=$h; - } - } - } - if($model_grams->{"_cn_"}!=0) { - $$score=sprintf("%07.5f",$$hit/$model_grams->{"_cn_"}); - } - else { - # no instance of n-gram at this length - $$score=0; - # die "model n-grams has zero instance\n"; - } -} - -sub skipBigramScore { - my $model_grams=shift; - my $peer_grams=shift; - my $hit=shift; - my $score=shift; - my ($s,$t,@tokens); - - $$hit=0; - @tokens=keys (%$model_grams); - foreach $t (@tokens) { - if($t ne "_cn_") { - my $h; - $h=0; - if(exists($peer_grams->{$t})) { - $h=$peer_grams->{$t}<=$model_grams->{$t}? 
- $peer_grams->{$t}:$model_grams->{$t}; # clip - $$hit+=$h; - } - } - } - if($model_grams->{"_cn_"}!=0) { - $$score=sprintf("%07.5f",$$hit/$model_grams->{"_cn_"}); - } - else { - # no instance of n-gram at this length - $$score=0; - # die "model n-grams has zero instance\n"; - } -} - -sub lcs { - my $model=shift; - my $peer=shift; - my $hit=shift; - my $score=shift; - my $base=shift; - my $model_1grams=shift; - my $peer_1grams=shift; - my ($i,$j,@hitMask,@LCS); - - $$hit=0; - $$base=0; - # compute LCS length for each model/peer pair - for($i=0;$i<@{$model};$i++) { - # use @hitMask to make sure multiple peer hit won't be counted as multiple hits - @hitMask=(); - for($j=0;$j<@{$model->[$i]};$j++) { - push(@hitMask,0); # initialize hit mask - } - $$base+=scalar @{$model->[$i]}; # add model length - for($j=0;$j<@{$peer};$j++) { - &lcs_inner($model->[$i],$peer->[$j],\@hitMask); - } - @LCS=(); - for($j=0;$j<@{$model->[$i]};$j++) { - if($hitMask[$j]==1) { - if(exists($model_1grams->{$model->[$i][$j]})&& - exists($peer_1grams->{$model->[$i][$j]})&& - $model_1grams->{$model->[$i][$j]}>0&& - $peer_1grams->{$model->[$i][$j]}>0) { - $$hit++; - #--------------------------------------------- - # bookkeeping to clip over counting - # everytime a hit is found it is deducted - # from both model and peer unigram count - # if a unigram count already involve in - # one LCS match then it will not be counted - # if it match another token in the model - # unit. This will make sure LCS score - # is always lower than unigram score - $model_1grams->{$model->[$i][$j]}--; - $peer_1grams->{$model->[$i][$j]}--; - push(@LCS,$model->[$i][$j]); - } - } - } - if($debug) { - print "LCS: "; - if(@LCS) { - print join(" ",@LCS),"\n"; - } - else { - print "-\n"; - } - } - } - if($$base>0) { - $$score=$$hit/$$base; - } - else { - $$score=0; - } -} - -sub lcs_inner { - my $model=shift; - my $peer=shift; - my $hitMask=shift; - my $m=scalar @$model; # length of model - my $n=scalar @$peer; # length of peer - my ($i,$j); - my (@c,@b); - - if(@{$model}==0) { - return; - } - @c=(); - @b=(); - # initialize boundary condition and - # the DP array - for($i=0;$i<=$m;$i++) { - push(@c,[]); - push(@b,[]); - for($j=0;$j<=$n;$j++) { - push(@{$c[$i]},0); - push(@{$b[$i]},0); - } - } - for($i=1;$i<=$m;$i++) { - for($j=1;$j<=$n;$j++) { - if($model->[$i-1] eq $peer->[$j-1]) { - # recursively solve the i-1 subproblem - $c[$i][$j]=$c[$i-1][$j-1]+1; - $b[$i][$j]="\\"; # go diagonal - } - elsif($c[$i-1][$j]>=$c[$i][$j-1]) { - $c[$i][$j]=$c[$i-1][$j]; - $b[$i][$j]="^"; # go up - } - else { - $c[$i][$j]=$c[$i][$j-1]; - $b[$i][$j]="<"; # go left - } - } - } - &markLCS($hitMask,\@b,$m,$n); -} - -sub wlcs { - my $model=shift; - my $peer=shift; - my $hit=shift; - my $score=shift; - my $base=shift; - my $weightFactor=shift; - my $model_1grams=shift; - my $peer_1grams=shift; - my ($i,$j,@hitMask,@LCS,$hitLen); - - $$hit=0; - $$base=0; - # compute LCS length for each model/peer pair - for($i=0;$i<@{$model};$i++) { - # use @hitMask to make sure multiple peer hit won't be counted as multiple hits - @hitMask=(); - for($j=0;$j<@{$model->[$i]};$j++) { - push(@hitMask,0); # initialize hit mask - } - $$base+=&wlcsWeight(scalar @{$model->[$i]},$weightFactor); # add model length - for($j=0;$j<@{$peer};$j++) { - &wlcs_inner($model->[$i],$peer->[$j],\@hitMask,$weightFactor); - } - @LCS=(); - $hitLen=0; - for($j=0;$j<@{$model->[$i]};$j++) { - if($hitMask[$j]==1) { - if(exists($model_1grams->{$model->[$i][$j]})&& - exists($peer_1grams->{$model->[$i][$j]})&& - 
$model_1grams->{$model->[$i][$j]}>0&& - $peer_1grams->{$model->[$i][$j]}>0) { - $hitLen++; - if($j+1<@{$model->[$i]}&&$hitMask[$j+1]==0) { - $$hit+=&wlcsWeight($hitLen,$weightFactor); - $hitLen=0; # reset hit length - } - elsif($j+1==@{$model->[$i]}) { - # end of sentence - $$hit+=&wlcsWeight($hitLen,$weightFactor); - $hitLen=0; # reset hit length - } - #--------------------------------------------- - # bookkeeping to clip over counting - # everytime a hit is found it is deducted - # from both model and peer unigram count - # if a unigram count already involve in - # one LCS match then it will not be counted - # if it match another token in the model - # unit. This will make sure LCS score - # is always lower than unigram score - $model_1grams->{$model->[$i][$j]}--; - $peer_1grams->{$model->[$i][$j]}--; - push(@LCS,$model->[$i][$j]); - } - } - } - if($debug) { - print "ROUGE-W: "; - if(@LCS) { - print join(" ",@LCS),"\n"; - } - else { - print "-\n"; - } - } - } - if($$base==0) { - $$base=1e-8; - } - $$score=wlcsWeightInverse($$hit/$$base,$weightFactor); -} - -sub wlcsWeight { - my $r=shift; - my $power=shift; - - return $r**$power; -} - -sub wlcsWeightInverse { - my $r=shift; - my $power=shift; - - return $r**(1/$power); -} - -sub wlcs_inner { - my $model=shift; - my $peer=shift; - my $hitMask=shift; - my $weightFactor=shift; - my $m=scalar @$model; # length of model - my $n=scalar @$peer; # length of peer - my ($i,$j); - my (@c,@b,@l); - - if(@{$model}==0) { - return; - } - @c=(); - @b=(); - @l=(); # the length of consecutive matches so far - # initialize boundary condition and - # the DP array - for($i=0;$i<=$m;$i++) { - push(@c,[]); - push(@b,[]); - push(@l,[]); - for($j=0;$j<=$n;$j++) { - push(@{$c[$i]},0); - push(@{$b[$i]},0); - push(@{$l[$i]},0); - } - } - for($i=1;$i<=$m;$i++) { - for($j=1;$j<=$n;$j++) { - if($model->[$i-1] eq $peer->[$j-1]) { - # recursively solve the i-1 subproblem - $k=$l[$i-1][$j-1]; - $c[$i][$j]=$c[$i-1][$j-1]+&wlcsWeight($k+1,$weightFactor)-&wlcsWeight($k,$weightFactor); - $b[$i][$j]="\\"; # go diagonal - $l[$i][$j]=$k+1; # extend the consecutive matching sequence - } - elsif($c[$i-1][$j]>=$c[$i][$j-1]) { - $c[$i][$j]=$c[$i-1][$j]; - $b[$i][$j]="^"; # go up - $l[$i][$j]=0; # no match at this position - } - else { - $c[$i][$j]=$c[$i][$j-1]; - $b[$i][$j]="<"; # go left - $l[$i][$j]=0; # no match at this position - } - } - } - &markLCS($hitMask,\@b,$m,$n); -} - -sub markLCS { - my $hitMask=shift; - my $b=shift; - my $i=shift; - my $j=shift; - - while($i!=0&&$j!=0) { - if($b->[$i][$j] eq "\\") { - $i--; - $j--; - $hitMask->[$i]=1; # mark current model position as a hit - } - elsif($b->[$i][$j] eq "^") { - $i--; - } - elsif($b->[$i][$j] eq "<") { - $j--; - } - else { - die "Illegal move in markLCS: ($i,$j): \"$b->[$i][$j]\".\n"; - } - } -} - -# currently only support simple lexical matching -sub getBEScore { - my $modelBEs=shift; - my $peerBEs=shift; - my $hit=shift; - my $score=shift; - my ($s,$t,@tokens); - - $$hit=0; - @tokens=keys (%$modelBEs); - foreach $t (@tokens) { - if($t ne "_cn_") { - my $h; - $h=0; - if(exists($peerBEs->{$t})) { - $h=$peerBEs->{$t}<=$modelBEs->{$t}? 
- $peerBEs->{$t}:$modelBEs->{$t}; # clip - $$hit+=$h; - if(defined($opt_v)) { - print "* Match: $t\n"; - } - } - } - } - if($modelBEs->{"_cn_"}!=0) { - $$score=sprintf("%07.5f",$$hit/$modelBEs->{"_cn_"}); - } - else { - # no instance of BE at this length - $$score=0; - # die "model BE has zero instance\n"; - } -} - -sub MorphStem { - my $token=shift; - my ($os,$ltoken); - - if(!defined($token)||length($token)==0) { - return undef; - } - - $ltoken=$token; - $ltoken=~tr/A-Z/a-z/; - if(exists($exceptiondb{$ltoken})) { - return $exceptiondb{$ltoken}; - } - $os=$ltoken; - return stem($os); -} - -sub createNGram { - my $text=shift; - my $g=shift; - my $NSIZE=shift; - my @mx_tokens=(); - my @m_tokens=(); - my ($i,$j); - my ($gram); - my ($count); - my ($byteSize); - - # remove stopwords - if($useStopwords) { - %stopwords=(); # consider stop words - } - unless(defined($text)) { - $g->{"_cn_"}=0; - return; - } - @mx_tokens=split(/\s+/,$text); - $byteSize=0; - for($i=0;$i<=$#mx_tokens;$i++) { - unless(exists($stopwords{$mx_tokens[$i]})) { - $byteSize+=length($mx_tokens[$i])+1; # the length of words in bytes so far + 1 space - if($mx_tokens[$i]=~/^[a-z0-9\$]/o) { - if(defined($opt_m)) { - # use stemmer - # only consider words starting with these characters - # use Porter stemmer - my $stem; - $stem=$mx_tokens[$i]; - if(length($stem)>3) { - push(@m_tokens,&MorphStem($stem)); - } - else { # no stemmer as default - push(@m_tokens,$mx_tokens[$i]); - } - } - else { # no stemmer - push(@m_tokens,$mx_tokens[$i]); - } - } - } - } - #------------------------------------- - # create ngram - $count=0; - for($i=0;$i<=$#m_tokens-$NSIZE+1;$i++) { - $gram=$m_tokens[$i]; - for($j=$i+1;$j<=$i+$NSIZE-1;$j++) { - $gram.=" $m_tokens[$j]"; - } - $count++; - unless(exists($g->{$gram})) { - $g->{$gram}=1; - } - else { - $g->{$gram}++; - } - } - # save total number of tokens - $g->{"_cn_"}=$count; -} - -sub createSkipBigram { - my $text=shift; - my $g=shift; - my $skipDistance=shift; - my @mx_tokens=(); - my @m_tokens=(); - my ($i,$j); - my ($gram); - my ($count); - my ($byteSize); - - # remove stopwords - if($useStopwords) { - %stopwords=(); # consider stop words - } - unless(defined($text)) { - $g->{"_cn_"}=0; - return; - } - @mx_tokens=split(/\s+/,$text); - $byteSize=0; - for($i=0;$i<=$#mx_tokens;$i++) { - unless(exists($stopwords{$mx_tokens[$i]})) { - $byteSize+=length($mx_tokens[$i])+1; # the length of words in bytes so far + 1 space - if($mx_tokens[$i]=~/^[a-z0-9\$]/o) { - if(defined($opt_m)) { - # use stemmer - # only consider words starting with these characters - # use Porter stemmer - my $stem; - $stem=$mx_tokens[$i]; - if(length($stem)>3) { - push(@m_tokens,&MorphStem($stem)); - } - else { # no stemmer as default - push(@m_tokens,$mx_tokens[$i]); - } - } - else { # no stemmer - push(@m_tokens,$mx_tokens[$i]); - } - } - } - } - #------------------------------------- - # create ngram - $count=0; - for($i=0;$i<$#m_tokens;$i++) { - if(defined($opt_u)) { - # add unigram count - $gram=$m_tokens[$i]; - $count++; - unless(exists($g->{$gram})) { - $g->{$gram}=1; - } - else { - $g->{$gram}++; - } - } - for($j=$i+1; - $j<=$#m_tokens&&($skipDistance<0||$j<=$i+$skipDistance+1); - $j++) { - $gram=$m_tokens[$i]; - $gram.=" $m_tokens[$j]"; - $count++; - unless(exists($g->{$gram})) { - $g->{$gram}=1; - } - else { - $g->{$gram}++; - } - } - } - # save total number of tokens - $g->{"_cn_"}=$count; -} - -sub createBE { - my $BEList=shift; - my $BEMap=shift; - my $BEMode=shift; - my ($i); - - $BEMap->{"_cn_"}=0; - unless(scalar 
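- # Basic Element (BE) input handled here: one BE per line in the form
- # HEAD|MODIFIER|RELATION, counted according to $BEMode:
- #   H    head only, when the modifier is "NIL"
- #   HM   head|modifier pairs whose modifier is not "NIL"
- #   HMR  head|modifier|relation triples whose relation is not "NIL"
- #   HM1  head|modifier pairs regardless of "NIL" fields
- #   HMR1 triples whose modifier is not "NIL" (the relation may be "NIL")
- #   HMR2 triples regardless of "NIL" fields
- # getBEScore() then clips counts exactly like the n-gram scorers.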
@{$BEList} > 0) { - return; - } - for($i=0;$i<=$#{$BEList};$i++) { - my (@fds); - my ($be,$stemH,$stemM); - $be=$BEList->[$i]; - $be=~tr/A-Z/a-z/; - @fds=split(/\|/,$be); - if(@fds!=3) { - print STDERR "Basic Element (BE) input file is invalid: *$be*\n"; - print STDERR "A BE file has to be in this format per line: HEAD|MODIFIER|RELATION\n"; - die "For more infomation about BE, go to: http://www.isi.edu/~cyl/BE\n"; - } - $stemH=$fds[0]; - $stemM=$fds[1]; - if(defined($opt_m)) { - # use stemmer - # only consider words starting with these characters - # use Porter stemmer - if(length($stemH)>3) { - $stemH=&MorphStemMulti($stemH); - } - if($stemM ne "NIL"&& - length($stemM)>3) { - $stemM=&MorphStemMulti($stemM); - } - } - if($BEMode eq "H"&& - $stemM eq "nil") { - unless(exists($BEMap->{$stemH})) { - $BEMap->{$stemH}=0; - } - $BEMap->{$stemH}++; - $BEMap->{"_cn_"}++; - } - elsif($BEMode eq "HM"&& - $stemM ne "nil") { - my $pair="$stemH|$stemM"; - unless(exists($BEMap->{$pair})) { - $BEMap->{$pair}=0; - } - $BEMap->{$pair}++; - $BEMap->{"_cn_"}++; - } - elsif($BEMode eq "HMR"&& - $fds[2] ne "nil") { - my $triple="$stemH|$stemM|$fds[2]"; - unless(exists($BEMap->{$triple})) { - $BEMap->{$triple}=0; - } - $BEMap->{$triple}++; - $BEMap->{"_cn_"}++; - } - elsif($BEMode eq "HM1") { - my $pair="$stemH|$stemM"; - unless(exists($BEMap->{$pair})) { - $BEMap->{$pair}=0; - } - $BEMap->{$pair}++; - $BEMap->{"_cn_"}++; - } - elsif($BEMode eq "HMR1"&& - $fds[1] ne "nil") { - # relation can be "NIL" but modifier has to have value - my $triple="$stemH|$stemM|$fds[2]"; - unless(exists($BEMap->{$triple})) { - $BEMap->{$triple}=0; - } - $BEMap->{$triple}++; - $BEMap->{"_cn_"}++; - } - elsif($BEMode eq "HMR2") { - # modifier and relation can be "NIL" - my $triple="$stemH|$stemM|$fds[2]"; - unless(exists($BEMap->{$triple})) { - $BEMap->{$triple}=0; - } - $BEMap->{$triple}++; - $BEMap->{"_cn_"}++; - } - } -} - -sub MorphStemMulti { - my $string=shift; - my (@tokens,@stems,$t,$i); - - @tokens=split(/\s+/,$string); - foreach $t (@tokens) { - if($t=~/[A-Za-z0-9]/o&& - $t!~/(-LRB-|-RRB-|-LSB-|-RSB-|-LCB-|-RCB-)/o) { - my $s; - if(defined($s=&MorphStem($t))) { - $t=$s; - } - push(@stems,$t); - } - else { - push(@stems,$t); - } - } - return join(" ",@stems); -} - -sub tokenizeText { - my $text=shift; - my $tokenizedText=shift; - my @mx_tokens=(); - my ($i,$byteSize); - - # remove stopwords - if($useStopwords) { - %stopwords=(); # consider stop words - } - unless(defined($text)) { - return; - } - @mx_tokens=split(/\s+/,$text); - $byteSize=0; - @{$tokenizedText}=(); - for($i=0;$i<=$#mx_tokens;$i++) { - unless(exists($stopwords{$mx_tokens[$i]})) { - $byteSize+=length($mx_tokens[$i])+1; # the length of words in bytes so far + 1 space - if($mx_tokens[$i]=~/^[a-z0-9\$]/o) { - if(defined($opt_m)) { - # use stemmer - # only consider words starting with these characters - # use Porter stemmer - my $stem; - $stem=$mx_tokens[$i]; - if(length($stem)>3) { - push(@{$tokenizedText},&MorphStem($stem)); - } - else { # no stemmer as default - push(@{$tokenizedText},$mx_tokens[$i]); - } - } - else { # no stemmer - push(@{$tokenizedText},$mx_tokens[$i]); - } - } - } - } -} - -sub tokenizeText_LCS { - my $text=shift; - my $tokenizedText=shift; - my $lengthLimit=shift; - my $byteLimit=shift; - my @mx_tokens=(); - my ($i,$byteSize,$t,$done); - - # remove stopwords - if($useStopwords) { - %stopwords=(); # consider stop words - } - if(@{$text}==0) { - return; - } - $byteSize=0; - @{$tokenizedText}=(); - $done=0; - 
for($t=0;$t<@{$text}&&$done==0;$t++) { - @mx_tokens=split(/\s+/,$text->[$t]); - # tokenized array for each separate unit (for example, sentence) - push(@{$tokenizedText},[]); - for($i=0;$i<=$#mx_tokens;$i++) { - unless(exists($stopwords{$mx_tokens[$i]})) { - $byteSize+=length($mx_tokens[$i])+1; # the length of words in bytes so far + 1 space - if($mx_tokens[$i]=~/^[a-z0-9\$]/o) { - if(defined($opt_m)) { - # use stemmer - # only consider words starting with these characters - # use Porter stemmer - my $stem; - $stem=$mx_tokens[$i]; - if(length($stem)>3) { - push(@{$tokenizedText->[$t]},&MorphStem($stem)); - } - else { # no stemmer as default - push(@{$tokenizedText->[$t]},$mx_tokens[$i]); - } - } - else { # no stemmer - push(@{$tokenizedText->[$t]},$mx_tokens[$i]); - } - } - } - } - } -} - -# Input file configuration is a list of peer/model pair for each evaluation -# instance. Each evaluation pair is in a line separated by white spaces -# characters. -sub readFileList { - my ($ROUGEEvals)=shift; - my ($ROUGEEvalIDs)=shift; - my ($ROUGEPeerIDTable)=shift; - my ($doc)=shift; - my ($evalID,$pair); - my ($inputFormat,$peerFile,$modelFile,$peerID,$modelID); - my (@files); - - $evalID=1; # automatically generated evaluation ID starting from 1 - $peerID=$systemID; - $modelID="M"; - unless(exists($ROUGEPeerIDTable->{$peerID})) { - $ROUGEPeerIDTable->{$peerID}=1; - } - while(defined($pair=<$doc>)) { - my ($peerPath,$modelPath); - if($pair!~/^\#/o&& - $pair!~/^\s*$/o) { # Lines start with '#' is a comment line - chomp($pair); - $pair=~s/^\s+//; - $pair=~s/\s+$//; - @files=split(/\s+/,$pair); - if(scalar @files < 2) { - die "File list has to have at least 2 filenames per line (peer model1 model2 ... modelN)\n"; - } - $peerFile=$files[0]; - unless(exists($ROUGEEvals->{$evalID})) { - $ROUGEEvals->{$evalID}={}; - push(@{$ROUGEEvalIDs},$evalID); - $ROUGEEvals->{$evalID}{"IF"}=$opt_z; - } - unless(exists($ROUGEPeerIDTable->{$peerID})) { - $ROUGEPeerIDTable->{$peerID}=1; # save peer ID for reference - } - if(exists($ROUGEEvals->{$evalID})) { - unless(exists($ROUGEEvals->{$evalID}{"Ps"})) { - $ROUGEEvals->{$evalID}{"Ps"}={}; - $ROUGEEvals->{$evalID}{"PIDList"}=[]; - } - push(@{$ROUGEEvals->{$evalID}{"PIDList"}},$peerID); # save peer IDs - } - else { - die "(PEERS) Evaluation database does not contain entry for this evaluation ID: $evalID\n"; - } - # remove leading and trailing newlines and - # spaces - if(exists($ROUGEEvals->{$evalID}{"Ps"})) { - $ROUGEEvals->{$evalID}{"Ps"}{$peerID}=$peerFile; # save peer filename - } - else { - die "(P) Evaluation database does not contain entry for this evaluation ID: $evalID\n"; - } - for($mid=1;$mid<=$#files;$mid++) { - $modelFile=$files[$mid]; - if(exists($ROUGEEvals->{$evalID})) { - unless(exists($ROUGEEvals->{$evalID}{"Ms"})) { - $ROUGEEvals->{$evalID}{"Ms"}={}; - $ROUGEEvals->{$evalID}{"MIDList"}=[]; - } - push(@{$ROUGEEvals->{$evalID}{"MIDList"}},"$modelID.$mid"); # save model IDs - } - else { - die "(MODELS) Evaluation database does not contain entry for this evaluation ID: $evalID\n"; - } - # remove leading and trailing newlines and - # spaces - if(exists($ROUGEEvals->{$evalID}{"Ms"})) { - $ROUGEEvals->{$evalID}{"Ms"}{"$modelID.$mid"}=$modelFile; # save peer filename - } - else { - die "(M) Evaluation database does not contain entry for this evaluation ID: $evalID\n"; - } - } - $evalID++; - } - } -} - -# read and parse ROUGE evaluation file -sub readEvals { - my ($ROUGEEvals)=shift; - my ($ROUGEEvalIDs)=shift; - my ($ROUGEPeerIDTable)=shift; - my ($node)=shift; 
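- # With the -z option, readEvals() below hands off to readFileList() above:
- # the configuration is then a plain list with one evaluation per line, the
- # peer (system) summary path first and one or more model (reference) paths
- # after it, separated by whitespace; lines starting with '#' are ignored
- # and evaluation IDs are numbered automatically from 1.  Hypothetical
- # example line:
- #   systems/task1.txt  refs/task1.A.txt  refs/task1.B.txt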
- my ($evalID)=shift; - my ($inputFormat,$peerRoot,$modelRoot,$peerFile,$modelFile,$peerID,$modelID); - - if(defined($opt_z)) { - # Input file configuration is a list of peer/model pair for each evaluation - # instance. Each evaluation pair is in a line separated by white spaces - # characters. - &readFileList($ROUGEEvals,$ROUGEEvalIDs,$ROUGEPeerIDTable,$node); - return; - } - # Otherwise, the input file is the standard ROUGE XML evaluation configuration - # file. - if($node->getNodeType==ELEMENT_NODE|| - $node->getNodeType==DOCUMENT_NODE) { - if($node->getNodeType==ELEMENT_NODE) { - $nodeName=$node->getNodeName; - if($nodeName=~/^EVAL$/oi) { - $evalID=$node->getAttributeNode("ID")->getValue; - unless(exists($ROUGEEvals->{$evalID})) { - $ROUGEEvals->{$evalID}={}; - push(@{$ROUGEEvalIDs},$evalID); - } - foreach my $child ($node->getChildNodes()) { - &readEvals($ROUGEEvals,$ROUGEEvalIDs,$ROUGEPeerIDTable,$child,$evalID); - } - } - elsif($nodeName=~/^INPUT-FORMAT$/oi) { - $inputFormat=$node->getAttributeNode("TYPE")->getValue; - if($inputFormat=~/^(SEE|ISI|SPL|SIMPLE)$/oi) { # SPL: one sentence per line - if(exists($ROUGEEvals->{$evalID})) { - $ROUGEEvals->{$evalID}{"IF"}=$inputFormat; - } - else { - die "(INPUT-FORMAT) Evaluation database does not contain entry for this evaluation ID: $evalID\n"; - } - } - else { - die "Unknown input type: $inputFormat\n"; - } - } - elsif($nodeName=~/^PEER-ROOT$/oi) { - foreach my $child ($node->getChildNodes()) { - if($child->getNodeType==TEXT_NODE) { - $peerRoot=$child->getData; - # remove leading and trailing newlines and - # spaces - $peerRoot=~s/^[\n\s]+//; - $peerRoot=~s/[\n\s]+$//; - if(exists($ROUGEEvals->{$evalID})) { - $ROUGEEvals->{$evalID}{"PR"}=$peerRoot; - } - else { - die "(PEER-ROOT) Evaluation database does not contain entry for this evaluation ID: $evalID\n"; - } - } - } - } - elsif($nodeName=~/^MODEL-ROOT$/oi) { - foreach my $child ($node->getChildNodes()) { - if($child->getNodeType==TEXT_NODE) { - $modelRoot=$child->getData; - # remove leading and trailing newlines and - # spaces - $modelRoot=~s/^[\n\s]+//; - $modelRoot=~s/[\n\s]+$//; - if(exists($ROUGEEvals->{$evalID})) { - $ROUGEEvals->{$evalID}{"MR"}=$modelRoot; - } - else { - die "(MODEL-ROOT) Evaluation database does not contain entry for this evaluation ID: $evalID\n"; - } - } - } - } - elsif($nodeName=~/^PEERS$/oi) { - foreach my $child ($node->getChildNodes()) { - if($child->getNodeType==ELEMENT_NODE&& - $child->getNodeName=~/^P$/oi) { - $peerID=$child->getAttributeNode("ID")->getValue; - unless(exists($ROUGEPeerIDTable->{$peerID})) { - $ROUGEPeerIDTable->{$peerID}=1; # save peer ID for reference - } - if(exists($ROUGEEvals->{$evalID})) { - unless(exists($ROUGEEvals->{$evalID}{"Ps"})) { - $ROUGEEvals->{$evalID}{"Ps"}={}; - $ROUGEEvals->{$evalID}{"PIDList"}=[]; - } - push(@{$ROUGEEvals->{$evalID}{"PIDList"}},$peerID); # save peer IDs - } - else { - die "(PEERS) Evaluation database does not contain entry for this evaluation ID: $evalID\n"; - } - foreach my $grandchild ($child->getChildNodes()) { - if($grandchild->getNodeType==TEXT_NODE) { - $peerFile=$grandchild->getData; - # remove leading and trailing newlines and - # spaces - $peerFile=~s/^[\n\s]+//; - $peerFile=~s/[\n\s]+$//; - if(exists($ROUGEEvals->{$evalID}{"Ps"})) { - $ROUGEEvals->{$evalID}{"Ps"}{$peerID}=$peerFile; # save peer filename - } - else { - die "(P) Evaluation database does not contain entry for this evaluation ID: $evalID\n"; - } - } - } - } - } - } - elsif($nodeName=~/^MODELS$/oi) { - foreach my $child 
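The XML layout being walked here is easier to see with a concrete example. The snippet below is not taken from the ROUGE distribution; it is reconstructed from the element and attribute names this parser looks for (EVAL/ID, INPUT-FORMAT/TYPE, PEER-ROOT, MODEL-ROOT, PEERS/P, MODELS/M). The root element name and all paths are assumptions.

# Hypothetical ROUGE evaluation configuration, reconstructed from the tags read above.
EXAMPLE_ROUGE_CONFIG = """
<ROUGE-EVAL version="1.0">
  <EVAL ID="1">
    <PEER-ROOT>systems</PEER-ROOT>
    <MODEL-ROOT>references</MODEL-ROOT>
    <INPUT-FORMAT TYPE="SPL" />
    <PEERS>
      <P ID="1">system1.spl</P>
    </PEERS>
    <MODELS>
      <M ID="A">reference1.spl</M>
    </MODELS>
  </EVAL>
</ROUGE-EVAL>
"""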
($node->getChildNodes()) { - if($child->getNodeType==ELEMENT_NODE&& - $child->getNodeName=~/^M$/oi) { - $modelID=$child->getAttributeNode("ID")->getValue; - if(exists($ROUGEEvals->{$evalID})) { - unless(exists($ROUGEEvals->{$evalID}{"Ms"})) { - $ROUGEEvals->{$evalID}{"Ms"}={}; - $ROUGEEvals->{$evalID}{"MIDList"}=[]; - } - push(@{$ROUGEEvals->{$evalID}{"MIDList"}},$modelID); # save model IDs - } - else { - die "(MODELS) Evaluation database does not contain entry for this evaluation ID: $evalID\n"; - } - foreach my $grandchild ($child->getChildNodes()) { - if($grandchild->getNodeType==TEXT_NODE) { - $modelFile=$grandchild->getData; - # remove leading and trailing newlines and - # spaces - $modelFile=~s/^[\n\s]+//; - $modelFile=~s/[\n\s]+$//; - if(exists($ROUGEEvals->{$evalID}{"Ms"})) { - $ROUGEEvals->{$evalID}{"Ms"}{$modelID}=$modelFile; # save peer filename - } - else { - die "(M) Evaluation database does not contain entry for this evaluation ID: $evalID\n"; - } - } - } - } - } - } - else { - foreach my $child ($node->getChildNodes()) { - &readEvals($ROUGEEvals,$ROUGEEvalIDs,$ROUGEPeerIDTable,$child,$evalID); - } - } - } - else { - foreach my $child ($node->getChildNodes()) { - &readEvals($ROUGEEvals,$ROUGEEvalIDs,$ROUGEPeerIDTable,$child,$evalID); - } - } - } - else { - if(defined($node->getChildNodes())) { - foreach my $child ($node->getChildNodes()) { - &readEvals($ROUGEEvals,$ROUGEEvalIDs,$ROUGEPeerIDTable,$child,$evalID); - } - } - } -} - -# Porter stemmer in Perl. Few comments, but it's easy to follow against the rules in the original -# paper, in -# -# Porter, 1980, An algorithm for suffix stripping, Program, Vol. 14, -# no. 3, pp 130-137, -# -# see also http://www.tartarus.org/~martin/PorterStemmer - -# Release 1 - -local %step2list; -local %step3list; -local ($c, $v, $C, $V, $mgr0, $meq1, $mgr1, $_v); - - -sub stem - { my ($stem, $suffix, $firstch); - my $w = shift; - if (length($w) < 3) { return $w; } # length at least 3 - # now map initial y to Y so that the patterns never treat it as vowel: - $w =~ /^./; $firstch = $&; - if ($firstch =~ /^y/) { $w = ucfirst $w; } - - # Step 1a - if ($w =~ /(ss|i)es$/) { $w=$`.$1; } - elsif ($w =~ /([^s])s$/) { $w=$`.$1; } - # Step 1b - if ($w =~ /eed$/) { if ($` =~ /$mgr0/o) { chop($w); } } - elsif ($w =~ /(ed|ing)$/) - { $stem = $`; - if ($stem =~ /$_v/o) - { $w = $stem; - if ($w =~ /(at|bl|iz)$/) { $w .= "e"; } - elsif ($w =~ /([^aeiouylsz])\1$/) { chop($w); } - elsif ($w =~ /^${C}${v}[^aeiouwxy]$/o) { $w .= "e"; } - } -} -# Step 1c - if ($w =~ /y$/) { $stem = $`; if ($stem =~ /$_v/o) { $w = $stem."i"; } } - -# Step 2 -if ($w =~ /(ational|tional|enci|anci|izer|bli|alli|entli|eli|ousli|ization|ation|ator|alism|iveness|fulness|ousness|aliti|iviti|biliti|logi)$/) - { $stem = $`; $suffix = $1; - if ($stem =~ /$mgr0/o) { $w = $stem . $step2list{$suffix}; } - } - -# Step 3 - -if ($w =~ /(icate|ative|alize|iciti|ical|ful|ness)$/) - { $stem = $`; $suffix = $1; - if ($stem =~ /$mgr0/o) { $w = $stem . $step3list{$suffix}; } - } - -# Step 4 - - # CYL: Modified 02/14/2004, a word ended in -ement will not try the rules "-ment" and "-ent" -# if ($w =~ /(al|ance|ence|er|ic|able|ible|ant|ement|ment|ent|ou|ism|ate|iti|ous|ive|ize)$/) -# elsif ($w =~ /(s|t)(ion)$/) -# { $stem = $` . 
$1; if ($stem =~ /$mgr1/o) { $w = $stem; } } - if ($w =~ /(al|ance|ence|er|ic|able|ible|ant|ement|ou|ism|ate|iti|ous|ive|ize)$/) - { $stem = $`; if ($stem =~ /$mgr1/o) { $w = $stem; } } - if ($w =~ /ment$/) - { $stem = $`; if ($stem =~ /$mgr1/o) { $w = $stem; } } - if ($w =~ /ent$/) - { $stem = $`; if ($stem =~ /$mgr1/o) { $w = $stem; } } - elsif ($w =~ /(s|t)(ion)$/) - { $stem = $` . $1; if ($stem =~ /$mgr1/o) { $w = $stem; } } - -# Step 5 - -if ($w =~ /e$/) - { $stem = $`; - if ($stem =~ /$mgr1/o or - ($stem =~ /$meq1/o and not $stem =~ /^${C}${v}[^aeiouwxy]$/o)) -{ $w = $stem; } -} -if ($w =~ /ll$/ and $w =~ /$mgr1/o) { chop($w); } - -# and turn initial Y back to y -if ($firstch =~ /^y/) { $w = lcfirst $w; } -return $w; -} - - sub initialise { - - %step2list = - ( 'ational'=>'ate', 'tional'=>'tion', 'enci'=>'ence', 'anci'=>'ance', 'izer'=>'ize', 'bli'=>'ble', - 'alli'=>'al', 'entli'=>'ent', 'eli'=>'e', 'ousli'=>'ous', 'ization'=>'ize', 'ation'=>'ate', - 'ator'=>'ate', 'alism'=>'al', 'iveness'=>'ive', 'fulness'=>'ful', 'ousness'=>'ous', 'aliti'=>'al', - 'iviti'=>'ive', 'biliti'=>'ble', 'logi'=>'log'); - - %step3list = - ('icate'=>'ic', 'ative'=>'', 'alize'=>'al', 'iciti'=>'ic', 'ical'=>'ic', 'ful'=>'', 'ness'=>''); - - - $c = "[^aeiou]"; # consonant - $v = "[aeiouy]"; # vowel - $C = "${c}[^aeiouy]*"; # consonant sequence - $V = "${v}[aeiou]*"; # vowel sequence - - $mgr0 = "^(${C})?${V}${C}"; # [C]VC... is m>0 - $meq1 = "^(${C})?${V}${C}(${V})?" . '$'; # [C]VC[V] is m=1 - $mgr1 = "^(${C})?${V}${C}${V}${C}"; # [C]VCVC... is m>1 - $_v = "^(${C})?${v}"; # vowel in stem - -} diff --git a/spaces/ali-ghamdan/deoldify/deoldify/loss.py b/spaces/ali-ghamdan/deoldify/deoldify/loss.py deleted file mode 100644 index b78caabb33133572cefaacf816468277ee7da18f..0000000000000000000000000000000000000000 --- a/spaces/ali-ghamdan/deoldify/deoldify/loss.py +++ /dev/null @@ -1,136 +0,0 @@ -from fastai import * -from fastai.core import * -from fastai.torch_core import * -from fastai.callbacks import hook_outputs -import torchvision.models as models - - -class FeatureLoss(nn.Module): - def __init__(self, layer_wgts=[20, 70, 10]): - super().__init__() - - self.m_feat = models.vgg16_bn(True).features.cuda().eval() - requires_grad(self.m_feat, False) - blocks = [ - i - 1 - for i, o in enumerate(children(self.m_feat)) - if isinstance(o, nn.MaxPool2d) - ] - layer_ids = blocks[2:5] - self.loss_features = [self.m_feat[i] for i in layer_ids] - self.hooks = hook_outputs(self.loss_features, detach=False) - self.wgts = layer_wgts - self.metric_names = ['pixel'] + [f'feat_{i}' for i in range(len(layer_ids))] - self.base_loss = F.l1_loss - - def _make_features(self, x, clone=False): - self.m_feat(x) - return [(o.clone() if clone else o) for o in self.hooks.stored] - - def forward(self, input, target): - out_feat = self._make_features(target, clone=True) - in_feat = self._make_features(input) - self.feat_losses = [self.base_loss(input, target)] - self.feat_losses += [ - self.base_loss(f_in, f_out) * w - for f_in, f_out, w in zip(in_feat, out_feat, self.wgts) - ] - - self.metrics = dict(zip(self.metric_names, self.feat_losses)) - return sum(self.feat_losses) - - def __del__(self): - self.hooks.remove() - - -# Refactored code, originally from https://github.com/VinceMarron/style_transfer -class WassFeatureLoss(nn.Module): - def __init__(self, layer_wgts=[5, 15, 2], wass_wgts=[3.0, 0.7, 0.01]): - super().__init__() - self.m_feat = models.vgg16_bn(True).features.cuda().eval() - requires_grad(self.m_feat, False) - blocks = [ - i - 
1 - for i, o in enumerate(children(self.m_feat)) - if isinstance(o, nn.MaxPool2d) - ] - layer_ids = blocks[2:5] - self.loss_features = [self.m_feat[i] for i in layer_ids] - self.hooks = hook_outputs(self.loss_features, detach=False) - self.wgts = layer_wgts - self.wass_wgts = wass_wgts - self.metric_names = ( - ['pixel'] - + [f'feat_{i}' for i in range(len(layer_ids))] - + [f'wass_{i}' for i in range(len(layer_ids))] - ) - self.base_loss = F.l1_loss - - def _make_features(self, x, clone=False): - self.m_feat(x) - return [(o.clone() if clone else o) for o in self.hooks.stored] - - def _calc_2_moments(self, tensor): - chans = tensor.shape[1] - tensor = tensor.view(1, chans, -1) - n = tensor.shape[2] - mu = tensor.mean(2) - tensor = (tensor - mu[:, :, None]).squeeze(0) - # Prevents nasty bug that happens very occassionally- divide by zero. Why such things happen? - if n == 0: - return None, None - cov = torch.mm(tensor, tensor.t()) / float(n) - return mu, cov - - def _get_style_vals(self, tensor): - mean, cov = self._calc_2_moments(tensor) - if mean is None: - return None, None, None - eigvals, eigvects = torch.symeig(cov, eigenvectors=True) - eigroot_mat = torch.diag(torch.sqrt(eigvals.clamp(min=0))) - root_cov = torch.mm(torch.mm(eigvects, eigroot_mat), eigvects.t()) - tr_cov = eigvals.clamp(min=0).sum() - return mean, tr_cov, root_cov - - def _calc_l2wass_dist( - self, mean_stl, tr_cov_stl, root_cov_stl, mean_synth, cov_synth - ): - tr_cov_synth = torch.symeig(cov_synth, eigenvectors=True)[0].clamp(min=0).sum() - mean_diff_squared = (mean_stl - mean_synth).pow(2).sum() - cov_prod = torch.mm(torch.mm(root_cov_stl, cov_synth), root_cov_stl) - var_overlap = torch.sqrt( - torch.symeig(cov_prod, eigenvectors=True)[0].clamp(min=0) + 1e-8 - ).sum() - dist = mean_diff_squared + tr_cov_stl + tr_cov_synth - 2 * var_overlap - return dist - - def _single_wass_loss(self, pred, targ): - mean_test, tr_cov_test, root_cov_test = targ - mean_synth, cov_synth = self._calc_2_moments(pred) - loss = self._calc_l2wass_dist( - mean_test, tr_cov_test, root_cov_test, mean_synth, cov_synth - ) - return loss - - def forward(self, input, target): - out_feat = self._make_features(target, clone=True) - in_feat = self._make_features(input) - self.feat_losses = [self.base_loss(input, target)] - self.feat_losses += [ - self.base_loss(f_in, f_out) * w - for f_in, f_out, w in zip(in_feat, out_feat, self.wgts) - ] - - styles = [self._get_style_vals(i) for i in out_feat] - - if styles[0][0] is not None: - self.feat_losses += [ - self._single_wass_loss(f_pred, f_targ) * w - for f_pred, f_targ, w in zip(in_feat, styles, self.wass_wgts) - ] - - self.metrics = dict(zip(self.metric_names, self.feat_losses)) - return sum(self.feat_losses) - - def __del__(self): - self.hooks.remove() diff --git a/spaces/aliabid94/AutoGPT/autogpt/permanent_memory/__init__.py b/spaces/aliabid94/AutoGPT/autogpt/permanent_memory/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/allknowingroger/Image-Models-Test99/README.md b/spaces/allknowingroger/Image-Models-Test99/README.md deleted file mode 100644 index 74736f79cb2be9de07176cd7a891ce1c33b15d29..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test99/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: More Image Models -emoji: 😻 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -duplicated_from: allknowingroger/Image-Models-Test98 ---- - - 
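As a usage sketch for the FeatureLoss / WassFeatureLoss classes in the deoldify loss module above (not code from that repository): both are ordinary nn.Module losses called with the generated image and the target image, and they expose a per-term breakdown through .metrics. A CUDA device is assumed, since the constructors move VGG16-BN to the GPU, and the tensors below are dummies.

# Illustrative only; batch contents and sizes are placeholders.
import torch

perceptual_loss = FeatureLoss(layer_wgts=[20, 70, 10])
generated = torch.rand(1, 3, 224, 224, device="cuda")   # generator output (dummy)
target = torch.rand(1, 3, 224, 224, device="cuda")      # ground-truth image (dummy)

loss = perceptual_loss(generated, target)    # pixel L1 + weighted VGG feature terms
print(loss.item(), perceptual_loss.metrics)  # keys: 'pixel', 'feat_0', 'feat_1', ...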
\ No newline at end of file diff --git a/spaces/allknowingroger/text-generation-webui-space-1/modules/models.py b/spaces/allknowingroger/text-generation-webui-space-1/modules/models.py deleted file mode 100644 index f4bb11fd3f7292657b008ab644b5be121d9980e5..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/text-generation-webui-space-1/modules/models.py +++ /dev/null @@ -1,168 +0,0 @@ -import json -import os -import time -import zipfile -from pathlib import Path - -import numpy as np -import torch -import transformers -from transformers import AutoModelForCausalLM, AutoTokenizer - -import modules.shared as shared - -transformers.logging.set_verbosity_error() - -local_rank = None - -if shared.args.flexgen: - from flexgen.flex_opt import (CompressionConfig, ExecutionEnv, OptLM, - Policy, str2bool) - -if shared.args.deepspeed: - import deepspeed - from transformers.deepspeed import (HfDeepSpeedConfig, - is_deepspeed_zero3_enabled) - - from modules.deepspeed_parameters import generate_ds_config - - # Distributed setup - local_rank = shared.args.local_rank if shared.args.local_rank is not None else int(os.getenv("LOCAL_RANK", "0")) - world_size = int(os.getenv("WORLD_SIZE", "1")) - torch.cuda.set_device(local_rank) - deepspeed.init_distributed() - ds_config = generate_ds_config(shared.args.bf16, 1 * world_size, shared.args.nvme_offload_dir) - dschf = HfDeepSpeedConfig(ds_config) # Keep this object alive for the Transformers integration - - -def load_model(model_name): - print(f"Loading {model_name}...") - t0 = time.time() - - shared.is_RWKV = model_name.lower().startswith('rwkv-') - - # Default settings - if not any([shared.args.cpu, shared.args.load_in_8bit, shared.args.gptq_bits, shared.args.auto_devices, shared.args.disk, shared.args.gpu_memory is not None, shared.args.cpu_memory is not None, shared.args.deepspeed, shared.args.flexgen, shared.is_RWKV]): - if any(size in shared.model_name.lower() for size in ('13b', '20b', '30b')): - model = AutoModelForCausalLM.from_pretrained(Path(f"models/{shared.model_name}"), device_map='auto', load_in_8bit=True) - else: - model = AutoModelForCausalLM.from_pretrained(Path(f"models/{shared.model_name}"), low_cpu_mem_usage=True, torch_dtype=torch.bfloat16 if shared.args.bf16 else torch.float16).cuda() - - # FlexGen - elif shared.args.flexgen: - # Initialize environment - env = ExecutionEnv.create(shared.args.disk_cache_dir) - - # Offloading policy - policy = Policy(1, 1, - shared.args.percent[0], shared.args.percent[1], - shared.args.percent[2], shared.args.percent[3], - shared.args.percent[4], shared.args.percent[5], - overlap=True, sep_layer=True, pin_weight=shared.args.pin_weight, - cpu_cache_compute=False, attn_sparsity=1.0, - compress_weight=shared.args.compress_weight, - comp_weight_config=CompressionConfig( - num_bits=4, group_size=64, - group_dim=0, symmetric=False), - compress_cache=False, - comp_cache_config=CompressionConfig( - num_bits=4, group_size=64, - group_dim=2, symmetric=False)) - - model = OptLM(f"facebook/{shared.model_name}", env, "models", policy) - - # DeepSpeed ZeRO-3 - elif shared.args.deepspeed: - model = AutoModelForCausalLM.from_pretrained(Path(f"models/{shared.model_name}"), torch_dtype=torch.bfloat16 if shared.args.bf16 else torch.float16) - model = deepspeed.initialize(model=model, config_params=ds_config, model_parameters=None, optimizer=None, lr_scheduler=None)[0] - model.module.eval() # Inference - print(f"DeepSpeed ZeRO-3 is enabled: {is_deepspeed_zero3_enabled()}") - - # RMKV model (not on 
HuggingFace) - elif shared.is_RWKV: - from modules.RWKV import RWKVModel, RWKVTokenizer - - model = RWKVModel.from_pretrained(Path(f'models/{model_name}'), dtype="fp32" if shared.args.cpu else "bf16" if shared.args.bf16 else "fp16", device="cpu" if shared.args.cpu else "cuda") - tokenizer = RWKVTokenizer.from_pretrained(Path('models')) - - return model, tokenizer - - # Quantized model - elif shared.args.gptq_bits > 0: - from modules.GPTQ_loader import load_quantized - - model = load_quantized(model_name) - - # Custom - else: - command = "AutoModelForCausalLM.from_pretrained" - params = ["low_cpu_mem_usage=True"] - if not shared.args.cpu and not torch.cuda.is_available(): - print("Warning: no GPU has been detected.\nFalling back to CPU mode.\n") - shared.args.cpu = True - - if shared.args.cpu: - params.append("low_cpu_mem_usage=True") - params.append("torch_dtype=torch.float32") - else: - params.append("device_map='auto'") - params.append("load_in_8bit=True" if shared.args.load_in_8bit else "torch_dtype=torch.bfloat16" if shared.args.bf16 else "torch_dtype=torch.float16") - - if shared.args.gpu_memory: - memory_map = shared.args.gpu_memory - max_memory = f"max_memory={{0: '{memory_map[0]}GiB'" - for i in range(1, len(memory_map)): - max_memory += (f", {i}: '{memory_map[i]}GiB'") - max_memory += (f", 'cpu': '{shared.args.cpu_memory or '99'}GiB'}}") - params.append(max_memory) - elif not shared.args.load_in_8bit: - total_mem = (torch.cuda.get_device_properties(0).total_memory/(1024*1024)) - suggestion = round((total_mem-1000)/1000)*1000 - if total_mem-suggestion < 800: - suggestion -= 1000 - suggestion = int(round(suggestion/1000)) - print(f"\033[1;32;1mAuto-assiging --gpu-memory {suggestion} for your GPU to try to prevent out-of-memory errors.\nYou can manually set other values.\033[0;37;0m") - params.append(f"max_memory={{0: '{suggestion}GiB', 'cpu': '{shared.args.cpu_memory or '99'}GiB'}}") - if shared.args.disk: - params.append(f"offload_folder='{shared.args.disk_cache_dir}'") - - command = f"{command}(Path(f'models/{shared.model_name}'), {', '.join(set(params))})" - model = eval(command) - - # Loading the tokenizer - if shared.model_name.lower().startswith(('gpt4chan', 'gpt-4chan', '4chan')) and Path("models/gpt-j-6B/").exists(): - tokenizer = AutoTokenizer.from_pretrained(Path("models/gpt-j-6B/")) - else: - tokenizer = AutoTokenizer.from_pretrained(Path(f"models/{shared.model_name}/")) - tokenizer.truncation_side = 'left' - - print(f"Loaded the model in {(time.time()-t0):.2f} seconds.") - return model, tokenizer - -def load_soft_prompt(name): - if name == 'None': - shared.soft_prompt = False - shared.soft_prompt_tensor = None - else: - with zipfile.ZipFile(Path(f'softprompts/{name}.zip')) as zf: - zf.extract('tensor.npy') - zf.extract('meta.json') - j = json.loads(open('meta.json', 'r').read()) - print(f"\nLoading the softprompt \"{name}\".") - for field in j: - if field != 'name': - if type(j[field]) is list: - print(f"{field}: {', '.join(j[field])}") - else: - print(f"{field}: {j[field]}") - print() - tensor = np.load('tensor.npy') - Path('tensor.npy').unlink() - Path('meta.json').unlink() - tensor = torch.Tensor(tensor).to(device=shared.model.device, dtype=shared.model.dtype) - tensor = torch.reshape(tensor, (1, tensor.shape[0], tensor.shape[1])) - - shared.soft_prompt = True - shared.soft_prompt_tensor = tensor - - return name diff --git a/spaces/amanmibra/void-demo-aisf/dataset.py b/spaces/amanmibra/void-demo-aisf/dataset.py deleted file mode 100644 index 
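Written out directly, the call that load_model() above assembles through eval() looks roughly like the sketch below. The model path and memory figures are placeholders, only the auto-device-map branch is shown, and the `accelerate` package is assumed to be installed; the real function also covers the CPU, 8-bit, FlexGen, DeepSpeed and RWKV paths.

# Rough equivalent of the string that load_model() builds and eval()s (hypothetical path).
from pathlib import Path
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    Path("models/my-model"),
    low_cpu_mem_usage=True,
    device_map="auto",
    torch_dtype=torch.float16,                 # or load_in_8bit=True for 8-bit weights
    max_memory={0: "10GiB", "cpu": "99GiB"},
)
tokenizer = AutoTokenizer.from_pretrained(Path("models/my-model"))
tokenizer.truncation_side = "left"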
b65edfa39944a71284edf1837d32b4d5e699798a..0000000000000000000000000000000000000000 --- a/spaces/amanmibra/void-demo-aisf/dataset.py +++ /dev/null @@ -1,103 +0,0 @@ -import os - -import torch -from torch.utils.data import Dataset -import pandas as pd -import torchaudio - -class VoiceDataset(Dataset): - - def __init__( - self, - data_directory, - transformation, - device, - target_sample_rate=48000, - time_limit_in_secs=5, - ): - # file processing - self._data_path = os.path.join(data_directory) - self._labels = os.listdir(self._data_path) - self.label_mapping = {label: i for i, label in enumerate(self._labels)} - self.audio_files_labels = self._join_audio_files() - - self.device = device - - # audio processing - self.transformation = transformation - self.target_sample_rate = target_sample_rate - self.num_samples = time_limit_in_secs * self.target_sample_rate - - # preprocess all wavs - self.wavs = self._process_wavs() - - def __len__(self): - return len(self.audio_files_labels) - - def __getitem__(self, index): - return self.wavs[index] - - def _process_wavs(self): - wavs = [] - for file, label in self.audio_files_labels: - filepath = os.path.join(self._data_path, label, file) - - # load wav - wav, sr = torchaudio.load(filepath, normalize=True) - - # modify wav file, if necessary - wav = wav.to(self.device) - wav = self._resample(wav, sr) - wav = self._mix_down(wav) - wav = self._cut_or_pad(wav) - - # apply transformation - wav = self.transformation(wav) - - wavs.append((wav, self.label_mapping[label])) - - return wavs - - - def _join_audio_files(self): - """Join all the audio file names and labels into one single dimenional array""" - audio_files_labels = [] - - for label in self._labels: - label_path = os.path.join(self._data_path, label) - for f in os.listdir(label_path): - audio_files_labels.append((f, label)) - - return audio_files_labels - - def _resample(self, wav, current_sample_rate): - """Resample audio to the target sample rate, if necessary""" - if current_sample_rate != self.target_sample_rate: - resampler = torchaudio.transforms.Resample(current_sample_rate, self.target_sample_rate) - wav = resampler(wav) - - return wav - - def _mix_down(self, wav): - """Mix down audio to a single channel, if necessary""" - if wav.shape[0] > 1: - wav = torch.mean(wav, dim=0, keepdim=True) - - return wav - - def _cut_or_pad(self, wav): - """Modify audio if number of samples != target number of samples of the dataset. - - If there are too many samples, cut the audio. - If there are not enough samples, pad the audio with zeros. - """ - - length_signal = wav.shape[1] - if length_signal > self.num_samples: - wav = wav[:, :self.num_samples] - elif length_signal < self.num_samples: - num_of_missing_samples = self.num_samples - length_signal - pad = (0, num_of_missing_samples) - wav = torch.nn.functional.pad(wav, pad) - - return wav diff --git a/spaces/amirDev/crowd-counting-p2p/models/vgg_.py b/spaces/amirDev/crowd-counting-p2p/models/vgg_.py deleted file mode 100644 index 5b0484be92d214b9717cc1f28bb65096ef7d7b1b..0000000000000000000000000000000000000000 --- a/spaces/amirDev/crowd-counting-p2p/models/vgg_.py +++ /dev/null @@ -1,196 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -""" -Mostly copy-paste from torchvision references. 
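A possible way to drive the VoiceDataset above (not from the original Space): one subdirectory per speaker label under the data directory, a torchaudio MelSpectrogram as the transformation, and a standard DataLoader. The directory path, mel parameters and batch size are placeholders.

# Illustrative usage; expects data/train/<label>/<clip>.wav on disk.
import torch
import torchaudio
from torch.utils.data import DataLoader

device = "cuda" if torch.cuda.is_available() else "cpu"
mel = torchaudio.transforms.MelSpectrogram(sample_rate=48000, n_mels=64).to(device)

dataset = VoiceDataset(
    data_directory="data/train",
    transformation=mel,
    device=device,
    target_sample_rate=48000,
    time_limit_in_secs=5,
)
spectrogram, label = dataset[0]          # (1, n_mels, frames) tensor, integer label
loader = DataLoader(dataset, batch_size=16, shuffle=True)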
-""" -import torch -import torch.nn as nn - - -__all__ = [ - 'VGG', 'vgg11', 'vgg11_bn', 'vgg13', 'vgg13_bn', 'vgg16', 'vgg16_bn', - 'vgg19_bn', 'vgg19', -] - - -model_urls = { - 'vgg11': 'https://download.pytorch.org/models/vgg11-bbd30ac9.pth', - 'vgg13': 'https://download.pytorch.org/models/vgg13-c768596a.pth', - 'vgg16': 'https://download.pytorch.org/models/vgg16-397923af.pth', - 'vgg19': 'https://download.pytorch.org/models/vgg19-dcbb9e9d.pth', - 'vgg11_bn': 'https://download.pytorch.org/models/vgg11_bn-6002323d.pth', - 'vgg13_bn': 'https://download.pytorch.org/models/vgg13_bn-abd245e5.pth', - 'vgg16_bn': 'https://download.pytorch.org/models/vgg16_bn-6c64b313.pth', - 'vgg19_bn': 'https://download.pytorch.org/models/vgg19_bn-c79401a0.pth', -} - - -model_paths = { - 'vgg16_bn': './vgg16_bn-6c64b313.pth', - 'vgg16': './weights/vgg16-397923af.pth', - -} - - -class VGG(nn.Module): - - def __init__(self, features, num_classes=1000, init_weights=True): - super(VGG, self).__init__() - self.features = features - self.avgpool = nn.AdaptiveAvgPool2d((7, 7)) - self.classifier = nn.Sequential( - nn.Linear(512 * 7 * 7, 4096), - nn.ReLU(True), - nn.Dropout(), - nn.Linear(4096, 4096), - nn.ReLU(True), - nn.Dropout(), - nn.Linear(4096, num_classes), - ) - if init_weights: - self._initialize_weights() - - def forward(self, x): - x = self.features(x) - x = self.avgpool(x) - x = torch.flatten(x, 1) - x = self.classifier(x) - return x - - def _initialize_weights(self): - for m in self.modules(): - if isinstance(m, nn.Conv2d): - nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - if m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.BatchNorm2d): - nn.init.constant_(m.weight, 1) - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.Linear): - nn.init.normal_(m.weight, 0, 0.01) - nn.init.constant_(m.bias, 0) - - -def make_layers(cfg, batch_norm=False, sync=False): - layers = [] - in_channels = 3 - for v in cfg: - if v == 'M': - layers += [nn.MaxPool2d(kernel_size=2, stride=2)] - else: - conv2d = nn.Conv2d(in_channels, v, kernel_size=3, padding=1) - if batch_norm: - if sync: - print('use sync backbone') - layers += [conv2d, nn.SyncBatchNorm(v), nn.ReLU(inplace=True)] - else: - layers += [conv2d, nn.BatchNorm2d(v), nn.ReLU(inplace=True)] - else: - layers += [conv2d, nn.ReLU(inplace=True)] - in_channels = v - return nn.Sequential(*layers) - - -cfgs = { - 'A': [64, 'M', 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'], - 'B': [64, 64, 'M', 128, 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'], - 'D': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M', 512, 512, 512, 'M', 512, 512, 512, 'M'], - 'E': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 256, 'M', 512, 512, 512, 512, 'M', 512, 512, 512, 512, 'M'], -} - - -def _vgg(arch, cfg, batch_norm, pretrained, progress, sync=False, **kwargs): - if pretrained: - kwargs['init_weights'] = False - model = VGG(make_layers(cfgs[cfg], batch_norm=batch_norm, sync=sync), **kwargs) - if pretrained: - state_dict = torch.load(model_paths[arch]) - model.load_state_dict(state_dict) - return model - - -def vgg11(pretrained=False, progress=True, **kwargs): - r"""VGG 11-layer model (configuration "A") from - `"Very Deep Convolutional Networks For Large-Scale Image Recognition" `_ - - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - progress (bool): If True, displays a progress bar of the download to stderr - """ - return _vgg('vgg11', 'A', False, pretrained, progress, **kwargs) - - -def 
vgg11_bn(pretrained=False, progress=True, **kwargs): - r"""VGG 11-layer model (configuration "A") with batch normalization - `"Very Deep Convolutional Networks For Large-Scale Image Recognition" `_ - - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - progress (bool): If True, displays a progress bar of the download to stderr - """ - return _vgg('vgg11_bn', 'A', True, pretrained, progress, **kwargs) - - -def vgg13(pretrained=False, progress=True, **kwargs): - r"""VGG 13-layer model (configuration "B") - `"Very Deep Convolutional Networks For Large-Scale Image Recognition" `_ - - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - progress (bool): If True, displays a progress bar of the download to stderr - """ - return _vgg('vgg13', 'B', False, pretrained, progress, **kwargs) - - -def vgg13_bn(pretrained=False, progress=True, **kwargs): - r"""VGG 13-layer model (configuration "B") with batch normalization - `"Very Deep Convolutional Networks For Large-Scale Image Recognition" `_ - - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - progress (bool): If True, displays a progress bar of the download to stderr - """ - return _vgg('vgg13_bn', 'B', True, pretrained, progress, **kwargs) - - -def vgg16(pretrained=False, progress=True, **kwargs): - r"""VGG 16-layer model (configuration "D") - `"Very Deep Convolutional Networks For Large-Scale Image Recognition" `_ - - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - progress (bool): If True, displays a progress bar of the download to stderr - """ - return _vgg('vgg16', 'D', False, pretrained, progress, **kwargs) - - -def vgg16_bn(pretrained=False, progress=True, sync=False, **kwargs): - r"""VGG 16-layer model (configuration "D") with batch normalization - `"Very Deep Convolutional Networks For Large-Scale Image Recognition" `_ - - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - progress (bool): If True, displays a progress bar of the download to stderr - """ - return _vgg('vgg16_bn', 'D', True, pretrained, progress, sync=sync, **kwargs) - - -def vgg19(pretrained=False, progress=True, **kwargs): - r"""VGG 19-layer model (configuration "E") - `"Very Deep Convolutional Networks For Large-Scale Image Recognition" `_ - - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - progress (bool): If True, displays a progress bar of the download to stderr - """ - return _vgg('vgg19', 'E', False, pretrained, progress, **kwargs) - - -def vgg19_bn(pretrained=False, progress=True, **kwargs): - r"""VGG 19-layer model (configuration 'E') with batch normalization - `"Very Deep Convolutional Networks For Large-Scale Image Recognition" `_ - - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - progress (bool): If True, displays a progress bar of the download to stderr - """ - return _vgg('vgg19_bn', 'E', True, pretrained, progress, **kwargs) diff --git a/spaces/analyticsinmotion/word-error-rate/app.py b/spaces/analyticsinmotion/word-error-rate/app.py deleted file mode 100644 index c79d0ac44f15925df7f362ab354586ec65551062..0000000000000000000000000000000000000000 --- a/spaces/analyticsinmotion/word-error-rate/app.py +++ /dev/null @@ -1,25 +0,0 @@ -import werpy -import gradio as gr - -def word_error_rate(reference, hypothesis): - normalized_reference = werpy.normalize(reference) - normalized_hypothesis = werpy.normalize(hypothesis) - wer_result = werpy.wer(normalized_reference, 
normalized_hypothesis) - return wer_result - -title = "Word Error Rate Calculator" -description = "A simple application to quickly calculate the Word Error Rate (WER) powered by werpy." - -# Define the input and output interfaces -input_reference = gr.Textbox(lines=2, label="Input Reference Text") -input_hypothesis = gr.Textbox(lines=2, label="Input Hypothesis Text") -output_wer = gr.Number(label="Word Error Rate") - -iface = gr.Interface( - fn = word_error_rate, - inputs = [input_reference, input_hypothesis], - outputs = output_wer, - title = title, - description = description -) -iface.launch() \ No newline at end of file diff --git a/spaces/antonovmaxim/text-generation-webui-space/modules/LoRA.py b/spaces/antonovmaxim/text-generation-webui-space/modules/LoRA.py deleted file mode 100644 index 08bf5b88c6d80e6b4c597942e34373c9b4c99bb4..0000000000000000000000000000000000000000 --- a/spaces/antonovmaxim/text-generation-webui-space/modules/LoRA.py +++ /dev/null @@ -1,55 +0,0 @@ -import logging -from pathlib import Path - -import torch -from peft import PeftModel - -import modules.shared as shared - - -def add_lora_to_model(lora_names): - prior_set = set(shared.lora_names) - added_set = set(lora_names) - prior_set - removed_set = prior_set - set(lora_names) - shared.lora_names = list(lora_names) - - # If no LoRA needs to be added or removed, exit - if len(added_set) == 0 and len(removed_set) == 0: - return - - # Add a LoRA when another LoRA is already present - if len(removed_set) == 0 and len(prior_set) > 0: - logging.info(f"Adding the LoRA(s) named {added_set} to the model...") - for lora in added_set: - shared.model.load_adapter(Path(f"{shared.args.lora_dir}/{lora}"), lora) - - return - - # If any LoRA needs to be removed, start over - if len(removed_set) > 0: - shared.model.disable_adapter() - shared.model = shared.model.base_model.model - - if len(lora_names) > 0: - logging.info("Applying the following LoRAs to {}: {}".format(shared.model_name, ', '.join(lora_names))) - params = {} - if not shared.args.cpu: - params['dtype'] = shared.model.dtype - if hasattr(shared.model, "hf_device_map"): - params['device_map'] = {"base_model.model." 
+ k: v for k, v in shared.model.hf_device_map.items()} - elif shared.args.load_in_8bit: - params['device_map'] = {'': 0} - - shared.model = PeftModel.from_pretrained(shared.model, Path(f"{shared.args.lora_dir}/{lora_names[0]}"), **params) - - for lora in lora_names[1:]: - shared.model.load_adapter(Path(f"{shared.args.lora_dir}/{lora}"), lora) - - if not shared.args.load_in_8bit and not shared.args.cpu: - shared.model.half() - if not hasattr(shared.model, "hf_device_map"): - if torch.has_mps: - device = torch.device('mps') - shared.model = shared.model.to(device) - else: - shared.model = shared.model.cuda() diff --git a/spaces/antonovmaxim/text-generation-webui-space/text-generation-webui-main/docs/System-requirements.md b/spaces/antonovmaxim/text-generation-webui-space/text-generation-webui-main/docs/System-requirements.md deleted file mode 100644 index 3a88416d34ad7c8babd90a81db902e95288a8197..0000000000000000000000000000000000000000 --- a/spaces/antonovmaxim/text-generation-webui-space/text-generation-webui-main/docs/System-requirements.md +++ /dev/null @@ -1,42 +0,0 @@ -These are the VRAM and RAM requirements (in MiB) to run some examples of models **in 16-bit (default) precision**: - -| model | VRAM (GPU) | RAM | -|:-----------------------|-------------:|--------:| -| arxiv_ai_gpt2 | 1512.37 | 5824.2 | -| blenderbot-1B-distill | 2441.75 | 4425.91 | -| opt-1.3b | 2509.61 | 4427.79 | -| gpt-neo-1.3b | 2605.27 | 5851.58 | -| opt-2.7b | 5058.05 | 4863.95 | -| gpt4chan_model_float16 | 11653.7 | 4437.71 | -| gpt-j-6B | 11653.7 | 5633.79 | -| galactica-6.7b | 12697.9 | 4429.89 | -| opt-6.7b | 12700 | 4368.66 | -| bloomz-7b1-p3 | 13483.1 | 4470.34 | - -#### GPU mode with 8-bit precision - -Allows you to load models that would not normally fit into your GPU. Enabled by default for 13b and 20b models in this web UI. - -| model | VRAM (GPU) | RAM | -|:---------------|-------------:|--------:| -| opt-13b | 12528.1 | 1152.39 | -| gpt-neox-20b | 20384 | 2291.7 | - -#### CPU mode (32-bit precision) - -A lot slower, but does not require a GPU. - -On my i5-12400F, 6B models take around 10-20 seconds to respond in chat mode, and around 5 minutes to generate a 200 tokens completion. 
- -| model | RAM | -|:-----------------------|---------:| -| arxiv_ai_gpt2 | 4430.82 | -| gpt-neo-1.3b | 6089.31 | -| opt-1.3b | 8411.12 | -| blenderbot-1B-distill | 8508.16 | -| opt-2.7b | 14969.3 | -| bloomz-7b1-p3 | 21371.2 | -| gpt-j-6B | 24200.3 | -| gpt4chan_model | 24246.3 | -| galactica-6.7b | 26561.4 | -| opt-6.7b | 29596.6 | diff --git a/spaces/antonovmaxim/text-generation-webui-space/text-generation-webui-main/modules/LoRA.py b/spaces/antonovmaxim/text-generation-webui-space/text-generation-webui-main/modules/LoRA.py deleted file mode 100644 index f996289a5646f659ac83dd4689d3f036ca23b57f..0000000000000000000000000000000000000000 --- a/spaces/antonovmaxim/text-generation-webui-space/text-generation-webui-main/modules/LoRA.py +++ /dev/null @@ -1,54 +0,0 @@ -import logging -from pathlib import Path - -import torch -from peft import PeftModel - -import modules.shared as shared - - -def add_lora_to_model(lora_names): - prior_set = set(shared.lora_names) - added_set = set(lora_names) - prior_set - removed_set = prior_set - set(lora_names) - shared.lora_names = list(lora_names) - - # If no LoRA needs to be added or removed, exit - if len(added_set) == 0 and len(removed_set) == 0: - return - - # Add a LoRA when another LoRA is already present - if len(removed_set) == 0 and len(prior_set) > 0: - logging.info(f"Adding the LoRA(s) named {added_set} to the model...") - for lora in added_set: - shared.model.load_adapter(Path(f"{shared.args.lora_dir}/{lora}"), lora) - - return - - # If any LoRA needs to be removed, start over - if len(removed_set) > 0: - shared.model.disable_adapter() - - if len(lora_names) > 0: - logging.info("Applying the following LoRAs to {}: {}".format(shared.model_name, ', '.join(lora_names))) - params = {} - if not shared.args.cpu: - params['dtype'] = shared.model.dtype - if hasattr(shared.model, "hf_device_map"): - params['device_map'] = {"base_model.model." + k: v for k, v in shared.model.hf_device_map.items()} - elif shared.args.load_in_8bit: - params['device_map'] = {'': 0} - - shared.model = PeftModel.from_pretrained(shared.model, Path(f"{shared.args.lora_dir}/{lora_names[0]}"), **params) - - for lora in lora_names[1:]: - shared.model.load_adapter(Path(f"{shared.args.lora_dir}/{lora}"), lora) - - if not shared.args.load_in_8bit and not shared.args.cpu: - shared.model.half() - if not hasattr(shared.model, "hf_device_map"): - if torch.has_mps: - device = torch.device('mps') - shared.model = shared.model.to(device) - else: - shared.model = shared.model.cuda() diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/server/templates/index.html b/spaces/artificialguybr/video-dubbing/TTS/TTS/server/templates/index.html deleted file mode 100644 index 6354d3919d9a1e9c1e22e9866c84c4eb8284bc13..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/TTS/server/templates/index.html +++ /dev/null @@ -1,154 +0,0 @@ - - - - - - - - - - - TTS engine - - - - - - - - - - Fork me on GitHub - - - - - -
      -
      -
      - - -
        -
      - - {%if use_gst%} - - {%endif%} - - -

      - - {%if use_multi_speaker%} - Choose a speaker: -

      - {%endif%} - - {%if use_multi_language%} - Choose a language: -

      - {%endif%} - - - {%if show_details%} -

      - {%endif%} - -

      -
      -
      -
      - - - - - - - \ No newline at end of file diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/vocoder/models/melgan_discriminator.py b/spaces/artificialguybr/video-dubbing/TTS/TTS/vocoder/models/melgan_discriminator.py deleted file mode 100644 index 14f00c5927cb28449c4fb0dc0727cde014370c2b..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/TTS/vocoder/models/melgan_discriminator.py +++ /dev/null @@ -1,84 +0,0 @@ -import numpy as np -from torch import nn -from torch.nn.utils import weight_norm - - -class MelganDiscriminator(nn.Module): - def __init__( - self, - in_channels=1, - out_channels=1, - kernel_sizes=(5, 3), - base_channels=16, - max_channels=1024, - downsample_factors=(4, 4, 4, 4), - groups_denominator=4, - ): - super().__init__() - self.layers = nn.ModuleList() - - layer_kernel_size = np.prod(kernel_sizes) - layer_padding = (layer_kernel_size - 1) // 2 - - # initial layer - self.layers += [ - nn.Sequential( - nn.ReflectionPad1d(layer_padding), - weight_norm(nn.Conv1d(in_channels, base_channels, layer_kernel_size, stride=1)), - nn.LeakyReLU(0.2, inplace=True), - ) - ] - - # downsampling layers - layer_in_channels = base_channels - for downsample_factor in downsample_factors: - layer_out_channels = min(layer_in_channels * downsample_factor, max_channels) - layer_kernel_size = downsample_factor * 10 + 1 - layer_padding = (layer_kernel_size - 1) // 2 - layer_groups = layer_in_channels // groups_denominator - self.layers += [ - nn.Sequential( - weight_norm( - nn.Conv1d( - layer_in_channels, - layer_out_channels, - kernel_size=layer_kernel_size, - stride=downsample_factor, - padding=layer_padding, - groups=layer_groups, - ) - ), - nn.LeakyReLU(0.2, inplace=True), - ) - ] - layer_in_channels = layer_out_channels - - # last 2 layers - layer_padding1 = (kernel_sizes[0] - 1) // 2 - layer_padding2 = (kernel_sizes[1] - 1) // 2 - self.layers += [ - nn.Sequential( - weight_norm( - nn.Conv1d( - layer_out_channels, - layer_out_channels, - kernel_size=kernel_sizes[0], - stride=1, - padding=layer_padding1, - ) - ), - nn.LeakyReLU(0.2, inplace=True), - ), - weight_norm( - nn.Conv1d( - layer_out_channels, out_channels, kernel_size=kernel_sizes[1], stride=1, padding=layer_padding2 - ) - ), - ] - - def forward(self, x): - feats = [] - for layer in self.layers: - x = layer(x) - feats.append(x) - return x, feats diff --git a/spaces/artificialguybr/video-dubbing/TTS/recipes/bel-alex73/train_hifigan.py b/spaces/artificialguybr/video-dubbing/TTS/recipes/bel-alex73/train_hifigan.py deleted file mode 100644 index 3e740b2ff400ab8f8815d3958bae9d6664c49142..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/recipes/bel-alex73/train_hifigan.py +++ /dev/null @@ -1,60 +0,0 @@ -import os - -from coqpit import Coqpit -from trainer import Trainer, TrainerArgs - -from TTS.tts.configs.shared_configs import BaseAudioConfig -from TTS.utils.audio import AudioProcessor -from TTS.vocoder.configs.hifigan_config import * -from TTS.vocoder.datasets.preprocess import load_wav_data -from TTS.vocoder.models.gan import GAN - -output_path = "/storage/output-hifigan/" - -audio_config = BaseAudioConfig( - mel_fmin=50, - mel_fmax=8000, - hop_length=256, - stats_path="/storage/TTS/scale_stats.npy", -) - -config = HifiganConfig( - batch_size=74, - eval_batch_size=16, - num_loader_workers=8, - num_eval_loader_workers=8, - lr_disc=0.0002, - lr_gen=0.0002, - run_eval=True, - test_delay_epochs=5, - epochs=1000, - use_noise_augment=True, - seq_len=8192, - 
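A quick, illustrative shape check for the MelganDiscriminator defined above: it takes a batch of raw waveforms shaped (batch, 1, samples) and returns the final score map together with every intermediate feature map (one per layer), which is what feature-matching losses consume. The input length below is arbitrary.

# Dummy forward pass through the discriminator above (illustrative only).
import torch

disc = MelganDiscriminator()             # default kernels and the 4x downsampling stack
wav = torch.randn(2, 1, 16384)           # batch of 2 dummy waveforms
score, feats = disc(wav)
print(score.shape, [f.shape for f in feats])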
pad_short=2000, - save_step=5000, - print_step=50, - print_eval=True, - mixed_precision=False, - eval_split_size=30, - save_n_checkpoints=2, - save_best_after=5000, - data_path="/storage/filtered_dataset", - output_path=output_path, - audio=audio_config, -) - -# init audio processor -ap = AudioProcessor.init_from_config(config) - -# load training samples -print("config.eval_split_size = ", config.eval_split_size) -eval_samples, train_samples = load_wav_data(config.data_path, config.eval_split_size) - -# init model -model = GAN(config, ap) - -# init the trainer and 🚀 -trainer = Trainer( - TrainerArgs(), config, output_path, model=model, train_samples=train_samples, eval_samples=eval_samples -) -trainer.fit() diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/anyio/from_thread.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/anyio/from_thread.py deleted file mode 100644 index e4f871fdeb421d8ae2dd1c60a7f3c547cef50c3e..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/anyio/from_thread.py +++ /dev/null @@ -1,502 +0,0 @@ -import threading -from asyncio import iscoroutine -from concurrent.futures import FIRST_COMPLETED, Future, ThreadPoolExecutor, wait -from contextlib import AbstractContextManager, contextmanager -from types import TracebackType -from typing import ( - Any, - AsyncContextManager, - Callable, - ContextManager, - Coroutine, - Dict, - Generator, - Iterable, - Optional, - Tuple, - Type, - TypeVar, - Union, - cast, - overload, -) -from warnings import warn - -from ._core import _eventloop -from ._core._eventloop import get_asynclib, get_cancelled_exc_class, threadlocals -from ._core._synchronization import Event -from ._core._tasks import CancelScope, create_task_group -from .abc._tasks import TaskStatus - -T_Retval = TypeVar("T_Retval") -T_co = TypeVar("T_co") - - -def run(func: Callable[..., Coroutine[Any, Any, T_Retval]], *args: object) -> T_Retval: - """ - Call a coroutine function from a worker thread. - - :param func: a coroutine function - :param args: positional arguments for the callable - :return: the return value of the coroutine function - - """ - try: - asynclib = threadlocals.current_async_module - except AttributeError: - raise RuntimeError("This function can only be run from an AnyIO worker thread") - - return asynclib.run_async_from_thread(func, *args) - - -def run_async_from_thread( - func: Callable[..., Coroutine[Any, Any, T_Retval]], *args: object -) -> T_Retval: - warn( - "run_async_from_thread() has been deprecated, use anyio.from_thread.run() instead", - DeprecationWarning, - ) - return run(func, *args) - - -def run_sync(func: Callable[..., T_Retval], *args: object) -> T_Retval: - """ - Call a function in the event loop thread from a worker thread. 
- - :param func: a callable - :param args: positional arguments for the callable - :return: the return value of the callable - - """ - try: - asynclib = threadlocals.current_async_module - except AttributeError: - raise RuntimeError("This function can only be run from an AnyIO worker thread") - - return asynclib.run_sync_from_thread(func, *args) - - -def run_sync_from_thread(func: Callable[..., T_Retval], *args: object) -> T_Retval: - warn( - "run_sync_from_thread() has been deprecated, use anyio.from_thread.run_sync() instead", - DeprecationWarning, - ) - return run_sync(func, *args) - - -class _BlockingAsyncContextManager(AbstractContextManager): - _enter_future: Future - _exit_future: Future - _exit_event: Event - _exit_exc_info: Tuple[ - Optional[Type[BaseException]], Optional[BaseException], Optional[TracebackType] - ] = (None, None, None) - - def __init__(self, async_cm: AsyncContextManager[T_co], portal: "BlockingPortal"): - self._async_cm = async_cm - self._portal = portal - - async def run_async_cm(self) -> Optional[bool]: - try: - self._exit_event = Event() - value = await self._async_cm.__aenter__() - except BaseException as exc: - self._enter_future.set_exception(exc) - raise - else: - self._enter_future.set_result(value) - - try: - # Wait for the sync context manager to exit. - # This next statement can raise `get_cancelled_exc_class()` if - # something went wrong in a task group in this async context - # manager. - await self._exit_event.wait() - finally: - # In case of cancellation, it could be that we end up here before - # `_BlockingAsyncContextManager.__exit__` is called, and an - # `_exit_exc_info` has been set. - result = await self._async_cm.__aexit__(*self._exit_exc_info) - return result - - def __enter__(self) -> T_co: - self._enter_future = Future() - self._exit_future = self._portal.start_task_soon(self.run_async_cm) - cm = self._enter_future.result() - return cast(T_co, cm) - - def __exit__( - self, - __exc_type: Optional[Type[BaseException]], - __exc_value: Optional[BaseException], - __traceback: Optional[TracebackType], - ) -> Optional[bool]: - self._exit_exc_info = __exc_type, __exc_value, __traceback - self._portal.call(self._exit_event.set) - return self._exit_future.result() - - -class _BlockingPortalTaskStatus(TaskStatus): - def __init__(self, future: Future): - self._future = future - - def started(self, value: object = None) -> None: - self._future.set_result(value) - - -class BlockingPortal: - """An object that lets external threads run code in an asynchronous event loop.""" - - def __new__(cls) -> "BlockingPortal": - return get_asynclib().BlockingPortal() - - def __init__(self) -> None: - self._event_loop_thread_id: Optional[int] = threading.get_ident() - self._stop_event = Event() - self._task_group = create_task_group() - self._cancelled_exc_class = get_cancelled_exc_class() - - async def __aenter__(self) -> "BlockingPortal": - await self._task_group.__aenter__() - return self - - async def __aexit__( - self, - exc_type: Optional[Type[BaseException]], - exc_val: Optional[BaseException], - exc_tb: Optional[TracebackType], - ) -> Optional[bool]: - await self.stop() - return await self._task_group.__aexit__(exc_type, exc_val, exc_tb) - - def _check_running(self) -> None: - if self._event_loop_thread_id is None: - raise RuntimeError("This portal is not running") - if self._event_loop_thread_id == threading.get_ident(): - raise RuntimeError( - "This method cannot be called from the event loop thread" - ) - - async def sleep_until_stopped(self) -> None: - 
"""Sleep until :meth:`stop` is called.""" - await self._stop_event.wait() - - async def stop(self, cancel_remaining: bool = False) -> None: - """ - Signal the portal to shut down. - - This marks the portal as no longer accepting new calls and exits from - :meth:`sleep_until_stopped`. - - :param cancel_remaining: ``True`` to cancel all the remaining tasks, ``False`` to let them - finish before returning - - """ - self._event_loop_thread_id = None - self._stop_event.set() - if cancel_remaining: - self._task_group.cancel_scope.cancel() - - async def _call_func( - self, func: Callable, args: tuple, kwargs: Dict[str, Any], future: Future - ) -> None: - def callback(f: Future) -> None: - if f.cancelled() and self._event_loop_thread_id not in ( - None, - threading.get_ident(), - ): - self.call(scope.cancel) - - try: - retval = func(*args, **kwargs) - if iscoroutine(retval): - with CancelScope() as scope: - if future.cancelled(): - scope.cancel() - else: - future.add_done_callback(callback) - - retval = await retval - except self._cancelled_exc_class: - future.cancel() - except BaseException as exc: - if not future.cancelled(): - future.set_exception(exc) - - # Let base exceptions fall through - if not isinstance(exc, Exception): - raise - else: - if not future.cancelled(): - future.set_result(retval) - finally: - scope = None # type: ignore[assignment] - - def _spawn_task_from_thread( - self, - func: Callable, - args: tuple, - kwargs: Dict[str, Any], - name: object, - future: Future, - ) -> None: - """ - Spawn a new task using the given callable. - - Implementors must ensure that the future is resolved when the task finishes. - - :param func: a callable - :param args: positional arguments to be passed to the callable - :param kwargs: keyword arguments to be passed to the callable - :param name: name of the task (will be coerced to a string if not ``None``) - :param future: a future that will resolve to the return value of the callable, or the - exception raised during its execution - - """ - raise NotImplementedError - - @overload - def call( - self, func: Callable[..., Coroutine[Any, Any, T_Retval]], *args: object - ) -> T_Retval: - ... - - @overload - def call(self, func: Callable[..., T_Retval], *args: object) -> T_Retval: - ... - - def call( - self, - func: Callable[..., Union[Coroutine[Any, Any, T_Retval], T_Retval]], - *args: object - ) -> T_Retval: - """ - Call the given function in the event loop thread. - - If the callable returns a coroutine object, it is awaited on. - - :param func: any callable - :raises RuntimeError: if the portal is not running or if this method is called from within - the event loop thread - - """ - return cast(T_Retval, self.start_task_soon(func, *args).result()) - - @overload - def spawn_task( - self, - func: Callable[..., Coroutine[Any, Any, T_Retval]], - *args: object, - name: object = None - ) -> "Future[T_Retval]": - ... - - @overload - def spawn_task( - self, func: Callable[..., T_Retval], *args: object, name: object = None - ) -> "Future[T_Retval]": - ... - - def spawn_task( - self, - func: Callable[..., Union[Coroutine[Any, Any, T_Retval], T_Retval]], - *args: object, - name: object = None - ) -> "Future[T_Retval]": - """ - Start a task in the portal's task group. 
- - :param func: the target coroutine function - :param args: positional arguments passed to ``func`` - :param name: name of the task (will be coerced to a string if not ``None``) - :return: a future that resolves with the return value of the callable if the task completes - successfully, or with the exception raised in the task - :raises RuntimeError: if the portal is not running or if this method is called from within - the event loop thread - - .. versionadded:: 2.1 - .. deprecated:: 3.0 - Use :meth:`start_task_soon` instead. If your code needs AnyIO 2 compatibility, you - can keep using this until AnyIO 4. - - """ - warn( - "spawn_task() is deprecated -- use start_task_soon() instead", - DeprecationWarning, - ) - return self.start_task_soon(func, *args, name=name) # type: ignore[arg-type] - - @overload - def start_task_soon( - self, - func: Callable[..., Coroutine[Any, Any, T_Retval]], - *args: object, - name: object = None - ) -> "Future[T_Retval]": - ... - - @overload - def start_task_soon( - self, func: Callable[..., T_Retval], *args: object, name: object = None - ) -> "Future[T_Retval]": - ... - - def start_task_soon( - self, - func: Callable[..., Union[Coroutine[Any, Any, T_Retval], T_Retval]], - *args: object, - name: object = None - ) -> "Future[T_Retval]": - """ - Start a task in the portal's task group. - - The task will be run inside a cancel scope which can be cancelled by cancelling the - returned future. - - :param func: the target coroutine function - :param args: positional arguments passed to ``func`` - :param name: name of the task (will be coerced to a string if not ``None``) - :return: a future that resolves with the return value of the callable if the task completes - successfully, or with the exception raised in the task - :raises RuntimeError: if the portal is not running or if this method is called from within - the event loop thread - - .. versionadded:: 3.0 - - """ - self._check_running() - f: Future = Future() - self._spawn_task_from_thread(func, args, {}, name, f) - return f - - def start_task( - self, - func: Callable[..., Coroutine[Any, Any, Any]], - *args: object, - name: object = None - ) -> Tuple["Future[Any]", Any]: - """ - Start a task in the portal's task group and wait until it signals for readiness. - - This method works the same way as :meth:`TaskGroup.start`. - - :param func: the target coroutine function - :param args: positional arguments passed to ``func`` - :param name: name of the task (will be coerced to a string if not ``None``) - :return: a tuple of (future, task_status_value) where the ``task_status_value`` is the - value passed to ``task_status.started()`` from within the target function - - .. 
versionadded:: 3.0 - - """ - - def task_done(future: Future) -> None: - if not task_status_future.done(): - if future.cancelled(): - task_status_future.cancel() - elif future.exception(): - task_status_future.set_exception(future.exception()) - else: - exc = RuntimeError( - "Task exited without calling task_status.started()" - ) - task_status_future.set_exception(exc) - - self._check_running() - task_status_future: Future = Future() - task_status = _BlockingPortalTaskStatus(task_status_future) - f: Future = Future() - f.add_done_callback(task_done) - self._spawn_task_from_thread(func, args, {"task_status": task_status}, name, f) - return f, task_status_future.result() - - def wrap_async_context_manager( - self, cm: AsyncContextManager[T_co] - ) -> ContextManager[T_co]: - """ - Wrap an async context manager as a synchronous context manager via this portal. - - Spawns a task that will call both ``__aenter__()`` and ``__aexit__()``, stopping in the - middle until the synchronous context manager exits. - - :param cm: an asynchronous context manager - :return: a synchronous context manager - - .. versionadded:: 2.1 - - """ - return _BlockingAsyncContextManager(cm, self) - - -def create_blocking_portal() -> BlockingPortal: - """ - Create a portal for running functions in the event loop thread from external threads. - - Use this function in asynchronous code when you need to allow external threads access to the - event loop where your asynchronous code is currently running. - - .. deprecated:: 3.0 - Use :class:`.BlockingPortal` directly. - - """ - warn( - "create_blocking_portal() has been deprecated -- use anyio.from_thread.BlockingPortal() " - "directly", - DeprecationWarning, - ) - return BlockingPortal() - - -@contextmanager -def start_blocking_portal( - backend: str = "asyncio", backend_options: Optional[Dict[str, Any]] = None -) -> Generator[BlockingPortal, Any, None]: - """ - Start a new event loop in a new thread and run a blocking portal in its main task. - - The parameters are the same as for :func:`~anyio.run`. - - :param backend: name of the backend - :param backend_options: backend options - :return: a context manager that yields a blocking portal - - .. versionchanged:: 3.0 - Usage as a context manager is now required. - - """ - - async def run_portal() -> None: - async with BlockingPortal() as portal_: - if future.set_running_or_notify_cancel(): - future.set_result(portal_) - await portal_.sleep_until_stopped() - - future: Future[BlockingPortal] = Future() - with ThreadPoolExecutor(1) as executor: - run_future = executor.submit( - _eventloop.run, - run_portal, # type: ignore[arg-type] - backend=backend, - backend_options=backend_options, - ) - try: - wait( - cast(Iterable[Future], [run_future, future]), - return_when=FIRST_COMPLETED, - ) - except BaseException: - future.cancel() - run_future.cancel() - raise - - if future.done(): - portal = future.result() - try: - yield portal - except BaseException: - portal.call(portal.stop, True) - raise - - portal.call(portal.stop, False) - - run_future.result() diff --git a/spaces/asafAdge/Detic/detic/data/datasets/coco_zeroshot.py b/spaces/asafAdge/Detic/detic/data/datasets/coco_zeroshot.py deleted file mode 100644 index aee895de41db95e379874fa6e1badd95c5fe6742..0000000000000000000000000000000000000000 --- a/spaces/asafAdge/Detic/detic/data/datasets/coco_zeroshot.py +++ /dev/null @@ -1,121 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
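The from_thread module above is normally consumed through start_blocking_portal(); a minimal, self-contained sketch of that pattern is shown below. The coroutine is a stand-in, not part of the original code.

# Minimal use of the blocking-portal helpers defined in from_thread.py above.
import anyio
from anyio.from_thread import start_blocking_portal

async def double(x: int) -> int:
    await anyio.sleep(0.01)
    return 2 * x

with start_blocking_portal() as portal:
    print(portal.call(double, 21))             # 42
    future = portal.start_task_soon(double, 5)
    print(future.result())                     # 10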
-import os - -from detectron2.data.datasets.register_coco import register_coco_instances -from detectron2.data.datasets.builtin_meta import _get_builtin_metadata -from .lvis_v1 import custom_register_lvis_instances - -categories_seen = [ - {'id': 1, 'name': 'person'}, - {'id': 2, 'name': 'bicycle'}, - {'id': 3, 'name': 'car'}, - {'id': 4, 'name': 'motorcycle'}, - {'id': 7, 'name': 'train'}, - {'id': 8, 'name': 'truck'}, - {'id': 9, 'name': 'boat'}, - {'id': 15, 'name': 'bench'}, - {'id': 16, 'name': 'bird'}, - {'id': 19, 'name': 'horse'}, - {'id': 20, 'name': 'sheep'}, - {'id': 23, 'name': 'bear'}, - {'id': 24, 'name': 'zebra'}, - {'id': 25, 'name': 'giraffe'}, - {'id': 27, 'name': 'backpack'}, - {'id': 31, 'name': 'handbag'}, - {'id': 33, 'name': 'suitcase'}, - {'id': 34, 'name': 'frisbee'}, - {'id': 35, 'name': 'skis'}, - {'id': 38, 'name': 'kite'}, - {'id': 42, 'name': 'surfboard'}, - {'id': 44, 'name': 'bottle'}, - {'id': 48, 'name': 'fork'}, - {'id': 50, 'name': 'spoon'}, - {'id': 51, 'name': 'bowl'}, - {'id': 52, 'name': 'banana'}, - {'id': 53, 'name': 'apple'}, - {'id': 54, 'name': 'sandwich'}, - {'id': 55, 'name': 'orange'}, - {'id': 56, 'name': 'broccoli'}, - {'id': 57, 'name': 'carrot'}, - {'id': 59, 'name': 'pizza'}, - {'id': 60, 'name': 'donut'}, - {'id': 62, 'name': 'chair'}, - {'id': 65, 'name': 'bed'}, - {'id': 70, 'name': 'toilet'}, - {'id': 72, 'name': 'tv'}, - {'id': 73, 'name': 'laptop'}, - {'id': 74, 'name': 'mouse'}, - {'id': 75, 'name': 'remote'}, - {'id': 78, 'name': 'microwave'}, - {'id': 79, 'name': 'oven'}, - {'id': 80, 'name': 'toaster'}, - {'id': 82, 'name': 'refrigerator'}, - {'id': 84, 'name': 'book'}, - {'id': 85, 'name': 'clock'}, - {'id': 86, 'name': 'vase'}, - {'id': 90, 'name': 'toothbrush'}, -] - -categories_unseen = [ - {'id': 5, 'name': 'airplane'}, - {'id': 6, 'name': 'bus'}, - {'id': 17, 'name': 'cat'}, - {'id': 18, 'name': 'dog'}, - {'id': 21, 'name': 'cow'}, - {'id': 22, 'name': 'elephant'}, - {'id': 28, 'name': 'umbrella'}, - {'id': 32, 'name': 'tie'}, - {'id': 36, 'name': 'snowboard'}, - {'id': 41, 'name': 'skateboard'}, - {'id': 47, 'name': 'cup'}, - {'id': 49, 'name': 'knife'}, - {'id': 61, 'name': 'cake'}, - {'id': 63, 'name': 'couch'}, - {'id': 76, 'name': 'keyboard'}, - {'id': 81, 'name': 'sink'}, - {'id': 87, 'name': 'scissors'}, -] - -def _get_metadata(cat): - if cat == 'all': - return _get_builtin_metadata('coco') - elif cat == 'seen': - id_to_name = {x['id']: x['name'] for x in categories_seen} - else: - assert cat == 'unseen' - id_to_name = {x['id']: x['name'] for x in categories_unseen} - - thing_dataset_id_to_contiguous_id = { - x: i for i, x in enumerate(sorted(id_to_name))} - thing_classes = [id_to_name[k] for k in sorted(id_to_name)] - return { - "thing_dataset_id_to_contiguous_id": thing_dataset_id_to_contiguous_id, - "thing_classes": thing_classes} - -_PREDEFINED_SPLITS_COCO = { - "coco_zeroshot_train": ("coco/train2017", "coco/zero-shot/instances_train2017_seen_2.json", 'seen'), - "coco_zeroshot_val": ("coco/val2017", "coco/zero-shot/instances_val2017_unseen_2.json", 'unseen'), - "coco_not_zeroshot_val": ("coco/val2017", "coco/zero-shot/instances_val2017_seen_2.json", 'seen'), - "coco_generalized_zeroshot_val": ("coco/val2017", "coco/zero-shot/instances_val2017_all_2_oriorder.json", 'all'), - "coco_zeroshot_train_oriorder": ("coco/train2017", "coco/zero-shot/instances_train2017_seen_2_oriorder.json", 'all'), -} - -for key, (image_root, json_file, cat) in _PREDEFINED_SPLITS_COCO.items(): - register_coco_instances( - key, - 
_get_metadata(cat), - os.path.join("datasets", json_file) if "://" not in json_file else json_file, - os.path.join("datasets", image_root), - ) - -_CUSTOM_SPLITS_COCO = { - "cc3m_coco_train_tags": ("cc3m/training/", "cc3m/coco_train_image_info_tags.json"), - "coco_caption_train_tags": ("coco/train2017/", "coco/annotations/captions_train2017_tags_allcaps.json"),} - -for key, (image_root, json_file) in _CUSTOM_SPLITS_COCO.items(): - custom_register_lvis_instances( - key, - _get_builtin_metadata('coco'), - os.path.join("datasets", json_file) if "://" not in json_file else json_file, - os.path.join("datasets", image_root), - ) \ No newline at end of file diff --git a/spaces/asigalov61/Allegro-Music-Transformer/README.md b/spaces/asigalov61/Allegro-Music-Transformer/README.md deleted file mode 100644 index e0712d81a7cd1755533d2cf0109ed540823b7bf5..0000000000000000000000000000000000000000 --- a/spaces/asigalov61/Allegro-Music-Transformer/README.md +++ /dev/null @@ -1,16 +0,0 @@ ---- -title: Allegro Music Transformer -emoji: 🎼🎶 -colorFrom: red -colorTo: indigo -sdk: gradio -sdk_version: 3.41.1 -app_file: app.py -pinned: false -license: apache-2.0 -tags: -- midi -- music ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/attention-refocusing/Attention-refocusing/__init__.py b/spaces/attention-refocusing/Attention-refocusing/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/auto-academic/auto-draft/utils/storage.py b/spaces/auto-academic/auto-draft/utils/storage.py deleted file mode 100644 index 39112794ae01fe6cba711a46b685cf01e06e5cad..0000000000000000000000000000000000000000 --- a/spaces/auto-academic/auto-draft/utils/storage.py +++ /dev/null @@ -1,52 +0,0 @@ -# This script `storage.py` is used to handle the cloud storage. -# `upload_file`: -# Function to upload a local file to the specified S3 bucket. -# If the target_name is not specified, it will use the file_name as the object key. -# `list_all_files`: -# Function to list all the files in the specified S3 bucket. -# `download_file`: -# Function to download a file from the specified S3 bucket to the local machine using the specified file_name. - -import os -import boto3 - -BUCKET_NAME = "hf-storage" - -def get_client(): - access_key_id = os.getenv('AWS_ACCESS_KEY_ID') - secret_access_key = os.getenv('AWS_SECRET_ACCESS_KEY') - session = boto3.Session( - aws_access_key_id=access_key_id, - aws_secret_access_key=secret_access_key, - ) - s3 = session.resource('s3') - bucket = s3.Bucket(BUCKET_NAME) - return s3, bucket - -def upload_file(file_name, target_name=None): - s3, _ = get_client() - - if target_name is None: - target_name = file_name - s3.meta.client.upload_file(Filename=file_name, Bucket=BUCKET_NAME, Key=target_name) - print(f"The file {file_name} has been uploaded!") - - -def list_all_files(): - _, bucket = get_client() - return [obj.key for obj in bucket.objects.all()] - - -def download_file(file_name): - ''' Download `file_name` from the bucket. - Bucket (str) – The name of the bucket to download from. - Key (str) – The name of the key to download from. - Filename (str) – The path to the file to download to. 
- ''' - s3, _ = get_client() - s3.meta.client.download_file(Bucket=BUCKET_NAME, Key=file_name, Filename=file_name) - print(f"The file {file_name} has been downloaded!") - -if __name__ == "__main__": - file = "sample-output.pdf" - upload_file(file) diff --git a/spaces/awacke1/Azure.Terraform.Github.Actions.Web.App.MLOps/app.py b/spaces/awacke1/Azure.Terraform.Github.Actions.Web.App.MLOps/app.py deleted file mode 100644 index 8a88ef28f6e8dbf6e0991a2677f5e3f2b23f9137..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Azure.Terraform.Github.Actions.Web.App.MLOps/app.py +++ /dev/null @@ -1,115 +0,0 @@ -import streamlit as st -st.markdown(""" - -Create a Terraform script that spins up an Azure Container Registry, a Dockerfile that pulls ubuntu latest python and then pulls a github repository, creates a github action for it when it updates to create a new web app inside the azure container registry. Show python and terraform code to automate this. - -To accomplish this task, you'll need to create the following components: - -An Azure Container Registry to store the Docker images. -A Dockerfile that pulls the latest Ubuntu and Python images and then pulls the code from the GitHub repository. -A GitHub Action that triggers the creation of a new web app inside the Azure Container Registry when the code in the GitHub repository is updated. -A Terraform script to automate the deployment of these components. -Here is the Terraform script that you can use to deploy an Azure Container Registry: - -terraform -Copy code -provider "azurerm" { - features {} -} - -resource "azurerm_resource_group" "example" { - name = "example-resource-group" - location = "East US" -} - -resource "azurerm_container_registry" "example" { - name = "example-container-registry" - resource_group_name = azurerm_resource_group.example.name - location = azurerm_resource_group.example.location - sku = "Premium" - admin_enabled = true - georeplication_locations = ["East US 2"] -} -This script creates an Azure resource group and an Azure Container Registry. - -Next, here is the Dockerfile that pulls the latest Ubuntu and Python images and then pulls the code from the GitHub repository: - -dockerfile -Copy code -FROM ubuntu:latest - -RUN apt-get update && \ - apt-get install -y python3 python3-pip && \ - rm -rf /var/lib/apt/lists/* - -RUN mkdir /app -WORKDIR /app - -COPY requirements.txt /app -RUN pip3 install -r requirements.txt - -COPY . /app -This Dockerfile installs Python 3 and its dependencies, creates a directory for the application, copies the requirements file and installs the dependencies. It then copies the entire project into the container. - -Finally, here is the Python code to deploy a new web app inside the Azure Container Registry: - -python -Copy code -import os -import subprocess - -az_acr_name = os.environ.get("AZ_ACR_NAME") - -def build_and_push_docker_image(): - subprocess.call(f"docker build -t {az_acr_name}.azurecr.io/myapp:latest .", shell=True) - subprocess.call(f"docker push {az_acr_name}.azurecr.io/myapp:latest", shell=True) - -if __name__ == "__main__": - build_and_push_docker_image() -This Python code builds a new Docker image for the app and then pushes it to the Azure Container Registry. 
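One step the script above glosses over is registry authentication: `docker push` to an Azure Container Registry is rejected until the local Docker client has logged in. A minimal sketch of that step, assuming the Azure CLI is installed and `az login` has already been run (reusing the same illustrative `AZ_ACR_NAME` variable):

```python
import os
import subprocess

az_acr_name = os.environ.get("AZ_ACR_NAME")

def acr_login():
    # Exchange the current Azure CLI credentials for a short-lived Docker
    # token so that pushes to <name>.azurecr.io are authorized.
    subprocess.check_call(["az", "acr", "login", "--name", az_acr_name])

if __name__ == "__main__":
    acr_login()
```

The GitHub Action below authenticates with `azure/login`; an `az acr login` step would likely still be needed there before the push succeeds.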
- -Finally, here is the GitHub Action that triggers the creation of a new web app inside the Azure Container Registry: - -yaml -Copy code -name: Build and Push Docker Image - -on: - push: - branches: - - main - -env: - AZ_ACR_NAME: myacrname - -jobs: - build-and-push: - runs-on: ubuntu-latest - steps: - - name: Checkout code - uses: actions/checkout@v2 - - - name: Login to Azure - uses: azure/login@v1 - with: - creds: ${{ secrets.AZURE_CREDENTIALS }} - - - name: Build and Push Docker Image - run: python3 build_and_push.py - env: - AZ_ACR_NAME: ${{ env.AZ_ACR_NAME }} -This GitHub Action triggers when code is pushed to the main branch. It checks out the code, logs into Azure using the provided credentials, and then runs the Python script to build and push the Docker image to the Azure Container Registry. - -Finally, here is the Terraform script that automates the deployment of the components: - -terraform -Copy code -provider "azurerm" { - features {} -} - -resource "azurerm_resource_group" " - - - -""") \ No newline at end of file diff --git a/spaces/awacke1/Balloon.Club/app.py b/spaces/awacke1/Balloon.Club/app.py deleted file mode 100644 index 9e02499b494bb216b24056b74cb4d018acb63fff..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Balloon.Club/app.py +++ /dev/null @@ -1,32 +0,0 @@ -import streamlit as st -import random - -# List of jokes -jokes = [ - "🩸💉📝 Why did the nurse need a red pen at work? In case she needed to draw blood.", - "🍴👨‍🍳👤 Did you hear about the new restaurant called Karma? There's no menu – you get what you deserve.", - "💀❌💪 Why don't skeletons fight each other? They don't have the guts.", - "🩸💉👨‍⚕️ Why did the doctor always carry a red pen? In case he needed to draw some blood.", - "💪🏋️‍♂️😴 I'm not feeling very work-out-y today. I think I might have a case of gymnesia.", - "☕️❄️😷 Why did the hipster get sick? He drank his coffee before it was cool.", - "👨‍🌾🏆👏 Did you hear about the scarecrow who won an award? Because he was outstanding in his field of medicine.", - "💰💵🍬 Did you hear about the guy who invented Lifesavers? He made a mint.", - "👻👻🔼 Why don't ghosts use elevators? They lift spirits naturally.", - "🥬🏃‍♀️🦖 I'm on a health kick – I'm drinking kale smoothies and jogging every day. It's exhausting – I'm starting to feel like a vegetarian T-Rex.", - "🍌👨‍⚕️😷 Why did the banana go to the doctor? Because it wasn't peeling well.", - "🎈🌬️🚫 I tried to start a hot air balloon club for people with asthma. It never really took off.", - "👨‍⚕️💻🌐 What do you call a doctor who fixes websites? A URL-ologist.", - "🦠🔬💉 Why did the germ cross the microscope? To get to the other slide.", - "👓🎩🐇 Did you hear about the optometrist who fell into his lens grinder? He made a spectacle of himself.", - "🤕🔧😖 What's the best way to cure a headache? Put your head in a vise and tighten it until the headache goes away.", - "💪🏋️‍♂️🤪 I think I'm getting a cold. I've been coughing so much that my abs have a six-pack.", - "🧛‍♂️💉😱 Why did the nurse refuse to give the vampire a shot? He might have had a bat reaction.", - "🏥🦵🏥 Why did the man with a broken leg cross the road? To get to the other side of the hospital.", - "🚪🎉🥇 Did you hear about the guy who invented the knock-knock joke? He won the 'No-bell' prize." 
-] - -# Add sliders to adjust the number of jokes to display and the size of the font -num_jokes = st.sidebar.slider("Number of jokes to display", 1, len(jokes), 5) -font_size = st.sidebar.slider("Font size", 12, 36, 24) - -# Shuffle the list of diff --git a/spaces/awacke1/EleutherAI-gpt-j-6B/README.md b/spaces/awacke1/EleutherAI-gpt-j-6B/README.md deleted file mode 100644 index b108c274bf034d49afb4f213794655c62bb15cf4..0000000000000000000000000000000000000000 --- a/spaces/awacke1/EleutherAI-gpt-j-6B/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: EleutherAI Gpt J 6B -emoji: 👀 -colorFrom: indigo -colorTo: purple -sdk: gradio -sdk_version: 3.16.2 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/awacke1/HTML5-Tower-Building-3D-Game/README.md b/spaces/awacke1/HTML5-Tower-Building-3D-Game/README.md deleted file mode 100644 index 21c1af3f4c46d25c0767325fe9115726a349bd87..0000000000000000000000000000000000000000 --- a/spaces/awacke1/HTML5-Tower-Building-3D-Game/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: HTML5 Tower Building 3D Game -emoji: 🚀 -colorFrom: pink -colorTo: yellow -sdk: static -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/awacke1/Streamlit-ALBERT-Transformers-Sequence-Classify-Visualize/app.py b/spaces/awacke1/Streamlit-ALBERT-Transformers-Sequence-Classify-Visualize/app.py deleted file mode 100644 index 0d579854e1eade1c3f6bede79f3c0f531c6cd338..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Streamlit-ALBERT-Transformers-Sequence-Classify-Visualize/app.py +++ /dev/null @@ -1,43 +0,0 @@ -import streamlit as st -import altair as alt -import torch -from transformers import AlbertTokenizer, AlbertForSequenceClassification -import sentencepiece as spm -import pandas as pd - -# Load pre-trained model and tokenizer -model_name = "albert-base-v2" -tokenizer = AlbertTokenizer.from_pretrained(model_name) -model = AlbertForSequenceClassification.from_pretrained(model_name) - -# Define function to classify input text -def classify_text(text): - inputs = tokenizer(text, padding=True, truncation=True, return_tensors="pt") - outputs = model(**inputs) - logits = outputs.logits.detach().numpy()[0] - probabilities = torch.softmax(torch.tensor(logits), dim=0).tolist() - return probabilities - -# Set up Streamlit app -st.title("ALBERT Text Classification App") - -# Create input box for user to enter text -default_text = "Streamlit-Altair: A component that allows the creation of Altair visualizations within Streamlit.\nStreamlit-Bokeh: A component that allows the creation of Bokeh visualizations within Streamlit.\nStreamlit-Plotly: A component that allows the creation of Plotly visualizations within Streamlit.\nStreamlit-Mapbox: A component that allows the creation of Mapbox maps within Streamlit.\nStreamlit-DeckGL: A component that allows the creation of Deck.GL visualizations within Streamlit.\nStreamlit-Wordcloud: A component that allows the creation of word clouds within Streamlit.\nStreamlit-Audio: A component that allows the playing of audio files within Streamlit.\nStreamlit-Video: A component that allows the playing of video files within Streamlit.\nStreamlit-EmbedCode: A component that allows the embedding of code snippets within Streamlit.\nStreamlit-Components: A component that provides a library of custom Streamlit components created 
by the Streamlit community." -text_input = st.text_area("Enter text to classify", default_text, height=200) - - -# Classify input text and display results -if st.button("Classify"): - if text_input: - probabilities = classify_text(text_input) - df = pd.DataFrame({ - 'Label': ['Negative', 'Positive'], - 'Probability': probabilities - }) - chart = alt.Chart(df).mark_bar().encode( - x='Probability', - y=alt.Y('Label', sort=['Negative', 'Positive']) - ) - st.write(chart) - else: - st.write("Please enter some text to classify.") diff --git a/spaces/awacke1/Video-Summary/README.md b/spaces/awacke1/Video-Summary/README.md deleted file mode 100644 index 698b879ca8776d21dc9cdefcd1108796b92cd535..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Video-Summary/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: 📺NLP Video Summary📝 -emoji: 📺mT5-Bart -colorFrom: red -colorTo: green -sdk: gradio -sdk_version: 2.9.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/balaramas/indic_s2t/README.md b/spaces/balaramas/indic_s2t/README.md deleted file mode 100644 index 62101f35e27e4df584aed574309701a30338cc80..0000000000000000000000000000000000000000 --- a/spaces/balaramas/indic_s2t/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Indic S2t -emoji: 🌖 -colorFrom: blue -colorTo: indigo -sdk: gradio -sdk_version: 3.36.1 -app_file: app.py -pinned: false -license: other ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/postprocessing/TAARenderPass.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/postprocessing/TAARenderPass.js deleted file mode 100644 index fe11a076fb7b8bae5aee4824e9bf31616ca584b2..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/examples/js/postprocessing/TAARenderPass.js +++ /dev/null @@ -1,140 +0,0 @@ -/** - * - * Temporal Anti-Aliasing Render Pass - * - * @author bhouston / http://clara.io/ - * - * When there is no motion in the scene, the TAA render pass accumulates jittered camera samples across frames to create a high quality anti-aliased result. - * - * References: - * - * TODO: Add support for motion vector pas so that accumulation of samples across frames can occur on dynamics scenes. - * - */ - -THREE.TAARenderPass = function ( scene, camera, params ) { - - if ( THREE.SSAARenderPass === undefined ) { - - console.error( "THREE.TAARenderPass relies on THREE.SSAARenderPass" ); - - } - - THREE.SSAARenderPass.call( this, scene, camera, params ); - - this.sampleLevel = 0; - this.accumulate = false; - -}; - -THREE.TAARenderPass.JitterVectors = THREE.SSAARenderPass.JitterVectors; - -THREE.TAARenderPass.prototype = Object.assign( Object.create( THREE.SSAARenderPass.prototype ), { - - constructor: THREE.TAARenderPass, - - render: function ( renderer, writeBuffer, readBuffer, deltaTime ) { - - if ( ! this.accumulate ) { - - THREE.SSAARenderPass.prototype.render.call( this, renderer, writeBuffer, readBuffer, deltaTime ); - - this.accumulateIndex = - 1; - return; - - } - - var jitterOffsets = THREE.TAARenderPass.JitterVectors[ 5 ]; - - if ( ! this.sampleRenderTarget ) { - - this.sampleRenderTarget = new THREE.WebGLRenderTarget( readBuffer.width, readBuffer.height, this.params ); - this.sampleRenderTarget.texture.name = "TAARenderPass.sample"; - - } - - if ( ! 
this.holdRenderTarget ) { - - this.holdRenderTarget = new THREE.WebGLRenderTarget( readBuffer.width, readBuffer.height, this.params ); - this.holdRenderTarget.texture.name = "TAARenderPass.hold"; - - } - - if ( this.accumulate && this.accumulateIndex === - 1 ) { - - THREE.SSAARenderPass.prototype.render.call( this, renderer, this.holdRenderTarget, readBuffer, deltaTime ); - - this.accumulateIndex = 0; - - } - - var autoClear = renderer.autoClear; - renderer.autoClear = false; - - var sampleWeight = 1.0 / ( jitterOffsets.length ); - - if ( this.accumulateIndex >= 0 && this.accumulateIndex < jitterOffsets.length ) { - - this.copyUniforms[ "opacity" ].value = sampleWeight; - this.copyUniforms[ "tDiffuse" ].value = writeBuffer.texture; - - // render the scene multiple times, each slightly jitter offset from the last and accumulate the results. - var numSamplesPerFrame = Math.pow( 2, this.sampleLevel ); - for ( var i = 0; i < numSamplesPerFrame; i ++ ) { - - var j = this.accumulateIndex; - var jitterOffset = jitterOffsets[ j ]; - - if ( this.camera.setViewOffset ) { - - this.camera.setViewOffset( readBuffer.width, readBuffer.height, - jitterOffset[ 0 ] * 0.0625, jitterOffset[ 1 ] * 0.0625, // 0.0625 = 1 / 16 - readBuffer.width, readBuffer.height ); - - } - - renderer.setRenderTarget( writeBuffer ); - renderer.clear(); - renderer.render( this.scene, this.camera ); - - renderer.setRenderTarget( this.sampleRenderTarget ); - if ( this.accumulateIndex === 0 ) renderer.clear(); - this.fsQuad.render( renderer ); - - this.accumulateIndex ++; - - if ( this.accumulateIndex >= jitterOffsets.length ) break; - - } - - if ( this.camera.clearViewOffset ) this.camera.clearViewOffset(); - - } - - var accumulationWeight = this.accumulateIndex * sampleWeight; - - if ( accumulationWeight > 0 ) { - - this.copyUniforms[ "opacity" ].value = 1.0; - this.copyUniforms[ "tDiffuse" ].value = this.sampleRenderTarget.texture; - renderer.setRenderTarget( writeBuffer ); - renderer.clear(); - this.fsQuad.render( renderer ); - - } - - if ( accumulationWeight < 1.0 ) { - - this.copyUniforms[ "opacity" ].value = 1.0 - accumulationWeight; - this.copyUniforms[ "tDiffuse" ].value = this.holdRenderTarget.texture; - renderer.setRenderTarget( writeBuffer ); - if ( accumulationWeight === 0 ) renderer.clear(); - this.fsQuad.render( renderer ); - - } - - renderer.autoClear = autoClear; - - } - -} ); diff --git a/spaces/banana-projects/web3d/node_modules/three/src/loaders/DataTextureLoader.d.ts b/spaces/banana-projects/web3d/node_modules/three/src/loaders/DataTextureLoader.d.ts deleted file mode 100644 index 8e4f6ba4baf44e1da0b2bea28d58a77ceeeb7d8e..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/loaders/DataTextureLoader.d.ts +++ /dev/null @@ -1,15 +0,0 @@ -import { LoadingManager } from './LoadingManager'; -import { DataTexture } from './../textures/DataTexture'; - -export class DataTextureLoader { - constructor(manager?: LoadingManager); - - manager: LoadingManager; - - load( - url: string, - onLoad: (dataTexture: DataTexture) => void, - onProgress?: (event: ProgressEvent) => void, - onError?: (event: ErrorEvent) => void - ): void; -} diff --git a/spaces/banana-projects/web3d/node_modules/three/src/materials/ShadowMaterial.d.ts b/spaces/banana-projects/web3d/node_modules/three/src/materials/ShadowMaterial.d.ts deleted file mode 100644 index 691a7208fe53fc4020281abe7d9e47870738b7a0..0000000000000000000000000000000000000000 --- 
a/spaces/banana-projects/web3d/node_modules/three/src/materials/ShadowMaterial.d.ts +++ /dev/null @@ -1,5 +0,0 @@ -import { ShaderMaterialParameters, ShaderMaterial } from './ShaderMaterial'; - -export class ShadowMaterial extends ShaderMaterial { - constructor(parameters?: ShaderMaterialParameters); -} diff --git a/spaces/bikemright/overweight-AI/README.md b/spaces/bikemright/overweight-AI/README.md deleted file mode 100644 index c176be188897b522c779e1feb7e68433a41351eb..0000000000000000000000000000000000000000 --- a/spaces/bikemright/overweight-AI/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Overweight AI -emoji: 📉 -colorFrom: gray -colorTo: purple -sdk: gradio -sdk_version: 3.0.13 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/bookbot/Grad-TTS-Weildan-Playground/Grad-TTS/finetune_multi_speaker.py b/spaces/bookbot/Grad-TTS-Weildan-Playground/Grad-TTS/finetune_multi_speaker.py deleted file mode 100644 index 0eacd2287025b33f1e1a8093819c4737c3566989..0000000000000000000000000000000000000000 --- a/spaces/bookbot/Grad-TTS-Weildan-Playground/Grad-TTS/finetune_multi_speaker.py +++ /dev/null @@ -1,237 +0,0 @@ -# Copyright (C) 2021. Huawei Technologies Co., Ltd. All rights reserved. -# This program is free software; you can redistribute it and/or modify -# it under the terms of the MIT License. -# This program is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the -# MIT License for more details. - -import numpy as np -from tqdm import tqdm - -import torch -from torch.utils.data import DataLoader -from torch.utils.tensorboard import SummaryWriter - -import finetune_params as params -from model import GradTTS -from data import TextMelSpeakerDataset, TextMelSpeakerBatchCollate -from utils import plot_tensor, save_plot -from text.symbols import symbols - - -train_filelist_path = params.train_filelist_path -valid_filelist_path = params.valid_filelist_path -cmudict_path = params.cmudict_path -add_blank = params.add_blank -n_spks = params.n_spks -spk_emb_dim = params.spk_emb_dim - -log_dir = params.log_dir -n_epochs = params.n_epochs -batch_size = params.batch_size -out_size = params.out_size -learning_rate = params.learning_rate -random_seed = params.seed - -nsymbols = len(symbols) + 1 if add_blank else len(symbols) -n_enc_channels = params.n_enc_channels -filter_channels = params.filter_channels -filter_channels_dp = params.filter_channels_dp -n_enc_layers = params.n_enc_layers -enc_kernel = params.enc_kernel -enc_dropout = params.enc_dropout -n_heads = params.n_heads -window_size = params.window_size - -n_feats = params.n_feats -n_fft = params.n_fft -sample_rate = params.sample_rate -hop_length = params.hop_length -win_length = params.win_length -f_min = params.f_min -f_max = params.f_max - -dec_dim = params.dec_dim -beta_min = params.beta_min -beta_max = params.beta_max -pe_scale = params.pe_scale - -num_workers = params.num_workers -checkpoint = params.checkpoint - -if __name__ == "__main__": - torch.manual_seed(random_seed) - np.random.seed(random_seed) - - print("Initializing logger...") - logger = SummaryWriter(log_dir=log_dir) - - print("Initializing data loaders...") - train_dataset = TextMelSpeakerDataset( - train_filelist_path, - cmudict_path, - add_blank, - n_fft, - n_feats, - sample_rate, - hop_length, - win_length, - f_min, - 
f_max, - ) - batch_collate = TextMelSpeakerBatchCollate() - loader = DataLoader( - dataset=train_dataset, - batch_size=batch_size, - collate_fn=batch_collate, - drop_last=True, - num_workers=num_workers, - shuffle=True, - ) - test_dataset = TextMelSpeakerDataset( - valid_filelist_path, - cmudict_path, - add_blank, - n_fft, - n_feats, - sample_rate, - hop_length, - win_length, - f_min, - f_max, - ) - - print("Initializing model...") - model = GradTTS( - nsymbols, - n_spks, - spk_emb_dim, - n_enc_channels, - filter_channels, - filter_channels_dp, - n_heads, - n_enc_layers, - enc_kernel, - enc_dropout, - window_size, - n_feats, - dec_dim, - beta_min, - beta_max, - pe_scale, - ).cuda() - model.load_state_dict(torch.load(checkpoint, map_location=torch.device("cuda"))) - print("Number of encoder parameters = %.2fm" % (model.encoder.nparams / 1e6)) - print("Number of decoder parameters = %.2fm" % (model.decoder.nparams / 1e6)) - - print("Initializing optimizer...") - optimizer = torch.optim.Adam(params=model.parameters(), lr=learning_rate) - - print("Logging test batch...") - test_batch = test_dataset.sample_test_batch(size=params.test_size) - for item in test_batch: - mel, spk = item["y"], item["spk"] - i = int(spk.cpu()) - logger.add_image( - f"image_{i}/ground_truth", - plot_tensor(mel.squeeze()), - global_step=0, - dataformats="HWC", - ) - save_plot(mel.squeeze(), f"{log_dir}/original_{i}.png") - - print("Start training...") - iteration = 0 - for epoch in range(1, n_epochs + 1): - model.eval() - print("Synthesis...") - with torch.no_grad(): - for item in test_batch: - x = item["x"].to(torch.long).unsqueeze(0).cuda() - x_lengths = torch.LongTensor([x.shape[-1]]).cuda() - spk = item["spk"].to(torch.long).cuda() - i = int(spk.cpu()) - - y_enc, y_dec, attn = model(x, x_lengths, n_timesteps=50, spk=spk) - logger.add_image( - f"image_{i}/generated_enc", - plot_tensor(y_enc.squeeze().cpu()), - global_step=iteration, - dataformats="HWC", - ) - logger.add_image( - f"image_{i}/generated_dec", - plot_tensor(y_dec.squeeze().cpu()), - global_step=iteration, - dataformats="HWC", - ) - logger.add_image( - f"image_{i}/alignment", - plot_tensor(attn.squeeze().cpu()), - global_step=iteration, - dataformats="HWC", - ) - save_plot(y_enc.squeeze().cpu(), f"{log_dir}/generated_enc_{i}.png") - save_plot(y_dec.squeeze().cpu(), f"{log_dir}/generated_dec_{i}.png") - save_plot(attn.squeeze().cpu(), f"{log_dir}/alignment_{i}.png") - - model.train() - dur_losses = [] - prior_losses = [] - diff_losses = [] - with tqdm(loader, total=len(train_dataset) // batch_size) as progress_bar: - for batch in progress_bar: - model.zero_grad() - x, x_lengths = batch["x"].cuda(), batch["x_lengths"].cuda() - y, y_lengths = batch["y"].cuda(), batch["y_lengths"].cuda() - spk = batch["spk"].cuda() - dur_loss, prior_loss, diff_loss = model.compute_loss( - x, x_lengths, y, y_lengths, spk=spk, out_size=out_size - ) - loss = sum([dur_loss, prior_loss, diff_loss]) - loss.backward() - - enc_grad_norm = torch.nn.utils.clip_grad_norm_( - model.encoder.parameters(), max_norm=1 - ) - dec_grad_norm = torch.nn.utils.clip_grad_norm_( - model.decoder.parameters(), max_norm=1 - ) - optimizer.step() - - logger.add_scalar( - "training/duration_loss", dur_loss, global_step=iteration - ) - logger.add_scalar( - "training/prior_loss", prior_loss, global_step=iteration - ) - logger.add_scalar( - "training/diffusion_loss", diff_loss, global_step=iteration - ) - logger.add_scalar( - "training/encoder_grad_norm", enc_grad_norm, global_step=iteration - ) - 
logger.add_scalar( - "training/decoder_grad_norm", dec_grad_norm, global_step=iteration - ) - - msg = f"Epoch: {epoch}, iteration: {iteration} | dur_loss: {dur_loss.item()}, prior_loss: {prior_loss.item()}, diff_loss: {diff_loss.item()}" - progress_bar.set_description(msg) - - dur_losses.append(dur_loss.item()) - prior_losses.append(prior_loss.item()) - diff_losses.append(diff_loss.item()) - iteration += 1 - - msg = "Epoch %d: duration loss = %.3f " % (epoch, np.mean(dur_losses)) - msg += "| prior loss = %.3f " % np.mean(prior_losses) - msg += "| diffusion loss = %.3f\n" % np.mean(diff_losses) - with open(f"{log_dir}/train.log", "a") as f: - f.write(msg) - - if epoch % params.save_every > 0: - continue - - ckpt = model.state_dict() - torch.save(ckpt, f=f"{log_dir}/grad_{epoch}.pt") diff --git a/spaces/camenduru-com/VITS-Umamusume-voice-synthesizer/README.md b/spaces/camenduru-com/VITS-Umamusume-voice-synthesizer/README.md deleted file mode 100644 index 383c1b3bd49bc505f996f5655f3b504d9efa88ee..0000000000000000000000000000000000000000 --- a/spaces/camenduru-com/VITS-Umamusume-voice-synthesizer/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Uma Voice -emoji: 🚀 -colorFrom: green -colorTo: gray -sdk: gradio -sdk_version: 3.7 -app_file: app.py -pinned: false -duplicated_from: Plachta/VITS-Umamusume-voice-synthesizer ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/JpegImagePlugin.py b/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/JpegImagePlugin.py deleted file mode 100644 index dfc7e6e9f569e05e3a1f9e3fd1407b5f202a6d56..0000000000000000000000000000000000000000 --- a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/JpegImagePlugin.py +++ /dev/null @@ -1,849 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# JPEG (JFIF) file handling -# -# See "Digital Compression and Coding of Continuous-Tone Still Images, -# Part 1, Requirements and Guidelines" (CCITT T.81 / ISO 10918-1) -# -# History: -# 1995-09-09 fl Created -# 1995-09-13 fl Added full parser -# 1996-03-25 fl Added hack to use the IJG command line utilities -# 1996-05-05 fl Workaround Photoshop 2.5 CMYK polarity bug -# 1996-05-28 fl Added draft support, JFIF version (0.1) -# 1996-12-30 fl Added encoder options, added progression property (0.2) -# 1997-08-27 fl Save mode 1 images as BW (0.3) -# 1998-07-12 fl Added YCbCr to draft and save methods (0.4) -# 1998-10-19 fl Don't hang on files using 16-bit DQT's (0.4.1) -# 2001-04-16 fl Extract DPI settings from JFIF files (0.4.2) -# 2002-07-01 fl Skip pad bytes before markers; identify Exif files (0.4.3) -# 2003-04-25 fl Added experimental EXIF decoder (0.5) -# 2003-06-06 fl Added experimental EXIF GPSinfo decoder -# 2003-09-13 fl Extract COM markers -# 2009-09-06 fl Added icc_profile support (from Florian Hoech) -# 2009-03-06 fl Changed CMYK handling; always use Adobe polarity (0.6) -# 2009-03-08 fl Added subsampling support (from Justin Huff). -# -# Copyright (c) 1997-2003 by Secret Labs AB. -# Copyright (c) 1995-1996 by Fredrik Lundh. -# -# See the README file for information on usage and redistribution. -# -import array -import io -import math -import os -import struct -import subprocess -import sys -import tempfile -import warnings - -from . 
import Image, ImageFile -from ._binary import i16be as i16 -from ._binary import i32be as i32 -from ._binary import o8 -from ._binary import o16be as o16 -from .JpegPresets import presets - -# -# Parser - - -def Skip(self, marker): - n = i16(self.fp.read(2)) - 2 - ImageFile._safe_read(self.fp, n) - - -def APP(self, marker): - # - # Application marker. Store these in the APP dictionary. - # Also look for well-known application markers. - - n = i16(self.fp.read(2)) - 2 - s = ImageFile._safe_read(self.fp, n) - - app = "APP%d" % (marker & 15) - - self.app[app] = s # compatibility - self.applist.append((app, s)) - - if marker == 0xFFE0 and s[:4] == b"JFIF": - # extract JFIF information - self.info["jfif"] = version = i16(s, 5) # version - self.info["jfif_version"] = divmod(version, 256) - # extract JFIF properties - try: - jfif_unit = s[7] - jfif_density = i16(s, 8), i16(s, 10) - except Exception: - pass - else: - if jfif_unit == 1: - self.info["dpi"] = jfif_density - self.info["jfif_unit"] = jfif_unit - self.info["jfif_density"] = jfif_density - elif marker == 0xFFE1 and s[:5] == b"Exif\0": - if "exif" not in self.info: - # extract EXIF information (incomplete) - self.info["exif"] = s # FIXME: value will change - self._exif_offset = self.fp.tell() - n + 6 - elif marker == 0xFFE2 and s[:5] == b"FPXR\0": - # extract FlashPix information (incomplete) - self.info["flashpix"] = s # FIXME: value will change - elif marker == 0xFFE2 and s[:12] == b"ICC_PROFILE\0": - # Since an ICC profile can be larger than the maximum size of - # a JPEG marker (64K), we need provisions to split it into - # multiple markers. The format defined by the ICC specifies - # one or more APP2 markers containing the following data: - # Identifying string ASCII "ICC_PROFILE\0" (12 bytes) - # Marker sequence number 1, 2, etc (1 byte) - # Number of markers Total of APP2's used (1 byte) - # Profile data (remainder of APP2 data) - # Decoders should use the marker sequence numbers to - # reassemble the profile, rather than assuming that the APP2 - # markers appear in the correct sequence. 
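The block comment above is the key to how an oversized ICC profile survives the 64 KB marker limit: every APP2 payload carries the `ICC_PROFILE\0` identifier, a 1-based sequence number, the total fragment count, and then a slice of the profile. A simplified, standalone sketch of the reassembly the plugin performs later in its SOF handler:

```python
def reassemble_icc_profile(app2_payloads):
    # Each payload: 12-byte b"ICC_PROFILE\0" identifier, sequence number at
    # offset 12, fragment count at offset 13, profile bytes from offset 14 on.
    fragments = sorted(app2_payloads, key=lambda p: p[12])
    if not fragments or fragments[0][13] != len(fragments):
        return None  # fragment count mismatch -- same bail-out as the plugin
    return b"".join(p[14:] for p in fragments)
```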
- self.icclist.append(s) - elif marker == 0xFFED and s[:14] == b"Photoshop 3.0\x00": - # parse the image resource block - offset = 14 - photoshop = self.info.setdefault("photoshop", {}) - while s[offset : offset + 4] == b"8BIM": - try: - offset += 4 - # resource code - code = i16(s, offset) - offset += 2 - # resource name (usually empty) - name_len = s[offset] - # name = s[offset+1:offset+1+name_len] - offset += 1 + name_len - offset += offset & 1 # align - # resource data block - size = i32(s, offset) - offset += 4 - data = s[offset : offset + size] - if code == 0x03ED: # ResolutionInfo - data = { - "XResolution": i32(data, 0) / 65536, - "DisplayedUnitsX": i16(data, 4), - "YResolution": i32(data, 8) / 65536, - "DisplayedUnitsY": i16(data, 12), - } - photoshop[code] = data - offset += size - offset += offset & 1 # align - except struct.error: - break # insufficient data - - elif marker == 0xFFEE and s[:5] == b"Adobe": - self.info["adobe"] = i16(s, 5) - # extract Adobe custom properties - try: - adobe_transform = s[11] - except IndexError: - pass - else: - self.info["adobe_transform"] = adobe_transform - elif marker == 0xFFE2 and s[:4] == b"MPF\0": - # extract MPO information - self.info["mp"] = s[4:] - # offset is current location minus buffer size - # plus constant header size - self.info["mpoffset"] = self.fp.tell() - n + 4 - - # If DPI isn't in JPEG header, fetch from EXIF - if "dpi" not in self.info and "exif" in self.info: - try: - exif = self.getexif() - resolution_unit = exif[0x0128] - x_resolution = exif[0x011A] - try: - dpi = float(x_resolution[0]) / x_resolution[1] - except TypeError: - dpi = x_resolution - if math.isnan(dpi): - raise ValueError - if resolution_unit == 3: # cm - # 1 dpcm = 2.54 dpi - dpi *= 2.54 - self.info["dpi"] = dpi, dpi - except (TypeError, KeyError, SyntaxError, ValueError, ZeroDivisionError): - # SyntaxError for invalid/unreadable EXIF - # KeyError for dpi not included - # ZeroDivisionError for invalid dpi rational value - # ValueError or TypeError for dpi being an invalid float - self.info["dpi"] = 72, 72 - - -def COM(self, marker): - # - # Comment marker. Store these in the APP dictionary. - n = i16(self.fp.read(2)) - 2 - s = ImageFile._safe_read(self.fp, n) - - self.info["comment"] = s - self.app["COM"] = s # compatibility - self.applist.append(("COM", s)) - - -def SOF(self, marker): - # - # Start of frame marker. Defines the size and mode of the - # image. JPEG is colour blind, so we use some simple - # heuristics to map the number of layers to an appropriate - # mode. Note that this could be made a bit brighter, by - # looking for JFIF and Adobe APP markers. 
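Before the handler below starts indexing into the payload, it helps to restate the byte layout it relies on. A small sketch (not part of the plugin) that decodes the fixed-size head of an SOF segment with `struct`, using the same offsets as `s[0]`, `i16(s, 1)`, `i16(s, 3)` and `s[5]` below:

```python
import struct

def parse_sof_head(s: bytes):
    # SOF payload after the 2-byte length field: sample precision (1 byte),
    # image height (2 bytes, big-endian), image width (2 bytes),
    # number of components/layers (1 byte).
    precision, height, width, ncomponents = struct.unpack(">BHHB", s[:6])
    mode = {1: "L", 3: "RGB", 4: "CMYK"}.get(ncomponents, "unsupported")
    return (width, height), precision, mode
```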
- - n = i16(self.fp.read(2)) - 2 - s = ImageFile._safe_read(self.fp, n) - self._size = i16(s, 3), i16(s, 1) - - self.bits = s[0] - if self.bits != 8: - msg = f"cannot handle {self.bits}-bit layers" - raise SyntaxError(msg) - - self.layers = s[5] - if self.layers == 1: - self.mode = "L" - elif self.layers == 3: - self.mode = "RGB" - elif self.layers == 4: - self.mode = "CMYK" - else: - msg = f"cannot handle {self.layers}-layer images" - raise SyntaxError(msg) - - if marker in [0xFFC2, 0xFFC6, 0xFFCA, 0xFFCE]: - self.info["progressive"] = self.info["progression"] = 1 - - if self.icclist: - # fixup icc profile - self.icclist.sort() # sort by sequence number - if self.icclist[0][13] == len(self.icclist): - profile = [] - for p in self.icclist: - profile.append(p[14:]) - icc_profile = b"".join(profile) - else: - icc_profile = None # wrong number of fragments - self.info["icc_profile"] = icc_profile - self.icclist = [] - - for i in range(6, len(s), 3): - t = s[i : i + 3] - # 4-tuples: id, vsamp, hsamp, qtable - self.layer.append((t[0], t[1] // 16, t[1] & 15, t[2])) - - -def DQT(self, marker): - # - # Define quantization table. Note that there might be more - # than one table in each marker. - - # FIXME: The quantization tables can be used to estimate the - # compression quality. - - n = i16(self.fp.read(2)) - 2 - s = ImageFile._safe_read(self.fp, n) - while len(s): - v = s[0] - precision = 1 if (v // 16 == 0) else 2 # in bytes - qt_length = 1 + precision * 64 - if len(s) < qt_length: - msg = "bad quantization table marker" - raise SyntaxError(msg) - data = array.array("B" if precision == 1 else "H", s[1:qt_length]) - if sys.byteorder == "little" and precision > 1: - data.byteswap() # the values are always big-endian - self.quantization[v & 15] = [data[i] for i in zigzag_index] - s = s[qt_length:] - - -# -# JPEG marker table - -MARKER = { - 0xFFC0: ("SOF0", "Baseline DCT", SOF), - 0xFFC1: ("SOF1", "Extended Sequential DCT", SOF), - 0xFFC2: ("SOF2", "Progressive DCT", SOF), - 0xFFC3: ("SOF3", "Spatial lossless", SOF), - 0xFFC4: ("DHT", "Define Huffman table", Skip), - 0xFFC5: ("SOF5", "Differential sequential DCT", SOF), - 0xFFC6: ("SOF6", "Differential progressive DCT", SOF), - 0xFFC7: ("SOF7", "Differential spatial", SOF), - 0xFFC8: ("JPG", "Extension", None), - 0xFFC9: ("SOF9", "Extended sequential DCT (AC)", SOF), - 0xFFCA: ("SOF10", "Progressive DCT (AC)", SOF), - 0xFFCB: ("SOF11", "Spatial lossless DCT (AC)", SOF), - 0xFFCC: ("DAC", "Define arithmetic coding conditioning", Skip), - 0xFFCD: ("SOF13", "Differential sequential DCT (AC)", SOF), - 0xFFCE: ("SOF14", "Differential progressive DCT (AC)", SOF), - 0xFFCF: ("SOF15", "Differential spatial (AC)", SOF), - 0xFFD0: ("RST0", "Restart 0", None), - 0xFFD1: ("RST1", "Restart 1", None), - 0xFFD2: ("RST2", "Restart 2", None), - 0xFFD3: ("RST3", "Restart 3", None), - 0xFFD4: ("RST4", "Restart 4", None), - 0xFFD5: ("RST5", "Restart 5", None), - 0xFFD6: ("RST6", "Restart 6", None), - 0xFFD7: ("RST7", "Restart 7", None), - 0xFFD8: ("SOI", "Start of image", None), - 0xFFD9: ("EOI", "End of image", None), - 0xFFDA: ("SOS", "Start of scan", Skip), - 0xFFDB: ("DQT", "Define quantization table", DQT), - 0xFFDC: ("DNL", "Define number of lines", Skip), - 0xFFDD: ("DRI", "Define restart interval", Skip), - 0xFFDE: ("DHP", "Define hierarchical progression", SOF), - 0xFFDF: ("EXP", "Expand reference component", Skip), - 0xFFE0: ("APP0", "Application segment 0", APP), - 0xFFE1: ("APP1", "Application segment 1", APP), - 0xFFE2: ("APP2", "Application segment 2", 
APP), - 0xFFE3: ("APP3", "Application segment 3", APP), - 0xFFE4: ("APP4", "Application segment 4", APP), - 0xFFE5: ("APP5", "Application segment 5", APP), - 0xFFE6: ("APP6", "Application segment 6", APP), - 0xFFE7: ("APP7", "Application segment 7", APP), - 0xFFE8: ("APP8", "Application segment 8", APP), - 0xFFE9: ("APP9", "Application segment 9", APP), - 0xFFEA: ("APP10", "Application segment 10", APP), - 0xFFEB: ("APP11", "Application segment 11", APP), - 0xFFEC: ("APP12", "Application segment 12", APP), - 0xFFED: ("APP13", "Application segment 13", APP), - 0xFFEE: ("APP14", "Application segment 14", APP), - 0xFFEF: ("APP15", "Application segment 15", APP), - 0xFFF0: ("JPG0", "Extension 0", None), - 0xFFF1: ("JPG1", "Extension 1", None), - 0xFFF2: ("JPG2", "Extension 2", None), - 0xFFF3: ("JPG3", "Extension 3", None), - 0xFFF4: ("JPG4", "Extension 4", None), - 0xFFF5: ("JPG5", "Extension 5", None), - 0xFFF6: ("JPG6", "Extension 6", None), - 0xFFF7: ("JPG7", "Extension 7", None), - 0xFFF8: ("JPG8", "Extension 8", None), - 0xFFF9: ("JPG9", "Extension 9", None), - 0xFFFA: ("JPG10", "Extension 10", None), - 0xFFFB: ("JPG11", "Extension 11", None), - 0xFFFC: ("JPG12", "Extension 12", None), - 0xFFFD: ("JPG13", "Extension 13", None), - 0xFFFE: ("COM", "Comment", COM), -} - - -def _accept(prefix): - # Magic number was taken from https://en.wikipedia.org/wiki/JPEG - return prefix[:3] == b"\xFF\xD8\xFF" - - -## -# Image plugin for JPEG and JFIF images. - - -class JpegImageFile(ImageFile.ImageFile): - format = "JPEG" - format_description = "JPEG (ISO 10918)" - - def _open(self): - s = self.fp.read(3) - - if not _accept(s): - msg = "not a JPEG file" - raise SyntaxError(msg) - s = b"\xFF" - - # Create attributes - self.bits = self.layers = 0 - - # JPEG specifics (internal) - self.layer = [] - self.huffman_dc = {} - self.huffman_ac = {} - self.quantization = {} - self.app = {} # compatibility - self.applist = [] - self.icclist = [] - - while True: - i = s[0] - if i == 0xFF: - s = s + self.fp.read(1) - i = i16(s) - else: - # Skip non-0xFF junk - s = self.fp.read(1) - continue - - if i in MARKER: - name, description, handler = MARKER[i] - if handler is not None: - handler(self, i) - if i == 0xFFDA: # start of scan - rawmode = self.mode - if self.mode == "CMYK": - rawmode = "CMYK;I" # assume adobe conventions - self.tile = [("jpeg", (0, 0) + self.size, 0, (rawmode, ""))] - # self.__offset = self.fp.tell() - break - s = self.fp.read(1) - elif i == 0 or i == 0xFFFF: - # padded marker or junk; move on - s = b"\xff" - elif i == 0xFF00: # Skip extraneous data (escaped 0xFF) - s = self.fp.read(1) - else: - msg = "no marker found" - raise SyntaxError(msg) - - def load_read(self, read_bytes): - """ - internal: read more image data - For premature EOF and LOAD_TRUNCATED_IMAGES adds EOI marker - so libjpeg can finish decoding - """ - s = self.fp.read(read_bytes) - - if not s and ImageFile.LOAD_TRUNCATED_IMAGES and not hasattr(self, "_ended"): - # Premature EOF. 
- # Pretend file is finished adding EOI marker - self._ended = True - return b"\xFF\xD9" - - return s - - def draft(self, mode, size): - if len(self.tile) != 1: - return - - # Protect from second call - if self.decoderconfig: - return - - d, e, o, a = self.tile[0] - scale = 1 - original_size = self.size - - if a[0] == "RGB" and mode in ["L", "YCbCr"]: - self.mode = mode - a = mode, "" - - if size: - scale = min(self.size[0] // size[0], self.size[1] // size[1]) - for s in [8, 4, 2, 1]: - if scale >= s: - break - e = ( - e[0], - e[1], - (e[2] - e[0] + s - 1) // s + e[0], - (e[3] - e[1] + s - 1) // s + e[1], - ) - self._size = ((self.size[0] + s - 1) // s, (self.size[1] + s - 1) // s) - scale = s - - self.tile = [(d, e, o, a)] - self.decoderconfig = (scale, 0) - - box = (0, 0, original_size[0] / scale, original_size[1] / scale) - return self.mode, box - - def load_djpeg(self): - # ALTERNATIVE: handle JPEGs via the IJG command line utilities - - f, path = tempfile.mkstemp() - os.close(f) - if os.path.exists(self.filename): - subprocess.check_call(["djpeg", "-outfile", path, self.filename]) - else: - try: - os.unlink(path) - except OSError: - pass - - msg = "Invalid Filename" - raise ValueError(msg) - - try: - with Image.open(path) as _im: - _im.load() - self.im = _im.im - finally: - try: - os.unlink(path) - except OSError: - pass - - self.mode = self.im.mode - self._size = self.im.size - - self.tile = [] - - def _getexif(self): - return _getexif(self) - - def _getmp(self): - return _getmp(self) - - def getxmp(self): - """ - Returns a dictionary containing the XMP tags. - Requires defusedxml to be installed. - - :returns: XMP tags in a dictionary. - """ - - for segment, content in self.applist: - if segment == "APP1": - marker, xmp_tags = content.rsplit(b"\x00", 1) - if marker == b"http://ns.adobe.com/xap/1.0/": - return self._getxmp(xmp_tags) - return {} - - -def _getexif(self): - if "exif" not in self.info: - return None - return self.getexif()._get_merged_dict() - - -def _getmp(self): - # Extract MP information. This method was inspired by the "highly - # experimental" _getexif version that's been in use for years now, - # itself based on the ImageFileDirectory class in the TIFF plugin. - - # The MP record essentially consists of a TIFF file embedded in a JPEG - # application marker. - try: - data = self.info["mp"] - except KeyError: - return None - file_contents = io.BytesIO(data) - head = file_contents.read(8) - endianness = ">" if head[:4] == b"\x4d\x4d\x00\x2a" else "<" - # process dictionary - from . 
import TiffImagePlugin - - try: - info = TiffImagePlugin.ImageFileDirectory_v2(head) - file_contents.seek(info.next) - info.load(file_contents) - mp = dict(info) - except Exception as e: - msg = "malformed MP Index (unreadable directory)" - raise SyntaxError(msg) from e - # it's an error not to have a number of images - try: - quant = mp[0xB001] - except KeyError as e: - msg = "malformed MP Index (no number of images)" - raise SyntaxError(msg) from e - # get MP entries - mpentries = [] - try: - rawmpentries = mp[0xB002] - for entrynum in range(0, quant): - unpackedentry = struct.unpack_from( - f"{endianness}LLLHH", rawmpentries, entrynum * 16 - ) - labels = ("Attribute", "Size", "DataOffset", "EntryNo1", "EntryNo2") - mpentry = dict(zip(labels, unpackedentry)) - mpentryattr = { - "DependentParentImageFlag": bool(mpentry["Attribute"] & (1 << 31)), - "DependentChildImageFlag": bool(mpentry["Attribute"] & (1 << 30)), - "RepresentativeImageFlag": bool(mpentry["Attribute"] & (1 << 29)), - "Reserved": (mpentry["Attribute"] & (3 << 27)) >> 27, - "ImageDataFormat": (mpentry["Attribute"] & (7 << 24)) >> 24, - "MPType": mpentry["Attribute"] & 0x00FFFFFF, - } - if mpentryattr["ImageDataFormat"] == 0: - mpentryattr["ImageDataFormat"] = "JPEG" - else: - msg = "unsupported picture format in MPO" - raise SyntaxError(msg) - mptypemap = { - 0x000000: "Undefined", - 0x010001: "Large Thumbnail (VGA Equivalent)", - 0x010002: "Large Thumbnail (Full HD Equivalent)", - 0x020001: "Multi-Frame Image (Panorama)", - 0x020002: "Multi-Frame Image: (Disparity)", - 0x020003: "Multi-Frame Image: (Multi-Angle)", - 0x030000: "Baseline MP Primary Image", - } - mpentryattr["MPType"] = mptypemap.get(mpentryattr["MPType"], "Unknown") - mpentry["Attribute"] = mpentryattr - mpentries.append(mpentry) - mp[0xB002] = mpentries - except KeyError as e: - msg = "malformed MP Index (bad MP Entry)" - raise SyntaxError(msg) from e - # Next we should try and parse the individual image unique ID list; - # we don't because I've never seen this actually used in a real MPO - # file and so can't test it. - return mp - - -# -------------------------------------------------------------------- -# stuff to save JPEG files - -RAWMODE = { - "1": "L", - "L": "L", - "RGB": "RGB", - "RGBX": "RGB", - "CMYK": "CMYK;I", # assume adobe conventions - "YCbCr": "YCbCr", -} - -# fmt: off -zigzag_index = ( - 0, 1, 5, 6, 14, 15, 27, 28, - 2, 4, 7, 13, 16, 26, 29, 42, - 3, 8, 12, 17, 25, 30, 41, 43, - 9, 11, 18, 24, 31, 40, 44, 53, - 10, 19, 23, 32, 39, 45, 52, 54, - 20, 22, 33, 38, 46, 51, 55, 60, - 21, 34, 37, 47, 50, 56, 59, 61, - 35, 36, 48, 49, 57, 58, 62, 63, -) - -samplings = { - (1, 1, 1, 1, 1, 1): 0, - (2, 1, 1, 1, 1, 1): 1, - (2, 2, 1, 1, 1, 1): 2, -} -# fmt: on - - -def get_sampling(im): - # There's no subsampling when images have only 1 layer - # (grayscale images) or when they are CMYK (4 layers), - # so set subsampling to the default value. - # - # NOTE: currently Pillow can't encode JPEG to YCCK format. - # If YCCK support is added in the future, subsampling code will have - # to be updated (here and in JpegEncode.c) to deal with 4 layers. 
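The `samplings` table above is what ultimately backs the user-facing `subsampling` keyword of JPEG saving. A short usage sketch (file names are illustrative):

```python
from PIL import Image

# Re-encode a photo with explicit chroma subsampling; "4:2:0" corresponds to
# index 2 in the samplings table above. subsampling="keep" (valid only when
# the source is itself a JPEG) reuses the original file's sampling instead.
with Image.open("photo.jpg") as im:
    im.save("photo_420.jpg", quality=85, subsampling="4:2:0", optimize=True)
```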
- if not hasattr(im, "layers") or im.layers in (1, 4): - return -1 - sampling = im.layer[0][1:3] + im.layer[1][1:3] + im.layer[2][1:3] - return samplings.get(sampling, -1) - - -def _save(im, fp, filename): - if im.width == 0 or im.height == 0: - msg = "cannot write empty image as JPEG" - raise ValueError(msg) - - try: - rawmode = RAWMODE[im.mode] - except KeyError as e: - msg = f"cannot write mode {im.mode} as JPEG" - raise OSError(msg) from e - - info = im.encoderinfo - - dpi = [round(x) for x in info.get("dpi", (0, 0))] - - quality = info.get("quality", -1) - subsampling = info.get("subsampling", -1) - qtables = info.get("qtables") - - if quality == "keep": - quality = -1 - subsampling = "keep" - qtables = "keep" - elif quality in presets: - preset = presets[quality] - quality = -1 - subsampling = preset.get("subsampling", -1) - qtables = preset.get("quantization") - elif not isinstance(quality, int): - msg = "Invalid quality setting" - raise ValueError(msg) - else: - if subsampling in presets: - subsampling = presets[subsampling].get("subsampling", -1) - if isinstance(qtables, str) and qtables in presets: - qtables = presets[qtables].get("quantization") - - if subsampling == "4:4:4": - subsampling = 0 - elif subsampling == "4:2:2": - subsampling = 1 - elif subsampling == "4:2:0": - subsampling = 2 - elif subsampling == "4:1:1": - # For compatibility. Before Pillow 4.3, 4:1:1 actually meant 4:2:0. - # Set 4:2:0 if someone is still using that value. - subsampling = 2 - elif subsampling == "keep": - if im.format != "JPEG": - msg = "Cannot use 'keep' when original image is not a JPEG" - raise ValueError(msg) - subsampling = get_sampling(im) - - def validate_qtables(qtables): - if qtables is None: - return qtables - if isinstance(qtables, str): - try: - lines = [ - int(num) - for line in qtables.splitlines() - for num in line.split("#", 1)[0].split() - ] - except ValueError as e: - msg = "Invalid quantization table" - raise ValueError(msg) from e - else: - qtables = [lines[s : s + 64] for s in range(0, len(lines), 64)] - if isinstance(qtables, (tuple, list, dict)): - if isinstance(qtables, dict): - qtables = [ - qtables[key] for key in range(len(qtables)) if key in qtables - ] - elif isinstance(qtables, tuple): - qtables = list(qtables) - if not (0 < len(qtables) < 5): - msg = "None or too many quantization tables" - raise ValueError(msg) - for idx, table in enumerate(qtables): - try: - if len(table) != 64: - raise TypeError - table = array.array("H", table) - except TypeError as e: - msg = "Invalid quantization table" - raise ValueError(msg) from e - else: - qtables[idx] = list(table) - return qtables - - if qtables == "keep": - if im.format != "JPEG": - msg = "Cannot use 'keep' when original image is not a JPEG" - raise ValueError(msg) - qtables = getattr(im, "quantization", None) - qtables = validate_qtables(qtables) - - extra = info.get("extra", b"") - - MAX_BYTES_IN_MARKER = 65533 - icc_profile = info.get("icc_profile") - if icc_profile: - ICC_OVERHEAD_LEN = 14 - MAX_DATA_BYTES_IN_MARKER = MAX_BYTES_IN_MARKER - ICC_OVERHEAD_LEN - markers = [] - while icc_profile: - markers.append(icc_profile[:MAX_DATA_BYTES_IN_MARKER]) - icc_profile = icc_profile[MAX_DATA_BYTES_IN_MARKER:] - i = 1 - for marker in markers: - size = o16(2 + ICC_OVERHEAD_LEN + len(marker)) - extra += ( - b"\xFF\xE2" - + size - + b"ICC_PROFILE\0" - + o8(i) - + o8(len(markers)) - + marker - ) - i += 1 - - comment = info.get("comment", im.info.get("comment")) - - # "progressive" is the official name, but older documentation - 
# says "progression" - # FIXME: issue a warning if the wrong form is used (post-1.1.7) - progressive = info.get("progressive", False) or info.get("progression", False) - - optimize = info.get("optimize", False) - - exif = info.get("exif", b"") - if isinstance(exif, Image.Exif): - exif = exif.tobytes() - if len(exif) > MAX_BYTES_IN_MARKER: - msg = "EXIF data is too long" - raise ValueError(msg) - - # get keyword arguments - im.encoderconfig = ( - quality, - progressive, - info.get("smooth", 0), - optimize, - info.get("streamtype", 0), - dpi[0], - dpi[1], - subsampling, - qtables, - comment, - extra, - exif, - ) - - # if we optimize, libjpeg needs a buffer big enough to hold the whole image - # in a shot. Guessing on the size, at im.size bytes. (raw pixel size is - # channels*size, this is a value that's been used in a django patch. - # https://github.com/matthewwithanm/django-imagekit/issues/50 - bufsize = 0 - if optimize or progressive: - # CMYK can be bigger - if im.mode == "CMYK": - bufsize = 4 * im.size[0] * im.size[1] - # keep sets quality to -1, but the actual value may be high. - elif quality >= 95 or quality == -1: - bufsize = 2 * im.size[0] * im.size[1] - else: - bufsize = im.size[0] * im.size[1] - - # The EXIF info needs to be written as one block, + APP1, + one spare byte. - # Ensure that our buffer is big enough. Same with the icc_profile block. - bufsize = max(ImageFile.MAXBLOCK, bufsize, len(exif) + 5, len(extra) + 1) - - ImageFile._save(im, fp, [("jpeg", (0, 0) + im.size, 0, rawmode)], bufsize) - - -def _save_cjpeg(im, fp, filename): - # ALTERNATIVE: handle JPEGs via the IJG command line utilities. - tempfile = im._dump() - subprocess.check_call(["cjpeg", "-outfile", filename, tempfile]) - try: - os.unlink(tempfile) - except OSError: - pass - - -## -# Factory for making JPEG and MPO instances -def jpeg_factory(fp=None, filename=None): - im = JpegImageFile(fp, filename) - try: - mpheader = im._getmp() - if mpheader[45057] > 1: - # It's actually an MPO - from .MpoImagePlugin import MpoImageFile - - # Don't reload everything, just convert it. - im = MpoImageFile.adopt(im, mpheader) - except (TypeError, IndexError): - # It is really a JPEG - pass - except SyntaxError: - warnings.warn( - "Image appears to be a malformed MPO file, it will be " - "interpreted as a base JPEG file" - ) - return im - - -# --------------------------------------------------------------------- -# Registry stuff - -Image.register_open(JpegImageFile.format, jpeg_factory, _accept) -Image.register_save(JpegImageFile.format, _save) - -Image.register_extensions(JpegImageFile.format, [".jfif", ".jpe", ".jpg", ".jpeg"]) - -Image.register_mime(JpegImageFile.format, "image/jpeg") diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/evaluation/tensor_storage.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/evaluation/tensor_storage.py deleted file mode 100644 index 72e3cb64caf91c684607a5fd7cb696b267c21e16..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/evaluation/tensor_storage.py +++ /dev/null @@ -1,238 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
- -import io -import numpy as np -import os -from dataclasses import dataclass -from functools import reduce -from operator import mul -from typing import BinaryIO, Dict, Optional, Tuple -import torch - -from detectron2.utils.comm import gather, get_rank -from detectron2.utils.file_io import PathManager - - -@dataclass -class SizeData: - dtype: str - shape: Tuple[int] - - -def _calculate_record_field_size_b(data_schema: Dict[str, SizeData], field_name: str) -> int: - schema = data_schema[field_name] - element_size_b = np.dtype(schema.dtype).itemsize - record_field_size_b = reduce(mul, schema.shape) * element_size_b - return record_field_size_b - - -def _calculate_record_size_b(data_schema: Dict[str, SizeData]) -> int: - record_size_b = 0 - for field_name in data_schema: - record_field_size_b = _calculate_record_field_size_b(data_schema, field_name) - record_size_b += record_field_size_b - return record_size_b - - -def _calculate_record_field_sizes_b(data_schema: Dict[str, SizeData]) -> Dict[str, int]: - field_sizes_b = {} - for field_name in data_schema: - field_sizes_b[field_name] = _calculate_record_field_size_b(data_schema, field_name) - return field_sizes_b - - -class SingleProcessTensorStorage: - """ - Compact tensor storage to keep tensor data of predefined size and type. - """ - - def __init__(self, data_schema: Dict[str, SizeData], storage_impl: BinaryIO): - """ - Construct tensor storage based on information on data shape and size. - Internally uses numpy to interpret the type specification. - The storage must support operations `seek(offset, whence=os.SEEK_SET)` and - `read(size)` to be able to perform the `get` operation. - The storage must support operation `write(bytes)` to be able to perform - the `put` operation. - - Args: - data_schema (dict: str -> SizeData): dictionary which maps tensor name - to its size data (shape and data type), e.g. - ``` - { - "coarse_segm": SizeData(dtype="float32", shape=(112, 112)), - "embedding": SizeData(dtype="float32", shape=(16, 112, 112)), - } - ``` - storage_impl (BinaryIO): io instance that handles file-like seek, read - and write operations, e.g. 
a file handle or a memory buffer like io.BytesIO - """ - self.data_schema = data_schema - self.record_size_b = _calculate_record_size_b(data_schema) - self.record_field_sizes_b = _calculate_record_field_sizes_b(data_schema) - self.storage_impl = storage_impl - self.next_record_id = 0 - - def get(self, record_id: int) -> Dict[str, torch.Tensor]: - """ - Load tensors from the storage by record ID - - Args: - record_id (int): Record ID, for which to load the data - - Return: - dict: str -> tensor: tensor name mapped to tensor data, recorded under the provided ID - """ - self.storage_impl.seek(record_id * self.record_size_b, os.SEEK_SET) - data_bytes = self.storage_impl.read(self.record_size_b) - assert len(data_bytes) == self.record_size_b, ( - f"Expected data size {self.record_size_b} B could not be read: " - f"got {len(data_bytes)} B" - ) - record = {} - cur_idx = 0 - # it's important to read and write in the same order - for field_name in sorted(self.data_schema): - schema = self.data_schema[field_name] - field_size_b = self.record_field_sizes_b[field_name] - chunk = data_bytes[cur_idx : cur_idx + field_size_b] - data_np = np.frombuffer( - chunk, dtype=schema.dtype, count=reduce(mul, schema.shape) - ).reshape(schema.shape) - record[field_name] = torch.from_numpy(data_np) - cur_idx += field_size_b - return record - - def put(self, data: Dict[str, torch.Tensor]) -> int: - """ - Store tensors in the storage - - Args: - data (dict: str -> tensor): data to store, a dictionary which maps - tensor names into tensors; tensor shapes must match those specified - in data schema. - Return: - int: record ID, under which the data is stored - """ - # it's important to read and write in the same order - for field_name in sorted(self.data_schema): - assert ( - field_name in data - ), f"Field '{field_name}' not present in data: data keys are {data.keys()}" - value = data[field_name] - assert value.shape == self.data_schema[field_name].shape, ( - f"Mismatched tensor shapes for field '{field_name}': " - f"expected {self.data_schema[field_name].shape}, got {value.shape}" - ) - data_bytes = value.cpu().numpy().tobytes() - assert len(data_bytes) == self.record_field_sizes_b[field_name], ( - f"Expected field {field_name} to be of size " - f"{self.record_field_sizes_b[field_name]} B, got {len(data_bytes)} B" - ) - self.storage_impl.write(data_bytes) - record_id = self.next_record_id - self.next_record_id += 1 - return record_id - - -class SingleProcessFileTensorStorage(SingleProcessTensorStorage): - """ - Implementation of a single process tensor storage which stores data in a file - """ - - def __init__(self, data_schema: Dict[str, SizeData], fpath: str, mode: str): - self.fpath = fpath - assert "b" in mode, f"Tensor storage should be opened in binary mode, got '{mode}'" - if "w" in mode: - file_h = PathManager.open(fpath, mode) - elif "r" in mode: - local_fpath = PathManager.get_local_path(fpath) - file_h = open(local_fpath, mode) - else: - raise ValueError(f"Unsupported file mode {mode}, supported modes: rb, wb") - super().__init__(data_schema, file_h) # pyre-ignore[6] - - -class SingleProcessRamTensorStorage(SingleProcessTensorStorage): - """ - Implementation of a single process tensor storage which stores data in RAM - """ - - def __init__(self, data_schema: Dict[str, SizeData], buf: io.BytesIO): - super().__init__(data_schema, buf) - - -class MultiProcessTensorStorage: - """ - Representation of a set of tensor storages created by individual processes, - allows to access those storages from a single owner 
process. The storages - should either be shared or broadcasted to the owner process. - The processes are identified by their rank, data is uniquely defined by - the rank of the process and the record ID. - """ - - def __init__(self, rank_to_storage: Dict[int, SingleProcessTensorStorage]): - self.rank_to_storage = rank_to_storage - - def get(self, rank: int, record_id: int) -> Dict[str, torch.Tensor]: - storage = self.rank_to_storage[rank] - return storage.get(record_id) - - def put(self, rank: int, data: Dict[str, torch.Tensor]) -> int: - storage = self.rank_to_storage[rank] - return storage.put(data) - - -class MultiProcessFileTensorStorage(MultiProcessTensorStorage): - def __init__(self, data_schema: Dict[str, SizeData], rank_to_fpath: Dict[int, str], mode: str): - rank_to_storage = { - rank: SingleProcessFileTensorStorage(data_schema, fpath, mode) - for rank, fpath in rank_to_fpath.items() - } - super().__init__(rank_to_storage) # pyre-ignore[6] - - -class MultiProcessRamTensorStorage(MultiProcessTensorStorage): - def __init__(self, data_schema: Dict[str, SizeData], rank_to_buffer: Dict[int, io.BytesIO]): - rank_to_storage = { - rank: SingleProcessRamTensorStorage(data_schema, buf) - for rank, buf in rank_to_buffer.items() - } - super().__init__(rank_to_storage) # pyre-ignore[6] - - -def _ram_storage_gather( - storage: SingleProcessRamTensorStorage, dst_rank: int = 0 -) -> Optional[MultiProcessRamTensorStorage]: - storage.storage_impl.seek(0, os.SEEK_SET) - # TODO: overhead, pickling a bytes object, can just pass bytes in a tensor directly - # see detectron2/utils.comm.py - data_list = gather(storage.storage_impl.read(), dst=dst_rank) - if get_rank() != dst_rank: - return None - rank_to_buffer = {i: io.BytesIO(data_list[i]) for i in range(len(data_list))} - multiprocess_storage = MultiProcessRamTensorStorage(storage.data_schema, rank_to_buffer) - return multiprocess_storage - - -def _file_storage_gather( - storage: SingleProcessFileTensorStorage, - dst_rank: int = 0, - mode: str = "rb", -) -> Optional[MultiProcessFileTensorStorage]: - storage.storage_impl.close() - fpath_list = gather(storage.fpath, dst=dst_rank) - if get_rank() != dst_rank: - return None - rank_to_fpath = {i: fpath_list[i] for i in range(len(fpath_list))} - return MultiProcessFileTensorStorage(storage.data_schema, rank_to_fpath, mode) - - -def storage_gather( - storage: SingleProcessTensorStorage, dst_rank: int = 0 -) -> Optional[MultiProcessTensorStorage]: - if isinstance(storage, SingleProcessRamTensorStorage): - return _ram_storage_gather(storage, dst_rank) - elif isinstance(storage, SingleProcessFileTensorStorage): - return _file_storage_gather(storage, dst_rank) - raise Exception(f"Unsupported storage for gather operation: {storage}") diff --git a/spaces/chansung/LLM-As-Chatbot/models/baize.py b/spaces/chansung/LLM-As-Chatbot/models/baize.py deleted file mode 100644 index dbc64001c67787b84e1e7ea7b3ab07f2f36ca82c..0000000000000000000000000000000000000000 --- a/spaces/chansung/LLM-As-Chatbot/models/baize.py +++ /dev/null @@ -1,91 +0,0 @@ -import torch -from peft import PeftModel -from transformers import LlamaTokenizer, LlamaForCausalLM -from optimum.bettertransformer import BetterTransformer - -def load_model( - base, - finetuned, - mode_cpu, - mode_mps, - mode_full_gpu, - mode_8bit, - mode_4bit, - force_download_ckpt -): - tokenizer = LlamaTokenizer.from_pretrained(base) - tokenizer.pad_token_id = 0 - tokenizer.padding_side = "left" - - if mode_cpu: - print("cpu mode") - model = LlamaForCausalLM.from_pretrained( - 
base, - device_map={"": "cpu"}, - use_safetensors=False - ) - - if finetuned is not None and \ - finetuned != "" and \ - finetuned != "N/A": - - model = PeftModel.from_pretrained( - model, - finetuned, - device_map={"": "cpu"} - # force_download=force_download_ckpt, - ) - else: - model = BetterTransformer.transform(model) - - elif mode_mps: - print("mps mode") - model = LlamaForCausalLM.from_pretrained( - base, - device_map={"": "mps"}, - torch_dtype=torch.float16, - use_safetensors=False - ) - - if finetuned is not None and \ - finetuned != "" and \ - finetuned != "N/A": - - model = PeftModel.from_pretrained( - model, - finetuned, - torch_dtype=torch.float16, - device_map={"": "mps"} - # force_download=force_download_ckpt, - ) - else: - model = BetterTransformer.transform(model) - - else: - print("gpu mode") - print(f"8bit = {mode_8bit}, 4bit = {mode_4bit}") - model = LlamaForCausalLM.from_pretrained( - base, - load_in_8bit=mode_8bit, - # load_in_4bit=mode_4bit, - torch_dtype=torch.float16, - device_map="auto", - use_safetensors=False - ) - - if not mode_8bit and not mode_4bit: - model.half() - - if finetuned is not None and \ - finetuned != "" and \ - finetuned != "N/A": - - model = PeftModel.from_pretrained( - model, - finetuned, - # force_download=force_download_ckpt, - ) - else: - model = BetterTransformer.transform(model) - - return model, tokenizer \ No newline at end of file diff --git a/spaces/chasemcdo/hf_localai/README.md b/spaces/chasemcdo/hf_localai/README.md deleted file mode 100644 index 992a304fa26c834bbe84b6263998bbd8050f1cb7..0000000000000000000000000000000000000000 --- a/spaces/chasemcdo/hf_localai/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Localai -emoji: 🏃 -colorFrom: green -colorTo: pink -sdk: docker -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/chendl/compositional_test/transformers/examples/flax/language-modeling/run_clm_flax.py b/spaces/chendl/compositional_test/transformers/examples/flax/language-modeling/run_clm_flax.py deleted file mode 100644 index 952419dc965656d91afa6c27f0103063e3e609b2..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/examples/flax/language-modeling/run_clm_flax.py +++ /dev/null @@ -1,841 +0,0 @@ -#!/usr/bin/env python -# coding=utf-8 -# Copyright 2021 The HuggingFace Team All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" -Pre-training/Fine-tuning the library models for causal language modeling (GPT, GPT-2, CTRL, ...) on a text file or a dataset. - -Here is the full list of checkpoints on the hub that can be fine-tuned by this script: -https://huggingface.co/models?filter=text-generation -""" -# You can also adapt this script on your own causal language modeling task. Pointers for this are left as comments. 
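# Editorial note (illustrative, not part of the original script): a typical invocation
# built only from flags defined by the dataclasses below might look like:
#
#   python run_clm_flax.py \
#       --model_name_or_path gpt2 \
#       --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \
#       --do_train --do_eval \
#       --per_device_train_batch_size 8 \
#       --num_train_epochs 3 \
#       --output_dir ./clm-flax-output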
- -import json -import logging -import math -import os -import sys -import time -from dataclasses import asdict, dataclass, field -from enum import Enum -from itertools import chain -from pathlib import Path -from typing import Callable, Optional - -import datasets -import jax -import jax.numpy as jnp -import numpy as np -import optax -from datasets import Dataset, load_dataset -from flax import jax_utils, traverse_util -from flax.jax_utils import pad_shard_unpad, unreplicate -from flax.training import train_state -from flax.training.common_utils import get_metrics, onehot, shard, shard_prng_key -from huggingface_hub import Repository, create_repo -from tqdm import tqdm - -import transformers -from transformers import ( - CONFIG_MAPPING, - FLAX_MODEL_FOR_CAUSAL_LM_MAPPING, - AutoConfig, - AutoTokenizer, - FlaxAutoModelForCausalLM, - HfArgumentParser, - is_tensorboard_available, - set_seed, -) -from transformers.testing_utils import CaptureLogger -from transformers.utils import get_full_repo_name, send_example_telemetry - - -logger = logging.getLogger(__name__) - -MODEL_CONFIG_CLASSES = list(FLAX_MODEL_FOR_CAUSAL_LM_MAPPING.keys()) -MODEL_TYPES = tuple(conf.model_type for conf in MODEL_CONFIG_CLASSES) - - -@dataclass -class TrainingArguments: - output_dir: str = field( - metadata={"help": "The output directory where the model predictions and checkpoints will be written."}, - ) - overwrite_output_dir: bool = field( - default=False, - metadata={ - "help": ( - "Overwrite the content of the output directory. " - "Use this to continue training if output_dir points to a checkpoint directory." - ) - }, - ) - do_train: bool = field(default=False, metadata={"help": "Whether to run training."}) - do_eval: bool = field(default=False, metadata={"help": "Whether to run eval on the dev set."}) - per_device_train_batch_size: int = field( - default=8, metadata={"help": "Batch size per GPU/TPU core/CPU for training."} - ) - per_device_eval_batch_size: int = field( - default=8, metadata={"help": "Batch size per GPU/TPU core/CPU for evaluation."} - ) - learning_rate: float = field(default=5e-5, metadata={"help": "The initial learning rate for AdamW."}) - weight_decay: float = field(default=0.0, metadata={"help": "Weight decay for AdamW if we apply some."}) - adam_beta1: float = field(default=0.9, metadata={"help": "Beta1 for AdamW optimizer"}) - adam_beta2: float = field(default=0.999, metadata={"help": "Beta2 for AdamW optimizer"}) - adam_epsilon: float = field(default=1e-8, metadata={"help": "Epsilon for AdamW optimizer."}) - adafactor: bool = field(default=False, metadata={"help": "Whether or not to replace AdamW by Adafactor."}) - num_train_epochs: float = field(default=3.0, metadata={"help": "Total number of training epochs to perform."}) - warmup_steps: int = field(default=0, metadata={"help": "Linear warmup over warmup_steps."}) - logging_steps: int = field(default=500, metadata={"help": "Log every X updates steps."}) - save_steps: int = field(default=500, metadata={"help": "Save checkpoint every X updates steps."}) - eval_steps: int = field(default=None, metadata={"help": "Run an evaluation every X steps."}) - seed: int = field(default=42, metadata={"help": "Random seed that will be set at the beginning of training."}) - push_to_hub: bool = field( - default=False, metadata={"help": "Whether or not to upload the trained model to the model hub after training."} - ) - hub_model_id: str = field( - default=None, metadata={"help": "The name of the repository to keep in sync with the local `output_dir`."} - 
) - hub_token: str = field(default=None, metadata={"help": "The token to use to push to the Model Hub."}) - - def __post_init__(self): - if self.output_dir is not None: - self.output_dir = os.path.expanduser(self.output_dir) - - def to_dict(self): - """ - Serializes this instance while replace `Enum` by their values (for JSON serialization support). It obfuscates - the token values by removing their value. - """ - d = asdict(self) - for k, v in d.items(): - if isinstance(v, Enum): - d[k] = v.value - if isinstance(v, list) and len(v) > 0 and isinstance(v[0], Enum): - d[k] = [x.value for x in v] - if k.endswith("_token"): - d[k] = f"<{k.upper()}>" - return d - - -@dataclass -class ModelArguments: - """ - Arguments pertaining to which model/config/tokenizer we are going to fine-tune, or train from scratch. - """ - - model_name_or_path: Optional[str] = field( - default=None, - metadata={ - "help": ( - "The model checkpoint for weights initialization.Don't set if you want to train a model from scratch." - ) - }, - ) - model_type: Optional[str] = field( - default=None, - metadata={"help": "If training from scratch, pass a model type from the list: " + ", ".join(MODEL_TYPES)}, - ) - config_name: Optional[str] = field( - default=None, metadata={"help": "Pretrained config name or path if not the same as model_name"} - ) - tokenizer_name: Optional[str] = field( - default=None, metadata={"help": "Pretrained tokenizer name or path if not the same as model_name"} - ) - cache_dir: Optional[str] = field( - default=None, metadata={"help": "Where do you want to store the pretrained models downloaded from s3"} - ) - use_fast_tokenizer: bool = field( - default=True, - metadata={"help": "Whether to use one of the fast tokenizer (backed by the tokenizers library) or not."}, - ) - dtype: Optional[str] = field( - default="float32", - metadata={ - "help": ( - "Floating-point format in which the model weights should be initialized and trained. Choose one of" - " `[float32, float16, bfloat16]`." - ) - }, - ) - use_auth_token: bool = field( - default=False, - metadata={ - "help": ( - "Will use the token generated when running `huggingface-cli login` (necessary to use this script " - "with private models)." - ) - }, - ) - - -@dataclass -class DataTrainingArguments: - """ - Arguments pertaining to what data we are going to input our model for training and eval. - """ - - dataset_name: Optional[str] = field( - default=None, metadata={"help": "The name of the dataset to use (via the datasets library)."} - ) - dataset_config_name: Optional[str] = field( - default=None, metadata={"help": "The configuration name of the dataset to use (via the datasets library)."} - ) - train_file: Optional[str] = field(default=None, metadata={"help": "The input training data file (a text file)."}) - validation_file: Optional[str] = field( - default=None, - metadata={"help": "An optional input evaluation data file to evaluate the perplexity on (a text file)."}, - ) - max_train_samples: Optional[int] = field( - default=None, - metadata={ - "help": ( - "For debugging purposes or quicker training, truncate the number of training examples to this " - "value if set." - ) - }, - ) - max_eval_samples: Optional[int] = field( - default=None, - metadata={ - "help": ( - "For debugging purposes or quicker training, truncate the number of evaluation examples to this " - "value if set." 
- ) - }, - ) - overwrite_cache: bool = field( - default=False, metadata={"help": "Overwrite the cached training and evaluation sets"} - ) - validation_split_percentage: Optional[int] = field( - default=5, - metadata={ - "help": "The percentage of the train set used as validation set in case there's no validation split" - }, - ) - block_size: Optional[int] = field( - default=None, - metadata={ - "help": ( - "Optional input sequence length after tokenization. " - "The training dataset will be truncated in block of this size for training. " - "Default to the model max input length for single sentence inputs (take into account special tokens)." - ) - }, - ) - overwrite_cache: bool = field( - default=False, metadata={"help": "Overwrite the cached training and evaluation sets"} - ) - preprocessing_num_workers: Optional[int] = field( - default=None, - metadata={"help": "The number of processes to use for the preprocessing."}, - ) - keep_linebreaks: bool = field( - default=True, metadata={"help": "Whether to keep line breaks when using TXT files or not."} - ) - - def __post_init__(self): - if self.dataset_name is None and self.train_file is None and self.validation_file is None: - raise ValueError("Need either a dataset name or a training/validation file.") - else: - if self.train_file is not None: - extension = self.train_file.split(".")[-1] - assert extension in ["csv", "json", "txt"], "`train_file` should be a csv, a json or a txt file." - if self.validation_file is not None: - extension = self.validation_file.split(".")[-1] - assert extension in ["csv", "json", "txt"], "`validation_file` should be a csv, a json or a txt file." - - -class TrainState(train_state.TrainState): - dropout_rng: jnp.ndarray - - def replicate(self): - return jax_utils.replicate(self).replace(dropout_rng=shard_prng_key(self.dropout_rng)) - - -def data_loader(rng: jax.random.PRNGKey, dataset: Dataset, batch_size: int, shuffle: bool = False, drop_last=True): - """ - Returns batches of size `batch_size` from `dataset`. If `drop_last` is set to `False`, the final batch may be incomplete, - and range in size from 1 to `batch_size`. Shuffle batches if `shuffle` is `True`. - """ - if shuffle: - batch_idx = jax.random.permutation(rng, len(dataset)) - batch_idx = np.asarray(batch_idx) - else: - batch_idx = np.arange(len(dataset)) - - if drop_last: - steps_per_epoch = len(dataset) // batch_size - batch_idx = batch_idx[: steps_per_epoch * batch_size] # Skip incomplete batch. 
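        # Illustrative example (editorial note): with len(dataset) == 10 and batch_size == 4,
        # steps_per_epoch == 2, so only the first 8 shuffled indices are kept and are then
        # reshaped to (steps_per_epoch, batch_size) == (2, 4) by the line below.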
- batch_idx = batch_idx.reshape((steps_per_epoch, batch_size)) - else: - steps_per_epoch = math.ceil(len(dataset) / batch_size) - batch_idx = np.array_split(batch_idx, steps_per_epoch) - - for idx in batch_idx: - batch = dataset[idx] - batch = {k: np.array(v) for k, v in batch.items()} - - yield batch - - -def write_train_metric(summary_writer, train_metrics, train_time, step): - summary_writer.scalar("train_time", train_time, step) - - train_metrics = get_metrics(train_metrics) - for key, vals in train_metrics.items(): - tag = f"train_{key}" - for i, val in enumerate(vals): - summary_writer.scalar(tag, val, step - len(vals) + i + 1) - - -def write_eval_metric(summary_writer, eval_metrics, step): - for metric_name, value in eval_metrics.items(): - summary_writer.scalar(f"eval_{metric_name}", value, step) - - -def create_learning_rate_fn( - train_ds_size: int, train_batch_size: int, num_train_epochs: int, num_warmup_steps: int, learning_rate: float -) -> Callable[[int], jnp.array]: - """Returns a linear warmup, linear_decay learning rate function.""" - steps_per_epoch = train_ds_size // train_batch_size - num_train_steps = steps_per_epoch * num_train_epochs - warmup_fn = optax.linear_schedule(init_value=0.0, end_value=learning_rate, transition_steps=num_warmup_steps) - decay_fn = optax.linear_schedule( - init_value=learning_rate, end_value=0, transition_steps=num_train_steps - num_warmup_steps - ) - schedule_fn = optax.join_schedules(schedules=[warmup_fn, decay_fn], boundaries=[num_warmup_steps]) - return schedule_fn - - -def main(): - # See all possible arguments in src/transformers/training_args.py - # or by passing the --help flag to this script. - # We now keep distinct sets of args, for a cleaner separation of concerns. - - parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments)) - if len(sys.argv) == 2 and sys.argv[1].endswith(".json"): - # If we pass only one argument to the script and it's the path to a json file, - # let's parse it to get our arguments. - model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1])) - else: - model_args, data_args, training_args = parser.parse_args_into_dataclasses() - - # Sending telemetry. Tracking the example usage helps us better allocate resources to maintain them. The - # information sent is the one passed as arguments along with your Python/PyTorch versions. - send_example_telemetry("run_clm", model_args, data_args, framework="flax") - - if ( - os.path.exists(training_args.output_dir) - and os.listdir(training_args.output_dir) - and training_args.do_train - and not training_args.overwrite_output_dir - ): - raise ValueError( - f"Output directory ({training_args.output_dir}) already exists and is not empty." - "Use --overwrite_output_dir to overcome." - ) - - # Make one log on every process with the configuration for debugging. - logging.basicConfig( - format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", - datefmt="%m/%d/%Y %H:%M:%S", - level=logging.INFO, - ) - # Setup logging, we only want one process per machine to log things on the screen. 
- logger.setLevel(logging.INFO if jax.process_index() == 0 else logging.ERROR) - if jax.process_index() == 0: - datasets.utils.logging.set_verbosity_warning() - transformers.utils.logging.set_verbosity_info() - else: - datasets.utils.logging.set_verbosity_error() - transformers.utils.logging.set_verbosity_error() - - # Set the verbosity to info of the Transformers logger (on main process only): - logger.info(f"Training/evaluation parameters {training_args}") - - # Set seed before initializing model. - set_seed(training_args.seed) - - # Handle the repository creation - if training_args.push_to_hub: - if training_args.hub_model_id is None: - repo_name = get_full_repo_name( - Path(training_args.output_dir).absolute().name, token=training_args.hub_token - ) - else: - repo_name = training_args.hub_model_id - create_repo(repo_name, exist_ok=True, token=training_args.hub_token) - repo = Repository(training_args.output_dir, clone_from=repo_name, token=training_args.hub_token) - - # Get the datasets: you can either provide your own CSV/JSON/TXT training and evaluation files (see below) - # or just provide the name of one of the public datasets available on the hub at https://huggingface.co/datasets/ - # (the dataset will be downloaded automatically from the datasets Hub). - # - # For CSV/JSON files, this script will use the column called 'text' or the first column if no column called - # 'text' is found. You can easily tweak this behavior (see below). - # - # In distributed training, the load_dataset function guarantees that only one local process can concurrently - # download the dataset. - if data_args.dataset_name is not None: - # Downloading and loading a dataset from the hub. - dataset = load_dataset( - data_args.dataset_name, - data_args.dataset_config_name, - cache_dir=model_args.cache_dir, - keep_in_memory=False, - use_auth_token=True if model_args.use_auth_token else None, - ) - - if "validation" not in dataset.keys(): - dataset["validation"] = load_dataset( - data_args.dataset_name, - data_args.dataset_config_name, - split=f"train[:{data_args.validation_split_percentage}%]", - cache_dir=model_args.cache_dir, - use_auth_token=True if model_args.use_auth_token else None, - ) - dataset["train"] = load_dataset( - data_args.dataset_name, - data_args.dataset_config_name, - split=f"train[{data_args.validation_split_percentage}%:]", - cache_dir=model_args.cache_dir, - use_auth_token=True if model_args.use_auth_token else None, - ) - else: - data_files = {} - dataset_args = {} - if data_args.train_file is not None: - data_files["train"] = data_args.train_file - if data_args.validation_file is not None: - data_files["validation"] = data_args.validation_file - extension = data_args.train_file.split(".")[-1] - if extension == "txt": - extension = "text" - dataset_args["keep_linebreaks"] = data_args.keep_linebreaks - dataset = load_dataset( - extension, - data_files=data_files, - cache_dir=model_args.cache_dir, - **dataset_args, - use_auth_token=True if model_args.use_auth_token else None, - ) - - if "validation" not in dataset.keys(): - dataset["validation"] = load_dataset( - extension, - data_files=data_files, - split=f"train[:{data_args.validation_split_percentage}%]", - cache_dir=model_args.cache_dir, - **dataset_args, - use_auth_token=True if model_args.use_auth_token else None, - ) - dataset["train"] = load_dataset( - extension, - data_files=data_files, - split=f"train[{data_args.validation_split_percentage}%:]", - cache_dir=model_args.cache_dir, - **dataset_args, - use_auth_token=True if 
model_args.use_auth_token else None, - ) - # See more about loading any type of standard or custom dataset (from files, python dict, pandas DataFrame, etc) at - # https://huggingface.co/docs/datasets/loading_datasets.html. - - # Load pretrained model and tokenizer - - # Distributed training: - # The .from_pretrained methods guarantee that only one local process can concurrently - # download model & vocab. - if model_args.config_name: - config = AutoConfig.from_pretrained( - model_args.config_name, - cache_dir=model_args.cache_dir, - use_auth_token=True if model_args.use_auth_token else None, - ) - elif model_args.model_name_or_path: - config = AutoConfig.from_pretrained( - model_args.model_name_or_path, - cache_dir=model_args.cache_dir, - use_auth_token=True if model_args.use_auth_token else None, - ) - else: - config = CONFIG_MAPPING[model_args.model_type]() - logger.warning("You are instantiating a new config instance from scratch.") - - if model_args.tokenizer_name: - tokenizer = AutoTokenizer.from_pretrained( - model_args.tokenizer_name, - cache_dir=model_args.cache_dir, - use_fast=model_args.use_fast_tokenizer, - use_auth_token=True if model_args.use_auth_token else None, - ) - elif model_args.model_name_or_path: - tokenizer = AutoTokenizer.from_pretrained( - model_args.model_name_or_path, - cache_dir=model_args.cache_dir, - use_fast=model_args.use_fast_tokenizer, - use_auth_token=True if model_args.use_auth_token else None, - ) - else: - raise ValueError( - "You are instantiating a new tokenizer from scratch. This is not supported by this script." - "You can do it from another script, save it, and load it from here, using --tokenizer_name." - ) - - if model_args.model_name_or_path: - model = FlaxAutoModelForCausalLM.from_pretrained( - model_args.model_name_or_path, - config=config, - seed=training_args.seed, - dtype=getattr(jnp, model_args.dtype), - use_auth_token=True if model_args.use_auth_token else None, - ) - else: - model = FlaxAutoModelForCausalLM.from_config( - config, - seed=training_args.seed, - dtype=getattr(jnp, model_args.dtype), - ) - - # Preprocessing the datasets. - # First we tokenize all the texts. - if training_args.do_train: - column_names = dataset["train"].column_names - else: - column_names = dataset["validation"].column_names - text_column_name = "text" if "text" in column_names else column_names[0] - - # since this will be pickled to avoid _LazyModule error in Hasher force logger loading before tokenize_function - tok_logger = transformers.utils.logging.get_logger("transformers.tokenization_utils_base") - - def tokenize_function(examples): - with CaptureLogger(tok_logger) as cl: - output = tokenizer(examples[text_column_name]) - # clm input could be much much longer than block_size - if "Token indices sequence length is longer than the" in cl.out: - tok_logger.warning( - "^^^^^^^^^^^^^^^^ Please ignore the warning above - this long input will be chunked into smaller bits" - " before being passed to the model." - ) - return output - - tokenized_datasets = dataset.map( - tokenize_function, - batched=True, - num_proc=data_args.preprocessing_num_workers, - remove_columns=column_names, - load_from_cache_file=not data_args.overwrite_cache, - ) - - if data_args.block_size is None: - block_size = tokenizer.model_max_length - if block_size > config.max_position_embeddings: - logger.warning( - f"The tokenizer picked seems to have a very large `model_max_length` ({tokenizer.model_max_length}). " - "Picking 1024 instead. 
You can change that default value by passing --block_size xxx." - ) - block_size = 1024 - else: - if data_args.block_size > tokenizer.model_max_length: - logger.warning( - f"The block_size passed ({data_args.block_size}) is larger than the maximum length for the model" - f"({tokenizer.model_max_length}). Using block_size={tokenizer.model_max_length}." - ) - block_size = min(data_args.block_size, tokenizer.model_max_length) - - # Main data processing function that will concatenate all texts from our dataset and generate chunks of block_size. - def group_texts(examples): - # Concatenate all texts. - concatenated_examples = {k: list(chain(*examples[k])) for k in examples.keys()} - total_length = len(concatenated_examples[list(examples.keys())[0]]) - # We drop the small remainder, we could add padding if the model supported it instead of this drop, you can - # customize this part to your needs. - if total_length >= block_size: - total_length = (total_length // block_size) * block_size - # Split by chunks of max_len. - result = { - k: [t[i : i + block_size] for i in range(0, total_length, block_size)] - for k, t in concatenated_examples.items() - } - result["labels"] = result["input_ids"].copy() - return result - - # Note that with `batched=True`, this map processes 1,000 texts together, so group_texts throws away a remainder - # for each of those groups of 1,000 texts. You can adjust that batch_size here but a higher value might be slower - # to preprocess. - # - # To speed up this part, we use multiprocessing. See the documentation of the map method for more information: - # https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.map - - lm_datasets = tokenized_datasets.map( - group_texts, - batched=True, - num_proc=data_args.preprocessing_num_workers, - load_from_cache_file=not data_args.overwrite_cache, - ) - - if training_args.do_train: - if "train" not in tokenized_datasets: - raise ValueError("--do_train requires a train dataset") - train_dataset = lm_datasets["train"] - if data_args.max_train_samples is not None: - max_train_samples = min(len(train_dataset), data_args.max_train_samples) - train_dataset = train_dataset.select(range(max_train_samples)) - - if training_args.do_eval: - if "validation" not in tokenized_datasets: - raise ValueError("--do_eval requires a validation dataset") - eval_dataset = lm_datasets["validation"] - if data_args.max_eval_samples is not None: - max_eval_samples = min(len(eval_dataset), data_args.max_eval_samples) - eval_dataset = eval_dataset.select(range(max_eval_samples)) - - # Enable tensorboard only on the master node - has_tensorboard = is_tensorboard_available() - if has_tensorboard and jax.process_index() == 0: - try: - from flax.metrics.tensorboard import SummaryWriter - - summary_writer = SummaryWriter(log_dir=Path(training_args.output_dir)) - except ImportError as ie: - has_tensorboard = False - logger.warning( - f"Unable to display metrics through TensorBoard because some package are not installed: {ie}" - ) - else: - logger.warning( - "Unable to display metrics through TensorBoard because the package is not installed: " - "Please run pip install tensorboard to enable." 
- ) - - # Initialize our training - rng = jax.random.PRNGKey(training_args.seed) - rng, dropout_rng = jax.random.split(rng) - - # Store some constant - num_epochs = int(training_args.num_train_epochs) - train_batch_size = int(training_args.per_device_train_batch_size) * jax.device_count() - per_device_eval_batch_size = int(training_args.per_device_eval_batch_size) - eval_batch_size = per_device_eval_batch_size * jax.device_count() - steps_per_epoch = len(train_dataset) // train_batch_size - total_train_steps = steps_per_epoch * num_epochs - - # Create learning rate schedule - linear_decay_lr_schedule_fn = create_learning_rate_fn( - len(train_dataset), - train_batch_size, - training_args.num_train_epochs, - training_args.warmup_steps, - training_args.learning_rate, - ) - - # We use Optax's "masking" functionality to not apply weight decay - # to bias and LayerNorm scale parameters. decay_mask_fn returns a - # mask boolean with the same structure as the parameters. - # The mask is True for parameters that should be decayed. - def decay_mask_fn(params): - flat_params = traverse_util.flatten_dict(params) - # find out all LayerNorm parameters - layer_norm_candidates = ["layernorm", "layer_norm", "ln"] - layer_norm_named_params = { - layer[-2:] - for layer_norm_name in layer_norm_candidates - for layer in flat_params.keys() - if layer_norm_name in "".join(layer).lower() - } - flat_mask = {path: (path[-1] != "bias" and path[-2:] not in layer_norm_named_params) for path in flat_params} - return traverse_util.unflatten_dict(flat_mask) - - # create adam optimizer - if training_args.adafactor: - # We use the default parameters here to initialize adafactor, - # For more details about the parameters please check https://github.com/deepmind/optax/blob/ed02befef9bf81cbbf236be3d2b0e032e9ed4a40/optax/_src/alias.py#L74 - optimizer = optax.adafactor( - learning_rate=linear_decay_lr_schedule_fn, - ) - else: - optimizer = optax.adamw( - learning_rate=linear_decay_lr_schedule_fn, - b1=training_args.adam_beta1, - b2=training_args.adam_beta2, - eps=training_args.adam_epsilon, - weight_decay=training_args.weight_decay, - mask=decay_mask_fn, - ) - - # Setup train state - state = TrainState.create(apply_fn=model.__call__, params=model.params, tx=optimizer, dropout_rng=dropout_rng) - - def loss_fn(logits, labels): - shift_logits = logits[..., :-1, :] - shift_labels = labels[..., 1:] - loss = optax.softmax_cross_entropy(shift_logits, onehot(shift_labels, shift_logits.shape[-1])) - return loss.mean() - - # Define gradient update step fn - def train_step(state, batch): - dropout_rng, new_dropout_rng = jax.random.split(state.dropout_rng) - - def compute_loss(params): - labels = batch.pop("labels") - logits = state.apply_fn(**batch, params=params, dropout_rng=dropout_rng, train=True)[0] - loss = loss_fn(logits, labels) - return loss - - grad_fn = jax.value_and_grad(compute_loss) - loss, grad = grad_fn(state.params) - grad = jax.lax.pmean(grad, "batch") - - new_state = state.apply_gradients(grads=grad, dropout_rng=new_dropout_rng) - - metrics = {"loss": loss, "learning_rate": linear_decay_lr_schedule_fn(state.step)} - metrics = jax.lax.pmean(metrics, axis_name="batch") - - return new_state, metrics - - # Define eval fn - def eval_step(params, batch): - labels = batch.pop("labels") - logits = model(**batch, params=params, train=False)[0] - loss = loss_fn(logits, labels) - - # summarize metrics - metrics = {"loss": loss} - metrics = jax.lax.pmean(metrics, axis_name="batch") - return metrics - - # Create parallel version of 
the train and eval step - p_train_step = jax.pmap(train_step, "batch", donate_argnums=(0,)) - p_eval_step = jax.pmap(eval_step, "batch") - - # Replicate the train state on each device - state = state.replicate() - - logger.info("***** Running training *****") - logger.info(f" Num examples = {len(train_dataset)}") - logger.info(f" Num Epochs = {num_epochs}") - logger.info(f" Instantaneous batch size per device = {training_args.per_device_train_batch_size}") - logger.info(f" Total train batch size (w. parallel & distributed) = {train_batch_size}") - logger.info(f" Total optimization steps = {total_train_steps}") - - train_time = 0 - train_metrics = [] - epochs = tqdm(range(num_epochs), desc="Epoch ... ", position=0) - for epoch in epochs: - # ======================== Training ================================ - train_start = time.time() - - # Create sampling rng - rng, input_rng = jax.random.split(rng) - - # Generate an epoch by shuffling sampling indices from the train dataset - train_loader = data_loader(input_rng, train_dataset, train_batch_size, shuffle=True) - steps_per_epoch = len(train_dataset) // train_batch_size - # train - for step in tqdm(range(steps_per_epoch), desc="Training...", position=1, leave=False): - batch = next(train_loader) - batch = shard(batch) - state, train_metric = p_train_step(state, batch) - train_metrics.append(train_metric) - - cur_step = epoch * (len(train_dataset) // train_batch_size) + step - - if cur_step % training_args.logging_steps == 0 and cur_step > 0: - # Save metrics - train_metric = unreplicate(train_metric) - train_time += time.time() - train_start - if has_tensorboard and jax.process_index() == 0: - write_train_metric(summary_writer, train_metrics, train_time, cur_step) - - epochs.write( - f"Step... ({cur_step} | Loss: {train_metric['loss'].mean()}, Learning Rate:" - f" {train_metric['learning_rate'].mean()})" - ) - - train_metrics = [] - - if cur_step % training_args.eval_steps == 0 and cur_step > 0: - # ======================== Evaluating ============================== - eval_metrics = [] - eval_loader = data_loader(input_rng, eval_dataset, eval_batch_size, drop_last=False) - eval_steps = math.ceil(len(eval_dataset) / eval_batch_size) - for _ in tqdm(range(eval_steps), desc="Evaluating...", position=2, leave=False): - # Model forward - batch = next(eval_loader) - metrics = pad_shard_unpad(p_eval_step, static_return=True)( - state.params, batch, min_device_batch=per_device_eval_batch_size - ) - eval_metrics.append(metrics) - - # normalize eval metrics - eval_metrics = get_metrics(eval_metrics) - eval_metrics = jax.tree_util.tree_map(jnp.mean, eval_metrics) - - try: - eval_metrics["perplexity"] = math.exp(eval_metrics["loss"]) - except OverflowError: - eval_metrics["perplexity"] = float("inf") - - # Print metrics and update progress bar - desc = ( - f"Step... 
({cur_step} | Eval Loss: {eval_metrics['loss']} | Eval Perplexity:" - f" {eval_metrics['perplexity']})" - ) - epochs.write(desc) - epochs.desc = desc - - # Save metrics - if has_tensorboard and jax.process_index() == 0: - write_eval_metric(summary_writer, eval_metrics, cur_step) - - if cur_step % training_args.save_steps == 0 and cur_step > 0: - # save checkpoint after each epoch and push checkpoint to the hub - if jax.process_index() == 0: - params = jax.device_get(unreplicate(state.params)) - model.save_pretrained(training_args.output_dir, params=params) - tokenizer.save_pretrained(training_args.output_dir) - if training_args.push_to_hub: - repo.push_to_hub(commit_message=f"Saving weights and logs of step {cur_step}", blocking=False) - - # Eval after training - if training_args.do_eval: - eval_metrics = [] - eval_loader = data_loader(input_rng, eval_dataset, eval_batch_size, drop_last=False) - eval_steps = math.ceil(len(eval_dataset) / eval_batch_size) - for _ in tqdm(range(eval_steps), desc="Evaluating...", position=2, leave=False): - # Model forward - batch = next(eval_loader) - metrics = pad_shard_unpad(p_eval_step, static_return=True)( - state.params, batch, min_device_batch=per_device_eval_batch_size - ) - eval_metrics.append(metrics) - - # normalize eval metrics - eval_metrics = get_metrics(eval_metrics) - eval_metrics = jax.tree_util.tree_map(lambda x: jnp.mean(x).item(), eval_metrics) - - try: - eval_metrics["perplexity"] = math.exp(eval_metrics["loss"]) - except OverflowError: - eval_metrics["perplexity"] = float("inf") - - if jax.process_index() == 0: - eval_metrics = {f"eval_{metric_name}": value for metric_name, value in eval_metrics.items()} - path = os.path.join(training_args.output_dir, "eval_results.json") - with open(path, "w") as f: - json.dump(eval_metrics, f, indent=4, sort_keys=True) - - -if __name__ == "__main__": - main() diff --git a/spaces/chenyangqi/FateZero/FateZero/video_diffusion/prompt_attention/attention_util.py b/spaces/chenyangqi/FateZero/FateZero/video_diffusion/prompt_attention/attention_util.py deleted file mode 100644 index 4a5ff77716dddeafa6c04d63da89d97ffd3826ef..0000000000000000000000000000000000000000 --- a/spaces/chenyangqi/FateZero/FateZero/video_diffusion/prompt_attention/attention_util.py +++ /dev/null @@ -1,1077 +0,0 @@ -""" -Code for prompt2prompt local editing and attention visualization - -""" - -from typing import Optional, Union, Tuple, List, Dict -import abc -import os -import datetime -import numpy as np -from PIL import Image -import copy -import torchvision.utils as tvu -from einops import rearrange - -import torch -import torch.nn.functional as F - -from video_diffusion.common.util import get_time_string -import video_diffusion.prompt_attention.ptp_utils as ptp_utils -import video_diffusion.prompt_attention.seq_aligner as seq_aligner -from video_diffusion.common.image_util import save_gif_mp4_folder_type -device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu') - - -class LocalBlend: - """Called in make_controller - self.alpha_layers.shape = torch.Size([2, 1, 1, 1, 1, 77]), 1 denotes the world to be replaced - """ - def get_mask(self, maps, alpha, use_pool, x_t, step_in_store: int=None, prompt_choose='source'): - k = 1 - # ([2, 40, 4, 16, 16, 77]) * ([2, 1, 1, 1, 1, 77]) -> [2, 1, 16, 16] - if maps.dim() == 5: alpha = alpha[:, None, ...] 
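        # Editorial note (not in the original): alpha zeroes out every attention column except
        # the word(s) being blended; summing over the token axis and averaging over the stacked
        # heads (first line below) collapses the cross-attention maps into one low-resolution
        # saliency map per prompt and frame, which is then (optionally) max-pooled, upsampled to
        # the latent size, and thresholded into the binary blend mask returned by get_mask.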
-        maps = (maps * alpha).sum(-1).mean(1)
-        if use_pool:
-            maps = F.max_pool2d(maps, (k * 2 + 1, k * 2 + 1), (1, 1), padding=(k, k))
-        mask = F.interpolate(maps, size=(x_t.shape[-2:]))
-        mask = mask / mask.max(-2, keepdims=True)[0].max(-1, keepdims=True)[0]
-        mask = mask.gt(self.th[1 - int(use_pool)])
-        mask = mask[:1] + mask
-        if self.save_path is not None:
-            now = datetime.datetime.now().strftime("%Y-%m-%dT%H-%M-%S")
-
-            save_path = f'{self.save_path}/{prompt_choose}/'
-            if step_in_store is not None:
-                save_path += f'step_in_store_{step_in_store:04d}'
-            # f'{self.save_path}/step_in_store_{step_in_store:04d}/mask_{now}_{self.count:02d}.png'
-            save_path += f'/mask_{now}_{self.count:02d}.png'
-            os.makedirs(os.path.dirname(save_path), exist_ok=True)
-            tvu.save_image(rearrange(mask[1:].float(), "c p h w -> p c h w"), save_path, normalize=True)
-            self.count += 1
-        return mask
-
-    def __call__(self, x_t, attention_store):
-        """Blend the edited latents with the source latents using the word-level attention mask.
-
-        Only active between the `start_blend` and `end_blend` denoising steps.
-
-        Args:
-            x_t: [1,4,8,64,64] # (prompt, channel, clip_length, res, res)
-            attention_store: stored cross-attention maps, keyed by "down_cross"/"up_cross"/...
-
-        Returns:
-            The (possibly blended) latents, with the same shape as x_t.
-        """
-        self.counter += 1
-        if (self.counter > self.start_blend) and (self.counter < self.end_blend):
-
-            maps = attention_store["down_cross"][2:4] + attention_store["up_cross"][:3]
-            if maps[0].dim() == 4:
-                (ph, c, r, w) = maps[0].shape
-                assert r == 16 * 16
-                # a list of len(5); each element has shape [16, 256, 77]
-                maps = [rearrange(item, "(p h) c (res_h res_w) w -> p h c res_h res_w w ",
-                                  p=self.alpha_layers.shape[0], res_h=16, res_w=16) for item in maps]
-                maps = torch.cat(maps, dim=1)
-                mask = self.get_mask(maps, self.alpha_layers, True, x_t)
-                if self.substruct_layers is not None:
-                    maps_sub = ~self.get_mask(maps, self.substruct_layers, False)
-                    mask = mask * maps_sub
-                mask = mask.float()
-                # only for debug
-                # mask = torch.zeros_like(mask)
-                # mask is one: keep the edited (generated) latent
-                # mask is zero: fall back to the source latent x_t[:1]
-                self.mask_list.append(mask[0][:, None, :, :].float().cpu().detach())
-                if x_t.dim() == 5:
-                    mask = mask[:, None, ...]
- # x_t [2,4,2,64,64] - x_t = x_t[:1] + mask * (x_t - x_t[:1]) - else: - (ph, r, w)= maps[0].shape - # a list of len(5), elements has shape [16, 256, 77] - - maps = [item.reshape(self.alpha_layers.shape[0], -1, 1, 16, 16, self.MAX_NUM_WORDS) for item in maps] - maps = [item.reshape(self.alpha_layers.shape[0], -1, 1, 16, 16, self.MAX_NUM_WORDS) for item in maps] - maps = torch.cat(maps, dim=1) - mask = self.get_mask(maps, self.alpha_layers, True, x_t) - if self.substruct_layers is not None: - maps_sub = ~self.get_mask(maps, self.substruct_layers, False) - mask = mask * maps_sub - mask = mask.float() - x_t = x_t[:1] + mask * (x_t - x_t[:1]) - - return x_t - - def __init__(self, prompts: List[str], words: [List[List[str]]], substruct_words=None, - start_blend=0.2, end_blend=0.8, - th=(0.9, 0.9), tokenizer=None, NUM_DDIM_STEPS =None, - save_path =None): - self.count = 0 - self.MAX_NUM_WORDS = 77 - self.NUM_DDIM_STEPS = NUM_DDIM_STEPS - if save_path is not None: - self.save_path = save_path+'/latents_mask' - os.makedirs(self.save_path, exist_ok='True') - else: - self.save_path = None - alpha_layers = torch.zeros(len(prompts), 1, 1, 1, 1, self.MAX_NUM_WORDS) - for i, (prompt, words_) in enumerate(zip(prompts, words)): - if type(words_) is str: - words_ = [words_] - for word in words_: - # debug me - ind = ptp_utils.get_word_inds(prompt, word, tokenizer) - alpha_layers[i, :, :, :, :, ind] = 1 - - if substruct_words is not None: - substruct_layers = torch.zeros(len(prompts), 1, 1, 1, 1, self.MAX_NUM_WORDS) - for i, (prompt, words_) in enumerate(zip(prompts, substruct_words)): - if type(words_) is str: - words_ = [words_] - for word in words_: - ind = ptp_utils.get_word_inds(prompt, word, tokenizer) - substruct_layers[i, :, :, :, :, ind] = 1 - self.substruct_layers = substruct_layers.to(device) - else: - self.substruct_layers = None - - self.alpha_layers = alpha_layers.to(device) - self.start_blend = int(start_blend * self.NUM_DDIM_STEPS) - self.end_blend = int(end_blend * self.NUM_DDIM_STEPS) - self.counter = 0 - self.th=th - self.mask_list = [] - - - -class MaskBlend: - """ - First, we consider only source prompt - Called in make_controller - self.alpha_layers.shape = torch.Size([2, 1, 1, 1, 1, 77]), 1 denotes the world to be replaced - """ - def get_mask(self, maps, alpha, use_pool, h=None, w=None, step_in_store: int=None, prompt_choose='source'): - """ - # ([1, 40, 2, 16, 16, 77]) * ([1, 1, 1, 1, 1, 77]) -> [2, 1, 16, 16] - mask have dimension of [clip_length, dim, res, res] - """ - k = 1 - - if maps.dim() == 5: alpha = alpha[:, None, ...] 
- maps = (maps * alpha).sum(-1).mean(1) - if use_pool: - maps = F.max_pool2d(maps, (k * 2 + 1, k * 2 +1), (1, 1), padding=(k, k)) - mask = F.interpolate(maps, size=(h, w)) - mask = mask / mask.max(-2, keepdims=True)[0].max(-1, keepdims=True)[0] - mask = mask.gt(self.th[1-int(use_pool)]) - if self.save_path is not None: - now = datetime.datetime.now().strftime("%Y-%m-%dT%H-%M-%S") - - save_path = f'{self.save_path}/{prompt_choose}/' - if step_in_store is not None: - save_path += f'step_in_store_{step_in_store:04d}' - save_path +=f'/mask_{now}_{self.count:02d}.png' - os.makedirs(os.path.dirname(save_path), exist_ok=True) - tvu.save_image(rearrange(mask.float(), "c p h w -> p c h w"), save_path,normalize=True) - self.count +=1 - return mask - - def __call__(self, target_h, target_w, attention_store, step_in_store: int=None): - """ - input has shape (heads) clip res words - one meens using target self-attention, zero is using source - Previous implementation us all zeros - mask should be repeat. - - Args: - x_t (_type_): [1,4,8,64,64] # (prompt, channel, clip_length, res, res) - attention_store (_type_): _description_ - - Returns: - _type_: _description_ - """ - - maps = attention_store["down_cross"][2:4] + attention_store["up_cross"][:3] - - # maps = attention_store # [2,8,1024, 77] = [frames, head, (res, res), word_embedding] - assert maps[0].dim() == 4, "only support temporal data" - ( c, heads, r, w)= maps[0].shape - res_h = int(np.sqrt(r)) - assert r == res_h* res_h - # a list of len(5), elements has shape [16, 256, 77] - target_device = self.alpha_layers.device - target_dtype = self.alpha_layers.dtype - maps = [rearrange(item, " c h (res_h res_w) w -> h c res_h res_w w ", - h=heads, res_h=res_h, res_w=res_h)[None, ...].to(target_device, dtype=target_dtype) - for item in maps] - - - maps = torch.cat(maps, dim=1) - # We only support self-attention blending using source prompt - masked_alpah_layers = self.alpha_layers[0:1] - mask = self.get_mask(maps, masked_alpah_layers, True, target_h, target_w, step_in_store=step_in_store, prompt_choose='source') - - if self.substruct_layers is not None: - maps_sub = ~self.get_mask(maps, self.substruct_layers, False) - mask = mask * maps_sub - mask = mask.float() - - # "mask is one: use geenerated information" - # "mask is zero: use geenerated information" - self.mask_list.append(mask[0][:, None, :, :].float().cpu().detach()) - - return mask - - def __init__(self, prompts: List[str], words: [List[List[str]]], substruct_words=None, - start_blend=0.2, end_blend=0.8, - th=(0.9, 0.9), tokenizer=None, NUM_DDIM_STEPS =None, - save_path = None): - self.count = 0 - # self.config_dict = copy.deepcopy(config_dict) - self.MAX_NUM_WORDS = 77 - self.NUM_DDIM_STEPS = NUM_DDIM_STEPS - if save_path is not None: - self.save_path = save_path+'/blend_mask' - os.makedirs(self.save_path, exist_ok='True') - else: - self.save_path = None - alpha_layers = torch.zeros(len(prompts), 1, 1, 1, 1, self.MAX_NUM_WORDS) - for i, (prompt, words_) in enumerate(zip(prompts, words)): - if type(words_) is str: - words_ = [words_] - for word in words_: - # debug me - ind = ptp_utils.get_word_inds(prompt, word, tokenizer) - alpha_layers[i, :, :, :, :, ind] = 1 - - if substruct_words is not None: - substruct_layers = torch.zeros(len(prompts), 1, 1, 1, 1, self.MAX_NUM_WORDS) - for i, (prompt, words_) in enumerate(zip(prompts, substruct_words)): - if type(words_) is str: - words_ = [words_] - for word in words_: - ind = ptp_utils.get_word_inds(prompt, word, tokenizer) - substruct_layers[i, :, :, 
:, :, ind] = 1 - self.substruct_layers = substruct_layers.to(device) - else: - self.substruct_layers = None - - self.alpha_layers = alpha_layers.to(device) - print('the index mask of edited word in the prompt') - print(self.alpha_layers[0][..., 0:(len(prompts[0].split(" "))+2)]) - print(self.alpha_layers[1][..., 0:(len(prompts[1].split(" "))+2)]) - - self.start_blend = int(start_blend * self.NUM_DDIM_STEPS) - self.end_blend = int(end_blend * self.NUM_DDIM_STEPS) - self.counter = 0 - self.th=th - self.mask_list = [] - - - - -class EmptyControl: - - - def step_callback(self, x_t): - return x_t - - def between_steps(self): - return - - def __call__(self, attn, is_cross: bool, place_in_unet: str): - return attn - - -class AttentionControl(abc.ABC): - - def step_callback(self, x_t): - self.cur_att_layer = 0 - self.cur_step += 1 - self.between_steps() - return x_t - - def between_steps(self): - return - - @property - def num_uncond_att_layers(self): - """I guess the diffusion of google has some unconditional attention layer - No unconditional attention layer in Stable diffusion - - Returns: - _type_: _description_ - """ - # return self.num_att_layers if config_dict['LOW_RESOURCE'] else 0 - return 0 - - @abc.abstractmethod - def forward (self, attn, is_cross: bool, place_in_unet: str): - raise NotImplementedError - - def __call__(self, attn, is_cross: bool, place_in_unet: str): - if self.cur_att_layer >= self.num_uncond_att_layers: - if self.LOW_RESOURCE: - # For inversion without null text file - attn = self.forward(attn, is_cross, place_in_unet) - else: - # For classifier-free guidance scale!=1 - h = attn.shape[0] - attn[h // 2:] = self.forward(attn[h // 2:], is_cross, place_in_unet) - self.cur_att_layer += 1 - - return attn - - def reset(self): - self.cur_step = 0 - self.cur_att_layer = 0 - - def __init__(self, - ): - self.LOW_RESOURCE = False # assume the edit have cfg - self.cur_step = 0 - self.num_att_layers = -1 - self.cur_att_layer = 0 - -class SpatialReplace(EmptyControl): - - def step_callback(self, x_t): - if self.cur_step < self.stop_inject: - b = x_t.shape[0] - x_t = x_t[:1].expand(b, *x_t.shape[1:]) - return x_t - - def __init__(self, stop_inject: float, NUM_DDIM_STEPS=None): - super(SpatialReplace, self).__init__() - self.stop_inject = int((1 - stop_inject) * NUM_DDIM_STEPS) - - -class AttentionStore(AttentionControl): - def step_callback(self, x_t): - - - x_t = super().step_callback(x_t) - self.latents_store.append(x_t.cpu().detach()) - return x_t - - @staticmethod - def get_empty_store(): - return {"down_cross": [], "mid_cross": [], "up_cross": [], - "down_self": [], "mid_self": [], "up_self": []} - - @staticmethod - def get_empty_cross_store(): - return {"down_cross": [], "mid_cross": [], "up_cross": [], - } - - def forward(self, attn, is_cross: bool, place_in_unet: str): - key = f"{place_in_unet}_{'cross' if is_cross else 'self'}" - if attn.shape[-2] <= 32 ** 2: # avoid memory overhead - # print(f"Store attention map {key} of shape {attn.shape}") - if is_cross or self.save_self_attention: - if attn.shape[-2] == 32**2: - append_tensor = attn.cpu().detach() - else: - append_tensor = attn - self.step_store[key].append(copy.deepcopy(append_tensor)) - return attn - - def between_steps(self): - if len(self.attention_store) == 0: - self.attention_store = self.step_store - else: - for key in self.attention_store: - for i in range(len(self.attention_store[key])): - self.attention_store[key][i] += self.step_store[key][i] - - if self.disk_store: - path = self.store_dir + 
f'/{self.cur_step:03d}.pt' - torch.save(copy.deepcopy(self.step_store), path) - self.attention_store_all_step.append(path) - else: - self.attention_store_all_step.append(copy.deepcopy(self.step_store)) - self.step_store = self.get_empty_store() - - def get_average_attention(self): - "divide the attention map value in attention store by denoising steps" - average_attention = {key: [item / self.cur_step for item in self.attention_store[key]] for key in self.attention_store} - return average_attention - - - def reset(self): - super(AttentionStore, self).reset() - self.step_store = self.get_empty_store() - self.attention_store_all_step = [] - self.attention_store = {} - - def __init__(self, save_self_attention:bool=True, disk_store=False): - super(AttentionStore, self).__init__() - self.disk_store = disk_store - if self.disk_store: - time_string = get_time_string() - path = f'./trash/attention_cache_{time_string}' - os.makedirs(path, exist_ok=True) - self.store_dir = path - else: - self.store_dir =None - self.step_store = self.get_empty_store() - self.attention_store = {} - self.save_self_attention = save_self_attention - self.latents_store = [] - self.attention_store_all_step = [] - - -class AttentionControlEdit(AttentionStore, abc.ABC): - """Decide self or cross-attention. Call the reweighting cross attention module - - Args: - AttentionStore (_type_): ([1, 4, 8, 64, 64]) - abc (_type_): [8, 8, 1024, 77] - """ - - def step_callback(self, x_t): - x_t = super().step_callback(x_t) - x_t_device = x_t.device - x_t_dtype = x_t.dtype - if self.local_blend is not None: - if self.use_inversion_attention: - step_in_store = len(self.additional_attention_store.latents_store) - self.cur_step - else: - step_in_store = self.cur_step - - inverted_latents = self.additional_attention_store.latents_store[step_in_store] - inverted_latents = inverted_latents.to(device =x_t_device, dtype=x_t_dtype) - # [prompt, channel, clip, res, res] = [1, 4, 2, 64, 64] - - blend_dict = self.get_empty_cross_store() - # each element in blend_dict have (prompt head) clip_length (res res) words, - # to better align with (b c f h w) - - attention_store_step = self.additional_attention_store.attention_store_all_step[step_in_store] - if isinstance(place_in_unet_cross_atten_list, str): attention_store_step = torch.load(attention_store_step) - - for key in blend_dict.keys(): - place_in_unet_cross_atten_list = attention_store_step[key] - for i, attention in enumerate(place_in_unet_cross_atten_list): - - concate_attention = torch.cat([attention[None, ...], self.attention_store[key][i][None, ...]], dim=0) - blend_dict[key].append(copy.deepcopy(rearrange(concate_attention, ' p c h res words -> (p h) c res words'))) - x_t = self.local_blend(copy.deepcopy(torch.cat([inverted_latents, x_t], dim=0)), copy.deepcopy(blend_dict)) - return x_t[1:, ...] 
- else: - return x_t - - def replace_self_attention(self, attn_base, att_replace, reshaped_mask=None): - if att_replace.shape[-2] <= 32 ** 2: - target_device = att_replace.device - target_dtype = att_replace.dtype - attn_base = attn_base.to(target_device, dtype=target_dtype) - attn_base = attn_base.unsqueeze(0).expand(att_replace.shape[0], *attn_base.shape) - if reshaped_mask is not None: - return_attention = reshaped_mask*att_replace + (1-reshaped_mask)*attn_base - return return_attention - else: - return attn_base - else: - return att_replace - - @abc.abstractmethod - def replace_cross_attention(self, attn_base, att_replace): - raise NotImplementedError - - def update_attention_position_dict(self, current_attention_key): - self.attention_position_counter_dict[current_attention_key] +=1 - - - def forward(self, attn, is_cross: bool, place_in_unet: str): - super(AttentionControlEdit, self).forward(attn, is_cross, place_in_unet) - if attn.shape[-2] <= 32 ** 2: - key = f"{place_in_unet}_{'cross' if is_cross else 'self'}" - current_pos = self.attention_position_counter_dict[key] - - if self.use_inversion_attention: - step_in_store = len(self.additional_attention_store.attention_store_all_step) - self.cur_step -1 - else: - step_in_store = self.cur_step - - place_in_unet_cross_atten_list = self.additional_attention_store.attention_store_all_step[step_in_store] - if isinstance(place_in_unet_cross_atten_list, str): place_in_unet_cross_atten_list = torch.load(place_in_unet_cross_atten_list) - # breakpoint() - # Note that attn is append to step_store, - # if attn is get through clean -> noisy, we should inverse it - attn_base = place_in_unet_cross_atten_list[key][current_pos] - - self.update_attention_position_dict(key) - # save in format of [temporal, head, resolution, text_embedding] - if is_cross or (self.num_self_replace[0] <= self.cur_step < self.num_self_replace[1]): - clip_length = attn.shape[0] // (self.batch_size) - attn = attn.reshape(self.batch_size, clip_length, *attn.shape[1:]) - # Replace att_replace with attn_base - attn_base, attn_repalce = attn_base, attn[0:] - if is_cross: - alpha_words = self.cross_replace_alpha[self.cur_step] - attn_repalce_new = self.replace_cross_attention(attn_base, attn_repalce) * alpha_words + (1 - alpha_words) * attn_repalce - attn[0:] = attn_repalce_new # b t h p n = [1, 1, 8, 1024, 77] - else: - - # start of masked self-attention - if self.MB is not None and attn_repalce.shape[-2] <= 32 ** 2: - # ca_this_step = place_in_unet_cross_atten_list - # query 1024, key 2048 - h = int(np.sqrt(attn_repalce.shape[-2])) - w = h - mask = self.MB(target_h = h, target_w =w, attention_store= place_in_unet_cross_atten_list, step_in_store=step_in_store) - # reshape from ([ 1, 2, 32, 32]) -> [2, 1, 1024, 1] - reshaped_mask = rearrange(mask, "d c h w -> c d (h w)")[..., None] - - # input has shape (h) c res words - # one meens using target self-attention, zero is using source - # Previous implementation us all zeros - # mask should be repeat. 
- else: - reshaped_mask = None - attn[0:] = self.replace_self_attention(attn_base, attn_repalce, reshaped_mask) - - - - attn = attn.reshape(self.batch_size * clip_length, *attn.shape[2:]) - # save in format of [temporal, head, resolution, text_embedding] - - return attn - def between_steps(self): - - super().between_steps() - self.step_store = self.get_empty_store() - - self.attention_position_counter_dict = { - 'down_cross': 0, - 'mid_cross': 0, - 'up_cross': 0, - 'down_self': 0, - 'mid_self': 0, - 'up_self': 0, - } - return - def __init__(self, prompts, num_steps: int, - cross_replace_steps: Union[float, Tuple[float, float], Dict[str, Tuple[float, float]]], - self_replace_steps: Union[float, Tuple[float, float]], - local_blend: Optional[LocalBlend], tokenizer=None, - additional_attention_store: AttentionStore =None, - use_inversion_attention: bool=False, - MB: MaskBlend= None, - save_self_attention: bool=True, - disk_store=False - ): - super(AttentionControlEdit, self).__init__( - save_self_attention=save_self_attention, - disk_store=disk_store) - self.additional_attention_store = additional_attention_store - self.batch_size = len(prompts) - self.MB = MB - if self.additional_attention_store is not None: - # the attention_store is provided outside, only pass in one promp - self.batch_size = len(prompts) //2 - assert self.batch_size==1, 'Only support single video editing with additional attention_store' - - self.cross_replace_alpha = ptp_utils.get_time_words_attention_alpha(prompts, num_steps, cross_replace_steps, tokenizer).to(device) - if type(self_replace_steps) is float: - self_replace_steps = 0, self_replace_steps - self.num_self_replace = int(num_steps * self_replace_steps[0]), int(num_steps * self_replace_steps[1]) - self.local_blend = local_blend - # We need to know the current position in attention - self.prev_attention_key_name = 0 - self.use_inversion_attention = use_inversion_attention - self.attention_position_counter_dict = { - 'down_cross': 0, - 'mid_cross': 0, - 'up_cross': 0, - 'down_self': 0, - 'mid_self': 0, - 'up_self': 0, - } - -class AttentionReplace(AttentionControlEdit): - - def replace_cross_attention(self, attn_base, att_replace): - # torch.Size([8, 4096, 77]), torch.Size([1, 77, 77]) -> [1, 8, 4096, 77] - # Can be extend to temporal, use temporal as batch size - target_device = att_replace.device - target_dtype = att_replace.dtype - attn_base = attn_base.to(target_device, dtype=target_dtype) - - if attn_base.dim()==3: - return torch.einsum('hpw,bwn->bhpn', attn_base, self.mapper) - elif attn_base.dim()==4: - return torch.einsum('thpw,bwn->bthpn', attn_base, self.mapper) - - def __init__(self, prompts, num_steps: int, cross_replace_steps: float, self_replace_steps: float, - local_blend: Optional[LocalBlend] = None, tokenizer=None, - additional_attention_store=None, - use_inversion_attention = False, - MB: MaskBlend=None, - save_self_attention: bool = True, - disk_store=False): - super(AttentionReplace, self).__init__( - prompts, num_steps, cross_replace_steps, self_replace_steps, local_blend, tokenizer=tokenizer, - additional_attention_store=additional_attention_store, use_inversion_attention = use_inversion_attention, - MB=MB, - save_self_attention = save_self_attention, - disk_store=disk_store - ) - self.mapper = seq_aligner.get_replacement_mapper(prompts, tokenizer).to(device) - -class AttentionRefine(AttentionControlEdit): - - def replace_cross_attention(self, attn_base, att_replace): - - target_device = att_replace.device - target_dtype = att_replace.dtype - 
attn_base = attn_base.to(target_device, dtype=target_dtype) - if attn_base.dim()==3: - attn_base_replace = attn_base[:, :, self.mapper].permute(2, 0, 1, 3) - elif attn_base.dim()==4: - attn_base_replace = attn_base[:, :, :, self.mapper].permute(3, 0, 1, 2, 4) - attn_replace = attn_base_replace * self.alphas + att_replace * (1 - self.alphas) - return attn_replace - - def __init__(self, prompts, num_steps: int, cross_replace_steps: float, self_replace_steps: float, - local_blend: Optional[LocalBlend] = None, tokenizer=None, - additional_attention_store=None, - use_inversion_attention = False, - MB: MaskBlend=None, - save_self_attention : bool=True, - disk_store = False - ): - super(AttentionRefine, self).__init__( - prompts, num_steps, cross_replace_steps, self_replace_steps, local_blend, tokenizer=tokenizer, - additional_attention_store=additional_attention_store, use_inversion_attention = use_inversion_attention, - MB=MB, - save_self_attention = save_self_attention, - disk_store = disk_store - ) - self.mapper, alphas = seq_aligner.get_refinement_mapper(prompts, tokenizer) - self.mapper, alphas = self.mapper.to(device), alphas.to(device) - self.alphas = alphas.reshape(alphas.shape[0], 1, 1, alphas.shape[1]) - - -class AttentionReweight(AttentionControlEdit): - """First replace the weight, than increase the attention at a area - - Args: - AttentionControlEdit (_type_): _description_ - """ - - def replace_cross_attention(self, attn_base, att_replace): - if self.prev_controller is not None: - attn_base = self.prev_controller.replace_cross_attention(attn_base, att_replace) - attn_replace = attn_base[None, :, :, :] * self.equalizer[:, None, None, :] - return attn_replace - - def __init__(self, prompts, num_steps: int, cross_replace_steps: float, self_replace_steps: float, equalizer, - local_blend: Optional[LocalBlend] = None, controller: Optional[AttentionControlEdit] = None, tokenizer=None, - additional_attention_store=None, - use_inversion_attention = False, - MB: MaskBlend=None, - save_self_attention:bool = True, - disk_store = False - ): - super(AttentionReweight, self).__init__( - prompts, num_steps, cross_replace_steps, self_replace_steps, local_blend, tokenizer=tokenizer, - additional_attention_store=additional_attention_store, - use_inversion_attention = use_inversion_attention, - MB=MB, - save_self_attention=save_self_attention, - disk_store = disk_store - ) - self.equalizer = equalizer.to(device) - self.prev_controller = controller - -def get_equalizer(text: str, word_select: Union[int, Tuple[int, ...]], values: Union[List[float], - Tuple[float, ...]], tokenizer=None): - if type(word_select) is int or type(word_select) is str: - word_select = (word_select,) - equalizer = torch.ones(1, 77) - - for word, val in zip(word_select, values): - inds = ptp_utils.get_word_inds(text, word, tokenizer) - equalizer[:, inds] = val - return equalizer - -def aggregate_attention(prompts, attention_store: AttentionStore, res: int, from_where: List[str], is_cross: bool, select: int): - out = [] - attention_maps = attention_store.get_average_attention() - num_pixels = res ** 2 - for location in from_where: - for item in attention_maps[f"{location}_{'cross' if is_cross else 'self'}"]: - if item.dim() == 3: - if item.shape[1] == num_pixels: - cross_maps = item.reshape(len(prompts), -1, res, res, item.shape[-1])[select] - out.append(cross_maps) - elif item.dim() == 4: - t, h, res_sq, token = item.shape - if item.shape[2] == num_pixels: - cross_maps = item.reshape(len(prompts), t, -1, res, res, 
item.shape[-1])[select] - out.append(cross_maps) - - out = torch.cat(out, dim=-4) - out = out.sum(-4) / out.shape[-4] - return out.cpu() - - -def make_controller(tokenizer, prompts: List[str], is_replace_controller: bool, - cross_replace_steps: Dict[str, float], self_replace_steps: float=0.0, - blend_words=None, equilizer_params=None, - additional_attention_store=None, use_inversion_attention = False, bend_th: float=(0.3, 0.3), - NUM_DDIM_STEPS=None, - masked_latents = False, - masked_self_attention=False, - save_path = None, - save_self_attention = True, - disk_store = False - ) -> AttentionControlEdit: - if (blend_words is None) or (blend_words == 'None'): - lb = None - MB =None - else: - if masked_latents: - lb = LocalBlend( prompts, blend_words, tokenizer=tokenizer, th=bend_th, NUM_DDIM_STEPS=NUM_DDIM_STEPS, - save_path=save_path) - else: - lb = None - if masked_self_attention: - MB = MaskBlend( prompts, blend_words, tokenizer=tokenizer, th=bend_th, NUM_DDIM_STEPS=NUM_DDIM_STEPS, - save_path=save_path) - print(f'Control self attention mask with threshold {bend_th}') - else: - MB = None - if is_replace_controller: - print('use replace controller') - controller = AttentionReplace(prompts, NUM_DDIM_STEPS, - cross_replace_steps=cross_replace_steps, self_replace_steps=self_replace_steps, - local_blend=lb, tokenizer=tokenizer, - additional_attention_store=additional_attention_store, - use_inversion_attention = use_inversion_attention, - MB=MB, - save_self_attention = save_self_attention, - disk_store=disk_store - ) - else: - print('use refine controller') - controller = AttentionRefine(prompts, NUM_DDIM_STEPS, - cross_replace_steps=cross_replace_steps, self_replace_steps=self_replace_steps, - local_blend=lb, tokenizer=tokenizer, - additional_attention_store=additional_attention_store, - use_inversion_attention = use_inversion_attention, - MB=MB, - save_self_attention = save_self_attention, - disk_store=disk_store - ) - if equilizer_params is not None: - eq = get_equalizer(prompts[1], equilizer_params["words"], equilizer_params["values"], tokenizer=tokenizer) - controller = AttentionReweight(prompts, NUM_DDIM_STEPS, - cross_replace_steps=cross_replace_steps, self_replace_steps=self_replace_steps, - equalizer=eq, local_blend=lb, controller=controller, - tokenizer=tokenizer, - additional_attention_store=additional_attention_store, - use_inversion_attention = use_inversion_attention, - MB=MB, - save_self_attention = save_self_attention, - disk_store=disk_store - ) - return controller - - -def show_cross_attention(tokenizer, prompts, attention_store: AttentionStore, - res: int, from_where: List[str], select: int = 0, save_path = None): - """_summary_ - - tokenizer (_type_): _description_ - prompts (_type_): _description_ - attention_store (AttentionStore): _description_ - ["down", "mid", "up"] X ["self", "cross"] - 4, 1, 6 - head*res*text_token_len = 8*res*77 - res=1024 -> 64 -> 1024 - res (int): res - from_where (List[str]): "up", "down' - select (int, optional): _description_. Defaults to 0. - """ - if isinstance(prompts, str): - prompts = [prompts,] - tokens = tokenizer.encode(prompts[select]) # list of length 9, [0-49 K] - decoder = tokenizer.decode - # 16, 16, 7, 7 - attention_maps = aggregate_attention(prompts, attention_store, res, from_where, True, select) - os.makedirs('trash', exist_ok=True) - attention_list = [] - if attention_maps.dim()==3: attention_maps=attention_maps[None, ...] 
- for j in range(attention_maps.shape[0]): - images = [] - for i in range(len(tokens)): - image = attention_maps[j, :, :, i] - image = 255 * image / image.max() - image = image.unsqueeze(-1).expand(*image.shape, 3) - image = image.numpy().astype(np.uint8) - image = np.array(Image.fromarray(image).resize((256, 256))) - image = ptp_utils.text_under_image(image, decoder(int(tokens[i]))) - images.append(image) - ptp_utils.view_images(np.stack(images, axis=0), save_path=save_path) - atten_j = np.concatenate(images, axis=1) - attention_list.append(atten_j) - if save_path is not None: - now = datetime.datetime.now().strftime("%Y-%m-%dT%H-%M-%S") - video_save_path = f'{save_path}/{now}.gif' - save_gif_mp4_folder_type(attention_list, video_save_path) - return attention_list - - -def show_self_attention_comp(attention_store: AttentionStore, res: int, from_where: List[str], - max_com=10, select: int = 0): - attention_maps = aggregate_attention(attention_store, res, from_where, False, select).numpy().reshape((res ** 2, res ** 2)) - u, s, vh = np.linalg.svd(attention_maps - np.mean(attention_maps, axis=1, keepdims=True)) - images = [] - for i in range(max_com): - image = vh[i].reshape(res, res) - image = image - image.min() - image = 255 * image / image.max() - image = np.repeat(np.expand_dims(image, axis=2), 3, axis=2).astype(np.uint8) - image = Image.fromarray(image).resize((256, 256)) - image = np.array(image) - images.append(image) - ptp_utils.view_images(np.concatenate(images, axis=1)) - - -def register_attention_control(model, controller): - "Connect a model with a controller" - def ca_forward(self, place_in_unet, attention_type='cross'): - to_out = self.to_out - if type(to_out) is torch.nn.modules.container.ModuleList: - to_out = self.to_out[0] - else: - to_out = self.to_out - - def _attention( query, key, value, is_cross, attention_mask=None): - if self.upcast_attention: - query = query.float() - key = key.float() - - attention_scores = torch.baddbmm( - torch.empty(query.shape[0], query.shape[1], key.shape[1], dtype=query.dtype, device=query.device), - query, - key.transpose(-1, -2), - beta=0, - alpha=self.scale, - ) - - if attention_mask is not None: - attention_scores = attention_scores + attention_mask - - if self.upcast_softmax: - attention_scores = attention_scores.float() - - attention_probs = attention_scores.softmax(dim=-1) - - # cast back to the original dtype - attention_probs = attention_probs.to(value.dtype) - - # KEY FUNCTION: - # Record and edit the attention probs - attention_probs_th = reshape_batch_dim_to_temporal_heads(attention_probs) - attention_probs = controller(reshape_batch_dim_to_temporal_heads(attention_probs), - is_cross, place_in_unet) - attention_probs = reshape_temporal_heads_to_batch_dim(attention_probs_th) - # compute attention output - hidden_states = torch.bmm(attention_probs, value) - - # reshape hidden_states - hidden_states = self.reshape_batch_dim_to_heads(hidden_states) - return hidden_states - - def reshape_temporal_heads_to_batch_dim( tensor): - head_size = self.heads - tensor = rearrange(tensor, " b h s t -> (b h) s t ", h = head_size) - return tensor - - def reshape_batch_dim_to_temporal_heads(tensor): - head_size = self.heads - tensor = rearrange(tensor, "(b h) s t -> b h s t", h = head_size) - return tensor - - def forward(hidden_states, encoder_hidden_states=None, attention_mask=None): - # hidden_states: torch.Size([16, 4096, 320]) - # encoder_hidden_states: torch.Size([16, 77, 768]) - is_cross = encoder_hidden_states is not None - - 
encoder_hidden_states = encoder_hidden_states - - if self.group_norm is not None: - hidden_states = self.group_norm(hidden_states.transpose(1, 2)).transpose(1, 2) - - query = self.to_q(hidden_states) - query = self.reshape_heads_to_batch_dim(query) - - if self.added_kv_proj_dim is not None: - key = self.to_k(hidden_states) - value = self.to_v(hidden_states) - encoder_hidden_states_key_proj = self.add_k_proj(encoder_hidden_states) - encoder_hidden_states_value_proj = self.add_v_proj(encoder_hidden_states) - - key = self.reshape_heads_to_batch_dim(key) - value = self.reshape_heads_to_batch_dim(value) - encoder_hidden_states_key_proj = self.reshape_heads_to_batch_dim(encoder_hidden_states_key_proj) - encoder_hidden_states_value_proj = self.reshape_heads_to_batch_dim(encoder_hidden_states_value_proj) - - key = torch.concat([encoder_hidden_states_key_proj, key], dim=1) - value = torch.concat([encoder_hidden_states_value_proj, value], dim=1) - else: - encoder_hidden_states = encoder_hidden_states if encoder_hidden_states is not None else hidden_states - key = self.to_k(encoder_hidden_states) - value = self.to_v(encoder_hidden_states) - - key = self.reshape_heads_to_batch_dim(key) - value = self.reshape_heads_to_batch_dim(value) - - if attention_mask is not None: - if attention_mask.shape[-1] != query.shape[1]: - target_length = query.shape[1] - attention_mask = F.pad(attention_mask, (0, target_length), value=0.0) - attention_mask = attention_mask.repeat_interleave(self.heads, dim=0) - - - if self._use_memory_efficient_attention_xformers and query.shape[-2] > 32 ** 2: - # for large attention map of 64X64, use xformers to save memory - hidden_states = self._memory_efficient_attention_xformers(query, key, value, attention_mask) - # Some versions of xformers return output in fp32, cast it back to the dtype of the input - hidden_states = hidden_states.to(query.dtype) - else: - - hidden_states = _attention(query, key, value, is_cross=is_cross, attention_mask=attention_mask) - # else: - # hidden_states = self._sliced_attention(query, key, value, sequence_length, dim, attention_mask) - - # linear proj - hidden_states = to_out(hidden_states) - - # dropout - # hidden_states = self.to_out[1](hidden_states) - return hidden_states - - - def scforward( - hidden_states, - encoder_hidden_states=None, - attention_mask=None, - clip_length: int = None, - SparseCausalAttention_index: list = [-1, 'first'] - ): - if ( - self.added_kv_proj_dim is not None - or encoder_hidden_states is not None - or attention_mask is not None - ): - raise NotImplementedError - - if self.group_norm is not None: - hidden_states = self.group_norm(hidden_states.transpose(1, 2)).transpose(1, 2) - - query = self.to_q(hidden_states) - query = self.reshape_heads_to_batch_dim(query) - - key = self.to_k(hidden_states) - value = self.to_v(hidden_states) - - if clip_length is not None: - key = rearrange(key, "(b f) d c -> b f d c", f=clip_length) - value = rearrange(value, "(b f) d c -> b f d c", f=clip_length) - - - # ***********************Start of SparseCausalAttention_index********** - frame_index_list = [] - # print(f'SparseCausalAttention_index {str(SparseCausalAttention_index)}') - if len(SparseCausalAttention_index) > 0: - for index in SparseCausalAttention_index: - if isinstance(index, str): - if index == 'first': - frame_index = [0] * clip_length - if index == 'last': - frame_index = [clip_length-1] * clip_length - if (index == 'mid') or (index == 'middle'): - frame_index = [int((clip_length-1)//2)] * clip_length - else: - assert 
isinstance(index, int), 'relative index must be int' - frame_index = torch.arange(clip_length) + index - frame_index = frame_index.clip(0, clip_length-1) - - frame_index_list.append(frame_index) - key = torch.cat([ key[:, frame_index] for frame_index in frame_index_list - ], dim=2) - value = torch.cat([ value[:, frame_index] for frame_index in frame_index_list - ], dim=2) - - - # ***********************End of SparseCausalAttention_index********** - key = rearrange(key, "b f d c -> (b f) d c", f=clip_length) - value = rearrange(value, "b f d c -> (b f) d c", f=clip_length) - - key = self.reshape_heads_to_batch_dim(key) - value = self.reshape_heads_to_batch_dim(value) - - if self._use_memory_efficient_attention_xformers and query.shape[-2] > 32 ** 2: - # for large attention map of 64X64, use xformers to save memory - hidden_states = self._memory_efficient_attention_xformers(query, key, value, attention_mask) - # Some versions of xformers return output in fp32, cast it back to the dtype of the input - hidden_states = hidden_states.to(query.dtype) - else: - # if self._slice_size is None or query.shape[0] // self._slice_size == 1: - hidden_states = _attention(query, key, value, attention_mask=attention_mask, is_cross=False) - # else: - # hidden_states = self._sliced_attention( - # query, key, value, hidden_states.shape[1], dim, attention_mask - # ) - - # linear proj - hidden_states = to_out(hidden_states) - - # dropout - # hidden_states = self.to_out[1](hidden_states) - return hidden_states - if attention_type == 'CrossAttention': - return forward - elif attention_type == "SparseCausalAttention": - return scforward - - class DummyController: - - def __call__(self, *args): - return args[0] - - def __init__(self): - self.num_att_layers = 0 - - if controller is None: - controller = DummyController() - - def register_recr(net_, count, place_in_unet): - if net_[1].__class__.__name__ == 'CrossAttention' \ - or net_[1].__class__.__name__ == 'SparseCausalAttention': - net_[1].forward = ca_forward(net_[1], place_in_unet, attention_type = net_[1].__class__.__name__) - return count + 1 - elif hasattr(net_[1], 'children'): - for net in net_[1].named_children(): - if net[0] !='attn_temporal': - - count = register_recr(net, count, place_in_unet) - - return count - - cross_att_count = 0 - sub_nets = model.unet.named_children() - for net in sub_nets: - if "down" in net[0]: - cross_att_count += register_recr(net, 0, "down") - elif "up" in net[0]: - cross_att_count += register_recr(net, 0, "up") - elif "mid" in net[0]: - cross_att_count += register_recr(net, 0, "mid") - print(f"Number of attention layer registered {cross_att_count}") - controller.num_att_layers = cross_att_count diff --git a/spaces/chewing/liandan/src/check_backpack.py b/spaces/chewing/liandan/src/check_backpack.py deleted file mode 100644 index 471af23778fdeaf977f2d318035cee54d3b97d3b..0000000000000000000000000000000000000000 --- a/spaces/chewing/liandan/src/check_backpack.py +++ /dev/null @@ -1,80 +0,0 @@ -import re -from tinydb import TinyDB, Query -from src.gr_func import _get_medicine_elixir_config,material_table - - -def get_need_material(medicine_select, medicine_level_select="ALL",material_max_num=16) ->list: - material = Query() - m = _get_medicine_elixir_config(medicine_select) - func1_type = m["func1_type"] - func1_power = m["func1_power"] - func2_type = m["func2_type"] - func2_power = m["func2_power"] - if medicine_level_select == "ALL": - a = material_table.search((material.main_func_t == func1_type) | (material.auxi_func_t == 
func1_type) | ( - material.main_func_t == func2_type) | (material.auxi_func_t == func2_type)) - else: - a = material_table.search((material.level == medicine_level_select) & ( - (material.main_func_t == func1_type) | (material.auxi_func_t == func1_type) | ( - material.main_func_t == func2_type) | (material.auxi_func_t == func2_type))) - - def get_num(material0): - global material_second_f - name = material0["name"] - if material0["main_func_t"] == func1_type: - material_second_f = (func2_type,False) - num = func1_power / material0["main_func_p"] - elif material0["auxi_func_t"] == func1_type: - material_second_f = (func2_type,True) - num = func1_power / material0["auxi_func_p"] - elif material0["main_func_t"] == func2_type: - material_second_f = (func1_type,False) - num = func2_power / material0["main_func_p"] - elif material0["auxi_func_t"] == func2_type: - material_second_f = (func1_type,True) - num = func2_power / material0["auxi_func_p"] - num = int(num) + 1 if num > int(num) else int(num) - return (name,num,material_second_f) - rtn = list(map(get_num, a)) - rtn = list(filter(lambda x:x[1]<=material_max_num, rtn)) - - def check_material(material0): - if material0[1] > material_max_num: - return False - material_t = material.main_func_t if material0[2][1] else material.auxi_func_t - a = material_table.search(material_t == material0[2][0]) - if a == []: - return False - return True - - rtn = list(filter(check_material, rtn)) - rtn = list(map(lambda x: (x[0],x[1]), rtn)) - return rtn - -grade_str = "一二三四五六七八九" - -def sort_yaocai(text,medicine_select,material_num): - material_need_dict = {} - if medicine_select != "无": - material_need_list = get_need_material(medicine_select,material_max_num=material_num) - for name,num in material_need_list: - material_need_dict[name[:-4]] = num - print(material_need_dict) - regex = re.compile("名字:.+\n品级:.+\n.+\n.+\n拥有数量:\d+") - yaocai_l = regex.findall(text) - rtn = [] - for yaocai in yaocai_l: - yaocai = yaocai.split("\n") - name = yaocai[0][3:] - num = int(yaocai[-1][5:]) - grade = grade_str.index(yaocai[1][3])+1 - - flag = material_need_dict.get(name) - if flag is not None: - if num >= flag: - flag = "+" - else: - num = f"{num}({flag})" - flag = "-" - rtn.append((name,grade,num,flag)) - return rtn diff --git a/spaces/chongjie/PoseDiffusion_MVP/util/metric.py b/spaces/chongjie/PoseDiffusion_MVP/util/metric.py deleted file mode 100644 index a588405ccd4e0f84d210f0661611db3d84bc4bdb..0000000000000000000000000000000000000000 --- a/spaces/chongjie/PoseDiffusion_MVP/util/metric.py +++ /dev/null @@ -1,22 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -import random -import numpy as np -import torch - - -def compute_ARE(rotation1, rotation2): - if isinstance(rotation1, torch.Tensor): - rotation1 = rotation1.cpu().detach().numpy() - if isinstance(rotation2, torch.Tensor): - rotation2 = rotation2.cpu().detach().numpy() - - R_rel = np.einsum("Bij,Bjk ->Bik", rotation1.transpose(0, 2, 1), rotation2) - t = (np.trace(R_rel, axis1=1, axis2=2) - 1) / 2 - theta = np.arccos(np.clip(t, -1, 1)) - error = theta * 180 / np.pi - return np.minimum(error, np.abs(180 - error)) diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/chardet/gb2312prober.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/chardet/gb2312prober.py deleted file mode 100644 index d423e7311e2fbd9a014de808c107e96ad11c66e5..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/chardet/gb2312prober.py +++ /dev/null @@ -1,47 +0,0 @@ -######################## BEGIN LICENSE BLOCK ######################## -# The Original Code is mozilla.org code. -# -# The Initial Developer of the Original Code is -# Netscape Communications Corporation. -# Portions created by the Initial Developer are Copyright (C) 1998 -# the Initial Developer. All Rights Reserved. -# -# Contributor(s): -# Mark Pilgrim - port to Python -# -# This library is free software; you can redistribute it and/or -# modify it under the terms of the GNU Lesser General Public -# License as published by the Free Software Foundation; either -# version 2.1 of the License, or (at your option) any later version. -# -# This library is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -# Lesser General Public License for more details. -# -# You should have received a copy of the GNU Lesser General Public -# License along with this library; if not, write to the Free Software -# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA -# 02110-1301 USA -######################### END LICENSE BLOCK ######################### - -from .chardistribution import GB2312DistributionAnalysis -from .codingstatemachine import CodingStateMachine -from .mbcharsetprober import MultiByteCharSetProber -from .mbcssm import GB2312_SM_MODEL - - -class GB2312Prober(MultiByteCharSetProber): - def __init__(self) -> None: - super().__init__() - self.coding_sm = CodingStateMachine(GB2312_SM_MODEL) - self.distribution_analyzer = GB2312DistributionAnalysis() - self.reset() - - @property - def charset_name(self) -> str: - return "GB2312" - - @property - def language(self) -> str: - return "Chinese" diff --git a/spaces/cihyFjudo/fairness-paper-search/Download Movies from IMDb Heroine No.1 Full Movie 1080p and Other Titles.md b/spaces/cihyFjudo/fairness-paper-search/Download Movies from IMDb Heroine No.1 Full Movie 1080p and Other Titles.md deleted file mode 100644 index cd5f8fd91cc71398d5bd9c5f1b33c89a2c332f4e..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Download Movies from IMDb Heroine No.1 Full Movie 1080p and Other Titles.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Heroine No.1 full movie 1080p download movies


      Download Zip ->->->-> https://tinurli.com/2uwjwJ



      -
      -
      -

      diff --git a/spaces/cihyFjudo/fairness-paper-search/Peaches the Teaches of Peaches A Review of Her Influential and Outrageous Album..md b/spaces/cihyFjudo/fairness-paper-search/Peaches the Teaches of Peaches A Review of Her Influential and Outrageous Album..md deleted file mode 100644 index 39377c9a75ecd4e2afd298c1908b28c96da21256..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Peaches the Teaches of Peaches A Review of Her Influential and Outrageous Album..md +++ /dev/null @@ -1,5 +0,0 @@ - -

      The flood took its toll on the trees and shrubs. Aunt Ruby lost her fruit and nut trees, purple plums, peaches and apples along with an English walnut tree which was an oddity of this particular section of the country.

      -

      peaches the teaches of peaches torrent


Download Zip https://tinurli.com/2uwjQk



      -
      -
      \ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/RailworksTS2015AerosoftSBBRoute1Addontorrent.md b/spaces/cihyFjudo/fairness-paper-search/RailworksTS2015AerosoftSBBRoute1Addontorrent.md deleted file mode 100644 index eba8219058f1a3925f6444abcbddf9e6a733e53d..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/RailworksTS2015AerosoftSBBRoute1Addontorrent.md +++ /dev/null @@ -1,6 +0,0 @@ -

      RailworksTS2015AerosoftSBBRoute1Addontorrent


      Download ❤❤❤ https://tinurli.com/2uwjOV



      -
      -
      -

      diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/cffi/lock.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/cffi/lock.py deleted file mode 100644 index db91b7158c4ee9aa653462fe38e79ed1b553db87..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/cffi/lock.py +++ /dev/null @@ -1,30 +0,0 @@ -import sys - -if sys.version_info < (3,): - try: - from thread import allocate_lock - except ImportError: - from dummy_thread import allocate_lock -else: - try: - from _thread import allocate_lock - except ImportError: - from _dummy_thread import allocate_lock - - -##import sys -##l1 = allocate_lock - -##class allocate_lock(object): -## def __init__(self): -## self._real = l1() -## def __enter__(self): -## for i in range(4, 0, -1): -## print sys._getframe(i).f_code -## print -## return self._real.__enter__() -## def __exit__(self, *args): -## return self._real.__exit__(*args) -## def acquire(self, f): -## assert f is False -## return self._real.acquire(f) diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/pens/quartzPen.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/pens/quartzPen.py deleted file mode 100644 index 6e1228d6f2b8bbc78cf52864ccaf3b249a654749..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/pens/quartzPen.py +++ /dev/null @@ -1,44 +0,0 @@ -from fontTools.pens.basePen import BasePen - -from Quartz.CoreGraphics import CGPathCreateMutable, CGPathMoveToPoint -from Quartz.CoreGraphics import CGPathAddLineToPoint, CGPathAddCurveToPoint -from Quartz.CoreGraphics import CGPathAddQuadCurveToPoint, CGPathCloseSubpath - - -__all__ = ["QuartzPen"] - - -class QuartzPen(BasePen): - - """A pen that creates a CGPath - - Parameters - - path: an optional CGPath to add to - - xform: an optional CGAffineTransform to apply to the path - """ - - def __init__(self, glyphSet, path=None, xform=None): - BasePen.__init__(self, glyphSet) - if path is None: - path = CGPathCreateMutable() - self.path = path - self.xform = xform - - def _moveTo(self, pt): - x, y = pt - CGPathMoveToPoint(self.path, self.xform, x, y) - - def _lineTo(self, pt): - x, y = pt - CGPathAddLineToPoint(self.path, self.xform, x, y) - - def _curveToOne(self, p1, p2, p3): - (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3 - CGPathAddCurveToPoint(self.path, self.xform, x1, y1, x2, y2, x3, y3) - - def _qCurveToOne(self, p1, p2): - (x1, y1), (x2, y2) = p1, p2 - CGPathAddQuadCurveToPoint(self.path, self.xform, x1, y1, x2, y2) - - def _closePath(self): - CGPathCloseSubpath(self.path) diff --git a/spaces/codelion/Grounding_DINO_demo/groundingdino/models/GroundingDINO/ms_deform_attn.py b/spaces/codelion/Grounding_DINO_demo/groundingdino/models/GroundingDINO/ms_deform_attn.py deleted file mode 100644 index 489d501bef364020212306d81e9b85c8daa27491..0000000000000000000000000000000000000000 --- a/spaces/codelion/Grounding_DINO_demo/groundingdino/models/GroundingDINO/ms_deform_attn.py +++ /dev/null @@ -1,413 +0,0 @@ -# ------------------------------------------------------------------------ -# Grounding DINO -# url: https://github.com/IDEA-Research/GroundingDINO -# Copyright (c) 2023 IDEA. All Rights Reserved. 
-# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ -# Deformable DETR -# Copyright (c) 2020 SenseTime. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------------------------------ -# Modified from: -# https://github.com/fundamentalvision/Deformable-DETR/blob/main/models/ops/functions/ms_deform_attn_func.py -# https://github.com/fundamentalvision/Deformable-DETR/blob/main/models/ops/modules/ms_deform_attn.py -# https://github.com/open-mmlab/mmcv/blob/master/mmcv/ops/multi_scale_deform_attn.py -# ------------------------------------------------------------------------------------------------ - -import math -import warnings -from typing import Optional - -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.autograd import Function -from torch.autograd.function import once_differentiable -from torch.nn.init import constant_, xavier_uniform_ - -try: - from groundingdino import _C -except: - warnings.warn("Failed to load custom C++ ops. Running on CPU mode Only!") - - -# helpers -def _is_power_of_2(n): - if (not isinstance(n, int)) or (n < 0): - raise ValueError("invalid input for _is_power_of_2: {} (type: {})".format(n, type(n))) - return (n & (n - 1) == 0) and n != 0 - - -class MultiScaleDeformableAttnFunction(Function): - @staticmethod - def forward( - ctx, - value, - value_spatial_shapes, - value_level_start_index, - sampling_locations, - attention_weights, - im2col_step, - ): - ctx.im2col_step = im2col_step - output = _C.ms_deform_attn_forward( - value, - value_spatial_shapes, - value_level_start_index, - sampling_locations, - attention_weights, - ctx.im2col_step, - ) - ctx.save_for_backward( - value, - value_spatial_shapes, - value_level_start_index, - sampling_locations, - attention_weights, - ) - return output - - @staticmethod - @once_differentiable - def backward(ctx, grad_output): - ( - value, - value_spatial_shapes, - value_level_start_index, - sampling_locations, - attention_weights, - ) = ctx.saved_tensors - grad_value, grad_sampling_loc, grad_attn_weight = _C.ms_deform_attn_backward( - value, - value_spatial_shapes, - value_level_start_index, - sampling_locations, - attention_weights, - grad_output, - ctx.im2col_step, - ) - - return grad_value, None, None, grad_sampling_loc, grad_attn_weight, None - - -def multi_scale_deformable_attn_pytorch( - value: torch.Tensor, - value_spatial_shapes: torch.Tensor, - sampling_locations: torch.Tensor, - attention_weights: torch.Tensor, -) -> torch.Tensor: - - bs, _, num_heads, embed_dims = value.shape - _, num_queries, num_heads, num_levels, num_points, _ = sampling_locations.shape - value_list = value.split([H_ * W_ for H_, W_ in value_spatial_shapes], dim=1) - sampling_grids = 2 * sampling_locations - 1 - sampling_value_list = [] - for level, (H_, W_) in enumerate(value_spatial_shapes): - # bs, H_*W_, num_heads, embed_dims -> - # bs, H_*W_, num_heads*embed_dims -> - # bs, num_heads*embed_dims, H_*W_ -> - # bs*num_heads, embed_dims, H_, W_ - value_l_ = ( - value_list[level].flatten(2).transpose(1, 2).reshape(bs * num_heads, embed_dims, H_, W_) - ) - # bs, num_queries, num_heads, num_points, 2 -> - # bs, num_heads, num_queries, num_points, 2 -> - # bs*num_heads, num_queries, num_points, 2 - sampling_grid_l_ = sampling_grids[:, :, :, level].transpose(1, 2).flatten(0, 1) - # bs*num_heads, embed_dims, 
num_queries, num_points - sampling_value_l_ = F.grid_sample( - value_l_, sampling_grid_l_, mode="bilinear", padding_mode="zeros", align_corners=False - ) - sampling_value_list.append(sampling_value_l_) - # (bs, num_queries, num_heads, num_levels, num_points) -> - # (bs, num_heads, num_queries, num_levels, num_points) -> - # (bs, num_heads, 1, num_queries, num_levels*num_points) - attention_weights = attention_weights.transpose(1, 2).reshape( - bs * num_heads, 1, num_queries, num_levels * num_points - ) - output = ( - (torch.stack(sampling_value_list, dim=-2).flatten(-2) * attention_weights) - .sum(-1) - .view(bs, num_heads * embed_dims, num_queries) - ) - return output.transpose(1, 2).contiguous() - - -class MultiScaleDeformableAttention(nn.Module): - """Multi-Scale Deformable Attention Module used in Deformable-DETR - - `Deformable DETR: Deformable Transformers for End-to-End Object Detection. - `_. - - Args: - embed_dim (int): The embedding dimension of Attention. Default: 256. - num_heads (int): The number of attention heads. Default: 8. - num_levels (int): The number of feature map used in Attention. Default: 4. - num_points (int): The number of sampling points for each query - in each head. Default: 4. - img2col_steps (int): The step used in image_to_column. Defualt: 64. - dropout (float): Dropout layer used in output. Default: 0.1. - batch_first (bool): if ``True``, then the input and output tensor will be - provided as `(bs, n, embed_dim)`. Default: False. `(n, bs, embed_dim)` - """ - - def __init__( - self, - embed_dim: int = 256, - num_heads: int = 8, - num_levels: int = 4, - num_points: int = 4, - img2col_step: int = 64, - batch_first: bool = False, - ): - super().__init__() - if embed_dim % num_heads != 0: - raise ValueError( - "embed_dim must be divisible by num_heads, but got {} and {}".format( - embed_dim, num_heads - ) - ) - head_dim = embed_dim // num_heads - - self.batch_first = batch_first - - if not _is_power_of_2(head_dim): - warnings.warn( - """ - You'd better set d_model in MSDeformAttn to make sure that - each dim of the attention head a power of 2, which is more efficient. - """ - ) - - self.im2col_step = img2col_step - self.embed_dim = embed_dim - self.num_heads = num_heads - self.num_levels = num_levels - self.num_points = num_points - self.sampling_offsets = nn.Linear(embed_dim, num_heads * num_levels * num_points * 2) - self.attention_weights = nn.Linear(embed_dim, num_heads * num_levels * num_points) - self.value_proj = nn.Linear(embed_dim, embed_dim) - self.output_proj = nn.Linear(embed_dim, embed_dim) - - self.init_weights() - - def _reset_parameters(self): - return self.init_weights() - - def init_weights(self): - """ - Default initialization for Parameters of Module. 
- """ - constant_(self.sampling_offsets.weight.data, 0.0) - thetas = torch.arange(self.num_heads, dtype=torch.float32) * ( - 2.0 * math.pi / self.num_heads - ) - grid_init = torch.stack([thetas.cos(), thetas.sin()], -1) - grid_init = ( - (grid_init / grid_init.abs().max(-1, keepdim=True)[0]) - .view(self.num_heads, 1, 1, 2) - .repeat(1, self.num_levels, self.num_points, 1) - ) - for i in range(self.num_points): - grid_init[:, :, i, :] *= i + 1 - with torch.no_grad(): - self.sampling_offsets.bias = nn.Parameter(grid_init.view(-1)) - constant_(self.attention_weights.weight.data, 0.0) - constant_(self.attention_weights.bias.data, 0.0) - xavier_uniform_(self.value_proj.weight.data) - constant_(self.value_proj.bias.data, 0.0) - xavier_uniform_(self.output_proj.weight.data) - constant_(self.output_proj.bias.data, 0.0) - - def freeze_sampling_offsets(self): - print("Freeze sampling offsets") - self.sampling_offsets.weight.requires_grad = False - self.sampling_offsets.bias.requires_grad = False - - def freeze_attention_weights(self): - print("Freeze attention weights") - self.attention_weights.weight.requires_grad = False - self.attention_weights.bias.requires_grad = False - - def forward( - self, - query: torch.Tensor, - key: Optional[torch.Tensor] = None, - value: Optional[torch.Tensor] = None, - query_pos: Optional[torch.Tensor] = None, - key_padding_mask: Optional[torch.Tensor] = None, - reference_points: Optional[torch.Tensor] = None, - spatial_shapes: Optional[torch.Tensor] = None, - level_start_index: Optional[torch.Tensor] = None, - **kwargs - ) -> torch.Tensor: - - """Forward Function of MultiScaleDeformableAttention - - Args: - query (torch.Tensor): Query embeddings with shape - `(num_query, bs, embed_dim)` - key (torch.Tensor): Key embeddings with shape - `(num_key, bs, embed_dim)` - value (torch.Tensor): Value embeddings with shape - `(num_key, bs, embed_dim)` - query_pos (torch.Tensor): The position embedding for `query`. Default: None. - key_padding_mask (torch.Tensor): ByteTensor for `query`, with shape `(bs, num_key)`, - indicating which elements within `key` to be ignored in attention. - reference_points (torch.Tensor): The normalized reference points - with shape `(bs, num_query, num_levels, 2)`, - all elements is range in [0, 1], top-left (0, 0), - bottom-right (1, 1), including padding are. - or `(N, Length_{query}, num_levels, 4)`, add additional - two dimensions `(h, w)` to form reference boxes. - spatial_shapes (torch.Tensor): Spatial shape of features in different levels. - With shape `(num_levels, 2)`, last dimension represents `(h, w)`. - level_start_index (torch.Tensor): The start index of each level. A tensor with - shape `(num_levels, )` which can be represented as - `[0, h_0 * w_0, h_0 * w_0 + h_1 * w_1, ...]`. 
- - Returns: - torch.Tensor: forward results with shape `(num_query, bs, embed_dim)` - """ - - if value is None: - value = query - - if query_pos is not None: - query = query + query_pos - - if not self.batch_first: - # change to (bs, num_query ,embed_dims) - query = query.permute(1, 0, 2) - value = value.permute(1, 0, 2) - - bs, num_query, _ = query.shape - bs, num_value, _ = value.shape - - assert (spatial_shapes[:, 0] * spatial_shapes[:, 1]).sum() == num_value - - value = self.value_proj(value) - if key_padding_mask is not None: - value = value.masked_fill(key_padding_mask[..., None], float(0)) - value = value.view(bs, num_value, self.num_heads, -1) - sampling_offsets = self.sampling_offsets(query).view( - bs, num_query, self.num_heads, self.num_levels, self.num_points, 2 - ) - attention_weights = self.attention_weights(query).view( - bs, num_query, self.num_heads, self.num_levels * self.num_points - ) - attention_weights = attention_weights.softmax(-1) - attention_weights = attention_weights.view( - bs, - num_query, - self.num_heads, - self.num_levels, - self.num_points, - ) - - # bs, num_query, num_heads, num_levels, num_points, 2 - if reference_points.shape[-1] == 2: - offset_normalizer = torch.stack([spatial_shapes[..., 1], spatial_shapes[..., 0]], -1) - sampling_locations = ( - reference_points[:, :, None, :, None, :] - + sampling_offsets / offset_normalizer[None, None, None, :, None, :] - ) - elif reference_points.shape[-1] == 4: - sampling_locations = ( - reference_points[:, :, None, :, None, :2] - + sampling_offsets - / self.num_points - * reference_points[:, :, None, :, None, 2:] - * 0.5 - ) - else: - raise ValueError( - "Last dim of reference_points must be 2 or 4, but get {} instead.".format( - reference_points.shape[-1] - ) - ) - - if torch.cuda.is_available() and value.is_cuda: - halffloat = False - if value.dtype == torch.float16: - halffloat = True - value = value.float() - sampling_locations = sampling_locations.float() - attention_weights = attention_weights.float() - - output = MultiScaleDeformableAttnFunction.apply( - value, - spatial_shapes, - level_start_index, - sampling_locations, - attention_weights, - self.im2col_step, - ) - - if halffloat: - output = output.half() - else: - output = multi_scale_deformable_attn_pytorch( - value, spatial_shapes, sampling_locations, attention_weights - ) - - output = self.output_proj(output) - - if not self.batch_first: - output = output.permute(1, 0, 2) - - return output - - -def create_dummy_class(klass, dependency, message=""): - """ - When a dependency of a class is not available, create a dummy class which throws ImportError - when used. - - Args: - klass (str): name of the class. - dependency (str): name of the dependency. - message: extra message to print - Returns: - class: a class object - """ - err = "Cannot import '{}', therefore '{}' is not available.".format(dependency, klass) - if message: - err = err + " " + message - - class _DummyMetaClass(type): - # throw error on class attribute access - def __getattr__(_, __): # noqa: B902 - raise ImportError(err) - - class _Dummy(object, metaclass=_DummyMetaClass): - # throw error on constructor - def __init__(self, *args, **kwargs): - raise ImportError(err) - - return _Dummy - - -def create_dummy_func(func, dependency, message=""): - """ - When a dependency of a function is not available, create a dummy function which throws - ImportError when used. - - Args: - func (str): name of the function. - dependency (str or list[str]): name(s) of the dependency. 
- message: extra message to print - Returns: - function: a function object - """ - err = "Cannot import '{}', therefore '{}' is not available.".format(dependency, func) - if message: - err = err + " " + message - - if isinstance(dependency, (list, tuple)): - dependency = ",".join(dependency) - - def _dummy(*args, **kwargs): - raise ImportError(err) - - return _dummy diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/h263_parser.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/h263_parser.c deleted file mode 100644 index f70a7911777fc448d314e1e229a0ec075a332298..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/h263_parser.c +++ /dev/null @@ -1,95 +0,0 @@ -/* - * H.263 parser - * Copyright (c) 2002-2004 Michael Niedermayer - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * H.263 parser - */ - -#include "parser.h" - -static int h263_find_frame_end(ParseContext *pc, const uint8_t *buf, int buf_size) -{ - int vop_found, i; - uint32_t state; - - vop_found= pc->frame_start_found; - state= pc->state; - - i=0; - if(!vop_found){ - for(i=0; i>(32-22) == 0x20){ - i++; - vop_found=1; - break; - } - } - } - - if(vop_found){ - for(; i>(32-22) == 0x20){ - pc->frame_start_found=0; - pc->state=-1; - return i-3; - } - } - } - pc->frame_start_found= vop_found; - pc->state= state; - - return END_NOT_FOUND; -} - -static int h263_parse(AVCodecParserContext *s, - AVCodecContext *avctx, - const uint8_t **poutbuf, int *poutbuf_size, - const uint8_t *buf, int buf_size) -{ - ParseContext *pc = s->priv_data; - int next; - - if (s->flags & PARSER_FLAG_COMPLETE_FRAMES) { - next = buf_size; - } else { - next = h263_find_frame_end(pc, buf, buf_size); - - if (ff_combine_frame(pc, next, &buf, &buf_size) < 0) { - *poutbuf = NULL; - *poutbuf_size = 0; - return buf_size; - } - } - - *poutbuf = buf; - *poutbuf_size = buf_size; - return next; -} - -const AVCodecParser ff_h263_parser = { - .codec_ids = { AV_CODEC_ID_H263 }, - .priv_data_size = sizeof(ParseContext), - .parser_parse = h263_parse, - .parser_close = ff_parse_close, -}; diff --git a/spaces/congsaPfin/Manga-OCR/logs/Chess Live - The Most Popular Chess App for Android Devices - Download Here.md b/spaces/congsaPfin/Manga-OCR/logs/Chess Live - The Most Popular Chess App for Android Devices - Download Here.md deleted file mode 100644 index 329de5c733894d7d59f2d0080fda763b0da6cb2d..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Chess Live - The Most Popular Chess App for Android Devices - Download Here.md +++ /dev/null @@ -1,95 +0,0 @@ - -

      Chess Live APK Download: How to Play Chess Online with Friends

      -

      Do you love chess and want to play it online with your friends? If yes, then you might be interested in downloading Chess Live APK, a free and fun app that lets you enjoy chess games with players from all over the world. In this article, we will show you what Chess Live APK is, why you should download it, how to download and install it, and how to play chess online with it. Let's get started!

      -

      chess live apk download


      DOWNLOAD ✯✯✯ https://urlca.com/2uO56t



      -

      Introduction

      -

      Chess is one of the oldest and most popular board games in history. It is a game of strategy, logic, and skill that can challenge your mind and improve your cognitive abilities. Chess can also be a great way to socialize and have fun with your friends, especially if you play it online.

      -

      What is Chess Live APK?

      -

      Chess Live APK is an Android application that allows you to play chess online with other players from around the world. You can choose from different game modes, such as classic, blitz, bullet, or chess960. You can also chat with your opponents, send emojis, and view your statistics and rankings. Chess Live APK is compatible with Android devices running version 4.0.3 or higher.

      -

      Why download Chess Live APK?

      -

      There are many reasons why you might want to download Chess Live APK on your device. Here are some of them:

      -
        -
      • It is free and easy to use.
      • -
      • It has a simple and elegant interface that makes playing chess enjoyable.
      • -
      • It has a large and active community of chess players from different countries and skill levels.
      • -
      • It offers various game modes and features that suit your preferences and needs.
      • -
      • It supports offline mode, so you can play chess even without an internet connection.
      • -
      -

      How to download and install Chess Live APK

      -

      If you are interested in downloading Chess Live APK on your device, you need to follow these steps:

      -

      Step 1: Find a reliable source

      -

      Since Chess Live APK is not available on the Google Play Store, you need to find a trustworthy website that provides the latest version of the app. You can use a search engine like Bing or Google to find one, or you can use this link to download the app directly from Chess.com, one of the most reputable chess websites in the world.

      -

      Step 2: Enable unknown sources

      -

Before you can install Chess Live APK on your device, you need to allow installs from unknown sources in your settings. This lets you install apps that do not come from the Google Play Store. On older versions of Android, go to Settings > Security > Unknown Sources and toggle it on; on Android 8.0 and later, the permission is granted per app, so you will be asked to allow your browser or file manager to install unknown apps when you open the APK. You might see a warning message; as long as the file came from a source you trust, you can proceed.

      -

      Step 3: Download the APK file

      -

      Once you have enabled unknown sources, you can download the APK file from the website that you have chosen. You might see a pop-up window asking you to confirm the download. Tap on OK or Download to start the process. The file size is about 15 MB, so it should not take long to finish.

      -


      -

      Step 4: Install the APK file

      -

      After the download is complete, you can install the APK file on your device. You can either tap on the notification bar or go to your file manager and locate the file. Tap on it and follow the instructions on the screen. You might need to grant some permissions to the app, such as access to your storage, camera, microphone, and location. Tap on Allow or Accept to grant them. Once the installation is done, you can open the app and start playing chess online.

      -
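As an optional extra step before you tap Install, you can check that the file you downloaded is intact by computing its checksum and comparing it against a checksum published by the download site, if one is listed. The short Python sketch below shows the idea; the file name is only an example, so point it at wherever your browser saved the APK.

```python
# Minimal sketch: print the SHA-256 checksum of a downloaded APK so it can be
# compared with a checksum published by the download site (if one is listed).
# "chess-live.apk" is an example name; use the actual path of your download.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so a large APK does not need to fit in memory at once.
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    print(sha256_of("chess-live.apk"))
```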

      How to play chess online with Chess Live APK

      -

      Playing chess online with Chess Live APK is very easy and fun. You just need to follow these steps:

      -

      Step 1: Create an account or log in with Facebook

      -

      When you open the app for the first time, you will see two options: Create Account or Log In with Facebook. If you already have an account on Chess.com, you can use the same credentials to log in. If not, you can create a new account by entering your username, email, and password. You can also log in with your Facebook account if you prefer.

      -

      Step 2: Choose a game mode

      -

      Once you are logged in, you can choose from different game modes to play chess online. You can tap on Play Now to start a random game with another player, or you can tap on Custom Game to set your own preferences, such as time control, rating range, color, and variant. You can also tap on Play Offline to play against the computer or a friend on the same device.

      -

      Step 3: Invite or join a friend

      -

      If you want to play chess online with a specific friend, you can invite them or join them in a game. To invite a friend, tap on the Friends icon at the bottom of the screen and select the friend that you want to invite. You can also search for their username or email if they are not on your list. To join a friend, tap on the Notifications icon at the top of the screen and accept their invitation.

      -

      Step 4: Enjoy the game

      -

      Once you are in a game, you can enjoy playing chess online with your friend or another player. You can move your pieces by dragging them on the board or tapping on them and then tapping on the destination square. You can also chat with your opponent, send emojis, and view your statistics and rankings. You can also resign, offer a draw, or request a takeback if you want.

      -

      Conclusion

      -

      Chess Live APK is a great app that lets you play chess online with your friends or other players from around the world. It is free, easy to use, and has various game modes and features that make playing chess enjoyable. You can download Chess Live APK from this link and follow the steps in this article to install it and play it on your device. Have fun and improve your chess skills with Chess Live APK!

      -

      FAQs

      -

      Here are some frequently asked questions about Chess Live APK:

      -
        -
      1. Is Chess Live APK safe to download and install?
      2. -

        Yes, Chess Live APK is safe to download and install as long as you get it from a reliable source like Chess.com. It does not contain any viruses or malware that can harm your device or data.

        -
      3. Is Chess Live APK free to use?
      4. -

        Yes, Chess Live APK is free to use and does not require any subscription or payment. However, it does have some optional in-app purchases that can enhance your experience, such as premium membership, coins, themes, and puzzles.

        -
      5. Can I play chess offline with Chess Live APK?
      6. -

        Yes, you can play chess offline with Chess Live APK by tapping on Play Offline on the main menu. You can play against the computer or a friend on the same device.

        -
      7. Can I play chess variants with Chess Live APK?
      8. -

        Yes, you can play chess variants with Chess Live APK by tapping on Custom Game and selecting the variant that you want to play. Some of the variants available are chess960, king of the hill, three-check, crazyhouse, and horde.

        -
      9. How can I improve my chess skills with Chess Live APK?
      10. -

        You can improve your chess skills with Chess Live APK by playing regularly against different opponents and analyzing your games. You can also use the app's features such as puzzles, lessons, articles, videos, and coaches to learn new strategies and techniques.

        -

      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/God of Shinobi Rebirth APK - The Best Guide for Strategy and Formation.md b/spaces/congsaPfin/Manga-OCR/logs/God of Shinobi Rebirth APK - The Best Guide for Strategy and Formation.md deleted file mode 100644 index 5d62db50a0e03e5e346ed149ecc16b9f5e3049c9..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/God of Shinobi Rebirth APK - The Best Guide for Strategy and Formation.md +++ /dev/null @@ -1,110 +0,0 @@ - -

      God of Shinobi Rebirth APK: A Ninja Game for Android Fans

      -

      If you are a fan of ninja games, you might want to check out God of Shinobi Rebirth APK, a new game for Android devices that lets you experience the classic ultimate jutsu in exciting battles. In this game, you can forge your own ninja squad by matching up heroes with various formations, challenge opponents and join forces with allies in different PvP modes, and enjoy the generous rewards from AFK farming. In this article, we will tell you more about God of Shinobi Rebirth APK, how to download and install it, and some tips and tricks for playing it.

      -

      god of shinobi rebirth apk


      Downloadhttps://urlca.com/2uOeH9



      -

      Introduction

      -

      What is God of Shinobi Rebirth APK?

      -

      God of Shinobi Rebirth APK is a hack-and-slash action game that is inspired by the popular anime and manga series Naruto. The game features many familiar characters from the Naruto universe, such as Naruto, Sasuke, Sakura, Kakashi, Itachi, Madara, and more. You can collect and upgrade these heroes, equip them with powerful ninja tools, and use their unique skills and ultimate jutsu in battle.

      -

      Why should you play God of Shinobi Rebirth APK?

      -

      There are many reasons why you should play God of Shinobi Rebirth APK, especially if you are a fan of ninja games or Naruto. Here are some of them:

      -
        -
      • The game has stunning graphics and animations that bring the Naruto world to life.
      • -
      • The game has thrilling PvP competitions that let you test your skills against other players from around the world.
      • -
      • The game has utmost strategic battles that require you to use the best formation and strategy for each situation.
      • -
      • The game has an easy and convenient AFK system that lets you gain resources even when you are offline.
      • -
      • The game has a rich and immersive story that follows the original Naruto plot and lets you relive the epic moments.
      • -
      -

      Features of God of Shinobi Rebirth APK

      -

      Thrilling PvP Competitions

      -

      One of the main features of God of Shinobi Rebirth APK is the PvP mode, where you can compete with other players in various ways. You can join the Cross-Server Ladder, where you can climb the ranks and earn rewards based on your performance. You can also participate in the Weekly Championship, where you can fight for glory and honor in a tournament-style format. You can also join the Group Tournament, where you can team up with your friends or guild members and fight against other groups.

      -

      Utmost Strategic Battles

      -

      Another feature of God of Shinobi Rebirth APK is the strategic aspect of the battles. You can choose from different formations, such as front row, back row, center row, etc., to optimize your squad's performance. You can also select from different types of heroes, such as attack, defense, support, etc., to balance your team's strengths and weaknesses. You can also use various ninja tools, such as kunai, shuriken, scrolls, etc., to enhance your heroes' abilities or counter your enemies' moves.

      -

      Enjoy the AFK Rewards

      -


      A third feature of God of Shinobi Rebirth APK is the AFK system, which allows you to gain resources and rewards even when you are not playing the game. You can set your heroes to farm for gold, EXP, materials, and other items in the background, and collect them when you log in again. You can also use the auto-battle function to let your heroes fight for you automatically, and watch the replays later to learn from your mistakes or improve your strategy.

      -

      -

      How to download and install God of Shinobi Rebirth APK?

      -

      Step 1: Download the APK file from the official website

      -

      The first step to download and install God of Shinobi Rebirth APK is to get the APK file from the official website. You can use the link below to access the website and click on the download button. The APK file size is about 100 MB, so make sure you have enough storage space on your device.

      -

      Step 2: Enable unknown sources on your device settings

      -

      The second step to download and install God of Shinobi Rebirth APK is to enable unknown sources on your device settings. This is necessary because the APK file is not from the Google Play Store, and you need to allow your device to install apps from other sources. To do this, go to your device settings, find the security or privacy option, and toggle on the unknown sources option.

      -

      Step 3: Install the APK file and launch the game

      -

      The third step to download and install God of Shinobi Rebirth APK is to install the APK file and launch the game. To do this, locate the downloaded APK file on your device, tap on it, and follow the instructions on the screen. Once the installation is complete, you can open the game and enjoy it.

      -

      Tips and tricks for playing God of Shinobi Rebirth APK

      -

      Choose your ninja squad wisely

      -

      One of the tips for playing God of Shinobi Rebirth APK is to choose your ninja squad wisely. You can have up to six heroes in your squad, each with their own skills and attributes. You should consider the compatibility and synergy of your heroes, as well as their roles and types. For example, you can pair up Naruto and Sasuke for their powerful combo attack, or use Sakura as a healer for your team.

      -

      Upgrade your heroes and ninja tools

      -

      Another tip for playing God of Shinobi Rebirth APK is to upgrade your heroes and ninja tools. You can use gold, materials, and other items to level up your heroes, enhance their skills, awaken their potential, and refine their stats. You can also use ninja tools to boost your heroes' abilities or give them special effects. For example, you can use a kunai to increase your attack speed, or a scroll to increase your defense.

      -

      Use the best formation and strategy for each battle

      -

      A third tip for playing God of Shinobi Rebirth APK is to use the best formation and strategy for each battle. You can choose from different formations, such as front row, back row, center row, etc., to optimize your squad's performance. You can also use different strategies, such as offensive, defensive, balanced, etc., to suit your playstyle. You should also pay attention to the enemy's formation and strategy, and adjust yours accordingly.

      -

      Conclusion

      -

      Summary of the main points

      -

      In conclusion, God of Shinobi Rebirth APK is a hack-and-slash action game that lets you experience the classic ultimate jutsu in exciting battles. The game features many familiar characters from the Naruto universe, thrilling PvP competitions, utmost strategic battles, and an easy and convenient AFK system. The game is easy to download and install on Android devices, and you can use some tips and tricks to improve your gameplay.

      -

      Call to action

      -

      If you are interested in playing God of Shinobi Rebirth APK, you can download it from the official website or scan the QR code below. You can also join the official Facebook group or Discord server to get more updates, news, events, and support from the developers and other players. Don't miss this chance to become a legendary ninja in God of Shinobi Rebirth APK!

[QR code: scan it to download God of Shinobi Rebirth APK]
      -

      Frequently Asked Questions

      -
        -
      • Q: Is God of Shinobi Rebirth APK free to play?
      • -
      • A: Yes, God of Shinobi Rebirth APK is free to play, but it also offers some in-app purchases that can enhance your gaming experience.
      • -
      • Q: Is God of Shinobi Rebirth APK safe to download and install?
      • -
      • A: Yes, God of Shinobi Rebirth APK is safe to download and install, as long as you get it from the official website or scan the QR code provided in this article. You should also enable unknown sources on your device settings before installing the APK file.
      • -
      • Q: What are the system requirements for God of Shinobi Rebirth APK?
      • -
      • A: God of Shinobi Rebirth APK requires Android 4.4 or higher, and at least 2 GB of RAM and 100 MB of storage space on your device.
      • -
      • Q: How can I contact the developers or get support for God of Shinobi Rebirth APK?
      • -
      • A: You can contact the developers or get support for God of Shinobi Rebirth APK by joining the official Facebook group or Discord server. You can also send an email to godofshinobirebirth@gmail.com or visit the official website for more information.
      • -
      • Q: Can I play God of Shinobi Rebirth APK with my friends or other players?
      • -
      • A: Yes, you can play God of Shinobi Rebirth APK with your friends or other players by joining the PvP modes, such as Cross-Server Ladder, Weekly Championship, and Group Tournament. You can also join a guild and chat with other members, or send gifts and invitations to your friends.
      • -

      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/How to Install and Use Supreme Duelist Stickman Mod Menu and Hacks.md b/spaces/congsaPfin/Manga-OCR/logs/How to Install and Use Supreme Duelist Stickman Mod Menu and Hacks.md deleted file mode 100644 index 6d019b3dbc749b9e78bc574256b1bfd31f605e09..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/How to Install and Use Supreme Duelist Stickman Mod Menu and Hacks.md +++ /dev/null @@ -1,85 +0,0 @@ - -

      Supreme Duelist Stickman Apk Mod Menu: Everything You Need to Know


If you love playing action-packed stickman battles, Supreme Duelist Stickman Apk Mod Menu gives you plenty of cheats to play with. For example, you can remove the ads in the game by using the cheat code "ads" and enjoy the game without any interruption or distraction from the ads that pop up during the game. -

    13. God Mode: You can activate the god mode by using the cheat code "god". This will make you invincible and immune to any damage from the enemies or the environment. You can also kill any enemy with one hit.
    14. -
    15. Gravity Control: You can control the gravity in the game by using the cheat code "gravity". You can change the gravity to low, normal, or high, depending on your preference. You can also switch the gravity direction to up, down, left, or right.
    16. -
    17. Other Features: There are many other features that you can access and use in Supreme Duelist Stickman Apk Mod Menu, such as speed hack, size hack, teleport hack, etc. You can explore and experiment with these features to have more fun and challenge in the game.
    18. -
-

How to Download and Install Supreme Duelist Stickman Apk Mod Menu?

-

If you want to download and install Supreme Duelist Stickman Apk Mod Menu on your device, you need to follow these simple steps:

-
    -
  1. First, you need to enable the unknown sources option on your device. To do this, go to Settings > Security > Unknown Sources and toggle it on.
  2. -
  3. Next, you need to download the Supreme Duelist Stickman Apk Mod Menu file from a reliable source. You can use this link to download it safely and quickly.
  4. -
  5. After downloading the file, you need to locate it on your device and tap on it to start the installation process. Follow the instructions on the screen to complete the installation.
  6. -
  7. Once the installation is done, you can launch the game from your app drawer or home screen and enjoy Supreme Duelist Stickman Apk Mod Menu.
  8. -
-

How to Use Supreme Duelist Stickman Apk Mod Menu?

-

To use Supreme Duelist Stickman Apk Mod Menu, you need to follow these simple steps:

-

supreme duelist stickman apk mod menu


Download Zip https://urlca.com/2uO8o4



-
    -
  1. Launch the game from your app drawer or home screen and wait for it to load.
  2. -
  3. On the main menu, tap on the mod menu icon on the top right corner of the screen. This will open a list of cheat codes and hacks that you can use in the game.
  4. -
  5. Select the cheat code or hack that you want to use and tap on it to activate it. You will see a confirmation message on the screen.
  6. -
  7. Enjoy playing the game with Supreme Duelist Stickman Apk Mod Menu.
  8. -
-

Pros and Cons of Supreme Duelist Stickman Apk Mod Menu

-

Supreme Duelist Stickman Apk Mod Menu has its pros and cons that you should be aware of before using it. Here is a table that compares them:

| Pros | Cons |
| --- | --- |
| It gives you unlimited money, skins unlock, no ads, and more. | It may cause some glitches or bugs in the game. |
| It makes the game more fun and exciting. | It may reduce the challenge and difficulty of the game. |
| It allows you to customize your character and gameplay. | It may not work with some devices or versions of the game. |
| It is easy to download, install, and use. | It may violate the terms and conditions of the game developer. |

Alternatives to Supreme Duelist Stickman Apk Mod Menu

-

If you like Supreme Duelist Stickman Apk Mod Menu, you might also like some other similar games or mods that offer similar features and gameplay. Here are some of them:

-
    -
  • Duel: Stickman Edition: This is another stickman fighting game that lets you duel with different weapons and modes against other players online or offline. You can also customize your character and weapons with different skins and upgrades.
  • -
  • Stick Fight: The Game Mobile: This is a mobile version of the popular PC game that lets you fight with up to three other players in chaotic physics-based battles. You can use various weapons and items to knock out your opponents and survive.
  • -
  • Doodle Army 2: Mini Militia: This is a multiplayer shooting game that lets you join up to 12 players in online or offline matches. You can use different weapons, grenades, jetpacks, and power-ups to dominate your enemies.
  • -
-

Conclusion

-

In conclusion, Supreme Duelist Stickman Apk Mod Menu is a modified version of Supreme Duelist Stick man that gives you access to unlimited money, skins unlock, no ads, and more. It is a fun and addictive game where you can fight with different weapons and modes against other stickmen. It also allows you to customize your character and gameplay with various cheat codes and hacks. However, it also has some drawbacks, such as glitches, bugs, reduced challenge, compatibility issues, and possible violations. Therefore, you should use it at your own risk and discretion. If you want to try some other similar games or mods, you can check out Duel: Stickman Edition, Stick Fight: The Game Mobile, or Doodle Army 2: Mini Militia. We hope you enjoyed this article and learned something new about Supreme Duelist Stickman Apk Mod Menu.

-

FAQ

-

Here are some frequently asked questions and answers about Supreme Duelist Stickman Apk Mod Menu:

-
    -
  1. Is Supreme Duelist Stickman Apk Mod Menu safe to use?
  2. -

    Supreme Duelist Stickman Apk Mod Menu is not an official version of the game and it may contain some viruses or malware that can harm your device or data. Therefore, you should download and install it from a reliable source and scan it with an antivirus before using it. You should also backup your data before using it.

    -
  3. Is Supreme Duelist Stickman Apk Mod Menu legal to use?
  4. -

    Supreme Duelist Stickman Apk Mod Menu may violate the terms and conditions of the game developer and the Google Play Store. Therefore, you may face some legal consequences or penalties if you use it. You may also get banned from the game or lose your progress if you use it.

    -
  5. How do I update Supreme Duelist Stickman Apk Mod Menu?
  6. -

    Supreme Duelist Stickman Apk Mod Menu may not work with some devices or versions of the game. Therefore, you need to update it regularly to keep it compatible and functional. You can check for updates from the source where you downloaded it or from the mod menu itself.

    -
  7. How do I uninstall Supreme Duelist Stickman Apk Mod Menu?
  8. -

    If you want to uninstall Supreme Duelist Stickman Apk Mod Menu from your device, you need to follow these steps:

    -

    -
      -
    • Go to Settings > Apps > Supreme Duelist Stickman and tap on Uninstall.
    • -
    • Delete the Supreme Duelist Stickman Apk Mod Menu file from your device.
    • -
    • Clear the cache and data of the original game if you want to play it again.
    • -
    -
  9. Where can I find more information about Supreme Duelist Stickman Apk Mod Menu?
  10. -

    If you want to find more information about Supreme Duelist Stickman Apk Mod Menu, you can visit these websites:

    -
      -
    • [Supreme Duelist Stickman - Apps on Google Play]
    • -
    • [Supreme Duelist Stickman MOD APK 2.2.1 (Unlimited Money) Download]
    • -
    • [Supreme Duelist Stickman MOD APK 2.2.1 (Unlimited Money) for Android]
    • -
    -

-
-
\ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/RFS Real Flight Simulator 1.6.7 APK How to Become a Pilot in Multiplayer Mode.md b/spaces/congsaPfin/Manga-OCR/logs/RFS Real Flight Simulator 1.6.7 APK How to Become a Pilot in Multiplayer Mode.md deleted file mode 100644 index 506606fa6f2dc0e9b909d060975e8903f9918672..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/RFS Real Flight Simulator 1.6.7 APK How to Become a Pilot in Multiplayer Mode.md +++ /dev/null @@ -1,93 +0,0 @@ -
-

RFS Real Flight Simulator 1.6.7 APK: A Complete Guide

-

If you are a fan of flying and aviation, you might have heard of rfs real flight simulator, a flight simulator game developed by RORTOS SRL. This game offers you a realistic and accurate simulation of aircraft from the world’s leading companies such as Airbus, Boeing and Embraer. You can fly and land in airports around the world with iconic airplanes, explore sceneries and airports in high resolution with satellite maps, 3D buildings, runways, procedures and air traffic, chat with other pilots and join them in multiplayer, manage flight plans and interact with ATC controllers, customize all airplanes, their gauges, failures and weather conditions, and much more.

-

rfs real flight simulator 1.6.7 apk


DOWNLOADhttps://urlca.com/2uO5Z9



-

In this article, we will show you how to download and install rfs real flight simulator 1.6.7 apk on your Android device, how to play the game, and some tips and tricks to enhance your flying experience.

-

How to download and install rfs real flight simulator 1.6.7 apk?

-

To download and install rfs real flight simulator 1.6.7 apk on your Android device, you need to meet some requirements first:

-
    -
  • Your device must have Android version 5.0 or higher.
  • -
  • Your device must have at least 2 GB of RAM and 500 MB of free storage space.
  • -
  • Your device must have a stable internet connection.
  • -
-

Once you have checked these requirements, you can follow these steps:

-
    -
1. Go to [this link] or [this link] to download the apk file of rfs real flight simulator 1.6.7.
  2. -
  3. After downloading the apk file, go to your device settings and enable the installation of apps from unknown sources.
  4. -
  5. Locate the apk file on your device storage and tap on it to start the installation process.
  6. -
  7. Follow the instructions on the screen to complete the installation.
  8. -
  9. Launch the game from your app drawer or home screen.
  10. -
-

How to play rfs real flight simulator 1.6.7 apk?

-

Now that you have installed rfs real flight simulator 1.6.7 apk on your device, you are ready to start flying. Here are some basic steps to play the game:

-


-
    -
  1. Choose your aircraft and airport from the main menu. You can select from more than 300 different aircraft with detailed 3D cockpits, working parts and lights, and more than 40 airports with realistic buildings, vehicles, taxiways and departure and approach procedures.
  2. -
3. Create and manage your flight plan from the advanced flight plan menu. You can set your departure and arrival airports, waypoints, altitude, speed, fuel, weight, and other parameters. You can also import and export your flight plans from other sources. (A simplified sketch of what a flight plan contains is shown right after this list.)
  4. -
  5. Interact with ATC and other pilots from the communication menu. You can request clearance, taxi, takeoff, landing, and other services from the ATC. You can also chat with other pilots in real time or in multiplayer mode. You can use voice or text messages, and choose from different languages and accents.
  6. -
-
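To make the idea of a flight plan more concrete, here is a purely illustrative Python sketch of the kind of information such a plan carries. RFS does not publish its internal format, so every class and field name below is an assumption made only for this example; the airport codes are just a sample route.

```python
# Illustrative sketch only: RFS's real flight plan format is not public, so the
# classes and field names here are assumptions used to show what a plan holds.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Waypoint:
    name: str         # fix, VOR or airport identifier the route passes over
    altitude_ft: int  # planned altitude when crossing this point
    speed_kts: int    # planned speed when crossing this point

@dataclass
class FlightPlan:
    departure: str    # ICAO code of the departure airport
    arrival: str      # ICAO code of the arrival airport
    waypoints: List[Waypoint] = field(default_factory=list)

# Example: a short plan from Rome Fiumicino to Paris Charles de Gaulle
plan = FlightPlan(
    departure="LIRF",
    arrival="LFPG",
    waypoints=[Waypoint("ELB", 24000, 320), Waypoint("LERGA", 36000, 450)],
)
print(f"{plan.departure} -> {plan.arrival} via {len(plan.waypoints)} waypoints")
```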

Tips and tricks for rfs real flight simulator 1.6.7 apk

-

To make your flying experience more enjoyable and realistic, here are some tips and tricks for rfs real flight simulator 1.6.7 apk:

-
    -
  • Improve your landing skills by practicing with different weather conditions, wind directions, runway lengths, and aircraft types. You can also use the landing assistance feature to get guidance on the optimal approach angle, speed, and flare.
  • -
  • Customize your instruments and gauges by tapping on them and selecting the options you want. You can change the size, position, color, and transparency of each instrument. You can also choose from different types of instruments such as analog, digital, glass cockpit, or HUD.
  • -
  • Create and share your own liveries by using the livery editor feature. You can paint your aircraft with different colors, patterns, logos, and texts. You can also download and apply liveries created by other users from the online gallery.
  • -
-

Conclusion

-

RFS Real Flight Simulator 1.6.7 APK is a flight simulator game that offers you a realistic and accurate simulation of aircraft from the world’s leading companies such as Airbus, Boeing and Embraer. You can fly and land in airports around the world with iconic airplanes, explore sceneries and airports in high resolution with satellite maps, 3D buildings, runways, procedures and air traffic, chat with other pilots and join them in multiplayer, manage flight plans and interact with ATC controllers, customize all airplanes, their gauges, failures and weather conditions, and much more.

-

If you are interested in flying and aviation, you should definitely try this game. You can download and install rfs real flight simulator 1.6.7 apk on your Android device by following the steps we have provided in this article. You can also use the tips and tricks we have shared to enhance your flying experience.

-

So what are you waiting for? Download rfs real flight simulator 1.6.7 apk now and enjoy the thrill of flying!

-

FAQs

-

Here are some frequently asked questions about rfs real flight simulator 1.6.7 apk:

-
    -
  1. What is the difference between rfs real flight simulator pro and free versions?
  2. -

    The pro version of rfs real flight simulator offers you more features and content than the free version. For example, the pro version allows you to access real time flights and multiplayer mode, use satellite terrain and heightmaps, deal with failures and emergencies, get more planes and airports, create custom liveries, etc.

    -
  3. How to access real time flights and multiplayer mode in rfs real flight simulator?
  4. -

    To access real time flights and multiplayer mode in rfs real flight simulator, you need to have the pro version of the game. Then you can select the real time flights option from the main menu to see all the flights happening in the world at that moment. You can join any of them or create your own flight with custom settings. You can also select the multiplayer option from the main menu to join or create a room with other players.

    -
  5. How to use satellite terrain and heightmaps in rfs real flight simulator?
  6. -

    To use satellite terrain and heightmaps in rfs real flight simulator, you need to have the pro version of the game. Then you can enable the satellite terrain option from the settings menu to see high resolution images of the ground below you. You can also enable the heightmaps option from the settings menu to see realistic elevation data of the terrain.

    -
  7. How to deal with failures and emergencies in rfs real flight simulator?
  8. -

To deal with failures and emergencies in rfs real flight simulator, you need to have the pro version of the game. You can turn failures on and customize them, along with the weather conditions, from the settings menu, and then practice handling problems such as an engine failure or a rough approach until you can bring the aircraft down safely. Practicing these situations regularly is also a good way to sharpen your skills as a virtual pilot.

    -
    -
    \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/Aiseesoft 3D Converter 6.3.38 Patch [SWEG].rar Serial Key Keygen.md b/spaces/contluForse/HuggingGPT/assets/Aiseesoft 3D Converter 6.3.38 Patch [SWEG].rar Serial Key Keygen.md deleted file mode 100644 index 57ad550f9cb07d63788ca8b5650cb9dd7f688398..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Aiseesoft 3D Converter 6.3.38 Patch [SWEG].rar Serial Key Keygen.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Aiseesoft 3D Converter 6.3.38 Patch [SWEG].rar Serial Key Keygen


    Download Zip >>> https://ssurll.com/2uzyka



    -
    -Apowersoft Screen Recorder Pro 2.2.5 Crack & Serial Key [Latest] .... Unity 3D Pro 4.6 ... Aiseesoft 3D Converter 6.3.38 + Patch [SWEG].rar ... Unity 3D Pro 4.6 ... 1fdad05405
    -
    -
    -

    diff --git a/spaces/cooelf/Multimodal-CoT/timm/data/mixup.py b/spaces/cooelf/Multimodal-CoT/timm/data/mixup.py deleted file mode 100644 index 38477548a070a1a338ed18ddc74cdaf5050f84be..0000000000000000000000000000000000000000 --- a/spaces/cooelf/Multimodal-CoT/timm/data/mixup.py +++ /dev/null @@ -1,316 +0,0 @@ -""" Mixup and Cutmix - -Papers: -mixup: Beyond Empirical Risk Minimization (https://arxiv.org/abs/1710.09412) - -CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features (https://arxiv.org/abs/1905.04899) - -Code Reference: -CutMix: https://github.com/clovaai/CutMix-PyTorch - -Hacked together by / Copyright 2020 Ross Wightman -""" -import numpy as np -import torch - - -def one_hot(x, num_classes, on_value=1., off_value=0., device='cuda'): - x = x.long().view(-1, 1) - return torch.full((x.size()[0], num_classes), off_value, device=device).scatter_(1, x, on_value) - - -def mixup_target(target, num_classes, lam=1., smoothing=0.0, device='cuda'): - off_value = smoothing / num_classes - on_value = 1. - smoothing + off_value - y1 = one_hot(target, num_classes, on_value=on_value, off_value=off_value, device=device) - y2 = one_hot(target.flip(0), num_classes, on_value=on_value, off_value=off_value, device=device) - return y1 * lam + y2 * (1. - lam) - - -def rand_bbox(img_shape, lam, margin=0., count=None): - """ Standard CutMix bounding-box - Generates a random square bbox based on lambda value. This impl includes - support for enforcing a border margin as percent of bbox dimensions. - - Args: - img_shape (tuple): Image shape as tuple - lam (float): Cutmix lambda value - margin (float): Percentage of bbox dimension to enforce as margin (reduce amount of box outside image) - count (int): Number of bbox to generate - """ - ratio = np.sqrt(1 - lam) - img_h, img_w = img_shape[-2:] - cut_h, cut_w = int(img_h * ratio), int(img_w * ratio) - margin_y, margin_x = int(margin * cut_h), int(margin * cut_w) - cy = np.random.randint(0 + margin_y, img_h - margin_y, size=count) - cx = np.random.randint(0 + margin_x, img_w - margin_x, size=count) - yl = np.clip(cy - cut_h // 2, 0, img_h) - yh = np.clip(cy + cut_h // 2, 0, img_h) - xl = np.clip(cx - cut_w // 2, 0, img_w) - xh = np.clip(cx + cut_w // 2, 0, img_w) - return yl, yh, xl, xh - - -def rand_bbox_minmax(img_shape, minmax, count=None): - """ Min-Max CutMix bounding-box - Inspired by Darknet cutmix impl, generates a random rectangular bbox - based on min/max percent values applied to each dimension of the input image. - - Typical defaults for minmax are usually in the .2-.3 for min and .8-.9 range for max. - - Args: - img_shape (tuple): Image shape as tuple - minmax (tuple or list): Min and max bbox ratios (as percent of image size) - count (int): Number of bbox to generate - """ - assert len(minmax) == 2 - img_h, img_w = img_shape[-2:] - cut_h = np.random.randint(int(img_h * minmax[0]), int(img_h * minmax[1]), size=count) - cut_w = np.random.randint(int(img_w * minmax[0]), int(img_w * minmax[1]), size=count) - yl = np.random.randint(0, img_h - cut_h, size=count) - xl = np.random.randint(0, img_w - cut_w, size=count) - yu = yl + cut_h - xu = xl + cut_w - return yl, yu, xl, xu - - -def cutmix_bbox_and_lam(img_shape, lam, ratio_minmax=None, correct_lam=True, count=None): - """ Generate bbox and apply lambda correction. 
- """ - if ratio_minmax is not None: - yl, yu, xl, xu = rand_bbox_minmax(img_shape, ratio_minmax, count=count) - else: - yl, yu, xl, xu = rand_bbox(img_shape, lam, count=count) - if correct_lam or ratio_minmax is not None: - bbox_area = (yu - yl) * (xu - xl) - lam = 1. - bbox_area / float(img_shape[-2] * img_shape[-1]) - return (yl, yu, xl, xu), lam - - -class Mixup: - """ Mixup/Cutmix that applies different params to each element or whole batch - - Args: - mixup_alpha (float): mixup alpha value, mixup is active if > 0. - cutmix_alpha (float): cutmix alpha value, cutmix is active if > 0. - cutmix_minmax (List[float]): cutmix min/max image ratio, cutmix is active and uses this vs alpha if not None. - prob (float): probability of applying mixup or cutmix per batch or element - switch_prob (float): probability of switching to cutmix instead of mixup when both are active - mode (str): how to apply mixup/cutmix params (per 'batch', 'pair' (pair of elements), 'elem' (element) - correct_lam (bool): apply lambda correction when cutmix bbox clipped by image borders - label_smoothing (float): apply label smoothing to the mixed target tensor - num_classes (int): number of classes for target - """ - def __init__(self, mixup_alpha=1., cutmix_alpha=0., cutmix_minmax=None, prob=1.0, switch_prob=0.5, - mode='batch', correct_lam=True, label_smoothing=0.1, num_classes=1000): - self.mixup_alpha = mixup_alpha - self.cutmix_alpha = cutmix_alpha - self.cutmix_minmax = cutmix_minmax - if self.cutmix_minmax is not None: - assert len(self.cutmix_minmax) == 2 - # force cutmix alpha == 1.0 when minmax active to keep logic simple & safe - self.cutmix_alpha = 1.0 - self.mix_prob = prob - self.switch_prob = switch_prob - self.label_smoothing = label_smoothing - self.num_classes = num_classes - self.mode = mode - self.correct_lam = correct_lam # correct lambda based on clipped area for cutmix - self.mixup_enabled = True # set to false to disable mixing (intended tp be set by train loop) - - def _params_per_elem(self, batch_size): - lam = np.ones(batch_size, dtype=np.float32) - use_cutmix = np.zeros(batch_size, dtype=np.bool) - if self.mixup_enabled: - if self.mixup_alpha > 0. and self.cutmix_alpha > 0.: - use_cutmix = np.random.rand(batch_size) < self.switch_prob - lam_mix = np.where( - use_cutmix, - np.random.beta(self.cutmix_alpha, self.cutmix_alpha, size=batch_size), - np.random.beta(self.mixup_alpha, self.mixup_alpha, size=batch_size)) - elif self.mixup_alpha > 0.: - lam_mix = np.random.beta(self.mixup_alpha, self.mixup_alpha, size=batch_size) - elif self.cutmix_alpha > 0.: - use_cutmix = np.ones(batch_size, dtype=np.bool) - lam_mix = np.random.beta(self.cutmix_alpha, self.cutmix_alpha, size=batch_size) - else: - assert False, "One of mixup_alpha > 0., cutmix_alpha > 0., cutmix_minmax not None should be true." - lam = np.where(np.random.rand(batch_size) < self.mix_prob, lam_mix.astype(np.float32), lam) - return lam, use_cutmix - - def _params_per_batch(self): - lam = 1. - use_cutmix = False - if self.mixup_enabled and np.random.rand() < self.mix_prob: - if self.mixup_alpha > 0. 
and self.cutmix_alpha > 0.: - use_cutmix = np.random.rand() < self.switch_prob - lam_mix = np.random.beta(self.cutmix_alpha, self.cutmix_alpha) if use_cutmix else \ - np.random.beta(self.mixup_alpha, self.mixup_alpha) - elif self.mixup_alpha > 0.: - lam_mix = np.random.beta(self.mixup_alpha, self.mixup_alpha) - elif self.cutmix_alpha > 0.: - use_cutmix = True - lam_mix = np.random.beta(self.cutmix_alpha, self.cutmix_alpha) - else: - assert False, "One of mixup_alpha > 0., cutmix_alpha > 0., cutmix_minmax not None should be true." - lam = float(lam_mix) - return lam, use_cutmix - - def _mix_elem(self, x): - batch_size = len(x) - lam_batch, use_cutmix = self._params_per_elem(batch_size) - x_orig = x.clone() # need to keep an unmodified original for mixing source - for i in range(batch_size): - j = batch_size - i - 1 - lam = lam_batch[i] - if lam != 1.: - if use_cutmix[i]: - (yl, yh, xl, xh), lam = cutmix_bbox_and_lam( - x[i].shape, lam, ratio_minmax=self.cutmix_minmax, correct_lam=self.correct_lam) - x[i][:, yl:yh, xl:xh] = x_orig[j][:, yl:yh, xl:xh] - lam_batch[i] = lam - else: - x[i] = x[i] * lam + x_orig[j] * (1 - lam) - return torch.tensor(lam_batch, device=x.device, dtype=x.dtype).unsqueeze(1) - - def _mix_pair(self, x): - batch_size = len(x) - lam_batch, use_cutmix = self._params_per_elem(batch_size // 2) - x_orig = x.clone() # need to keep an unmodified original for mixing source - for i in range(batch_size // 2): - j = batch_size - i - 1 - lam = lam_batch[i] - if lam != 1.: - if use_cutmix[i]: - (yl, yh, xl, xh), lam = cutmix_bbox_and_lam( - x[i].shape, lam, ratio_minmax=self.cutmix_minmax, correct_lam=self.correct_lam) - x[i][:, yl:yh, xl:xh] = x_orig[j][:, yl:yh, xl:xh] - x[j][:, yl:yh, xl:xh] = x_orig[i][:, yl:yh, xl:xh] - lam_batch[i] = lam - else: - x[i] = x[i] * lam + x_orig[j] * (1 - lam) - x[j] = x[j] * lam + x_orig[i] * (1 - lam) - lam_batch = np.concatenate((lam_batch, lam_batch[::-1])) - return torch.tensor(lam_batch, device=x.device, dtype=x.dtype).unsqueeze(1) - - def _mix_batch(self, x): - lam, use_cutmix = self._params_per_batch() - if lam == 1.: - return 1. - if use_cutmix: - (yl, yh, xl, xh), lam = cutmix_bbox_and_lam( - x.shape, lam, ratio_minmax=self.cutmix_minmax, correct_lam=self.correct_lam) - x[:, :, yl:yh, xl:xh] = x.flip(0)[:, :, yl:yh, xl:xh] - else: - x_flipped = x.flip(0).mul_(1. - lam) - x.mul_(lam).add_(x_flipped) - return lam - - def __call__(self, x, target): - assert len(x) % 2 == 0, 'Batch size should be even when using this' - if self.mode == 'elem': - lam = self._mix_elem(x) - elif self.mode == 'pair': - lam = self._mix_pair(x) - else: - lam = self._mix_batch(x) - target = mixup_target(target, self.num_classes, lam, self.label_smoothing) - return x, target - - -class FastCollateMixup(Mixup): - """ Fast Collate w/ Mixup/Cutmix that applies different params to each element or whole batch - - A Mixup impl that's performed while collating the batches. 
- """ - - def _mix_elem_collate(self, output, batch, half=False): - batch_size = len(batch) - num_elem = batch_size // 2 if half else batch_size - assert len(output) == num_elem - lam_batch, use_cutmix = self._params_per_elem(num_elem) - for i in range(num_elem): - j = batch_size - i - 1 - lam = lam_batch[i] - mixed = batch[i][0] - if lam != 1.: - if use_cutmix[i]: - if not half: - mixed = mixed.copy() - (yl, yh, xl, xh), lam = cutmix_bbox_and_lam( - output.shape, lam, ratio_minmax=self.cutmix_minmax, correct_lam=self.correct_lam) - mixed[:, yl:yh, xl:xh] = batch[j][0][:, yl:yh, xl:xh] - lam_batch[i] = lam - else: - mixed = mixed.astype(np.float32) * lam + batch[j][0].astype(np.float32) * (1 - lam) - np.rint(mixed, out=mixed) - output[i] += torch.from_numpy(mixed.astype(np.uint8)) - if half: - lam_batch = np.concatenate((lam_batch, np.ones(num_elem))) - return torch.tensor(lam_batch).unsqueeze(1) - - def _mix_pair_collate(self, output, batch): - batch_size = len(batch) - lam_batch, use_cutmix = self._params_per_elem(batch_size // 2) - for i in range(batch_size // 2): - j = batch_size - i - 1 - lam = lam_batch[i] - mixed_i = batch[i][0] - mixed_j = batch[j][0] - assert 0 <= lam <= 1.0 - if lam < 1.: - if use_cutmix[i]: - (yl, yh, xl, xh), lam = cutmix_bbox_and_lam( - output.shape, lam, ratio_minmax=self.cutmix_minmax, correct_lam=self.correct_lam) - patch_i = mixed_i[:, yl:yh, xl:xh].copy() - mixed_i[:, yl:yh, xl:xh] = mixed_j[:, yl:yh, xl:xh] - mixed_j[:, yl:yh, xl:xh] = patch_i - lam_batch[i] = lam - else: - mixed_temp = mixed_i.astype(np.float32) * lam + mixed_j.astype(np.float32) * (1 - lam) - mixed_j = mixed_j.astype(np.float32) * lam + mixed_i.astype(np.float32) * (1 - lam) - mixed_i = mixed_temp - np.rint(mixed_j, out=mixed_j) - np.rint(mixed_i, out=mixed_i) - output[i] += torch.from_numpy(mixed_i.astype(np.uint8)) - output[j] += torch.from_numpy(mixed_j.astype(np.uint8)) - lam_batch = np.concatenate((lam_batch, lam_batch[::-1])) - return torch.tensor(lam_batch).unsqueeze(1) - - def _mix_batch_collate(self, output, batch): - batch_size = len(batch) - lam, use_cutmix = self._params_per_batch() - if use_cutmix: - (yl, yh, xl, xh), lam = cutmix_bbox_and_lam( - output.shape, lam, ratio_minmax=self.cutmix_minmax, correct_lam=self.correct_lam) - for i in range(batch_size): - j = batch_size - i - 1 - mixed = batch[i][0] - if lam != 1.: - if use_cutmix: - mixed = mixed.copy() # don't want to modify the original while iterating - mixed[:, yl:yh, xl:xh] = batch[j][0][:, yl:yh, xl:xh] - else: - mixed = mixed.astype(np.float32) * lam + batch[j][0].astype(np.float32) * (1 - lam) - np.rint(mixed, out=mixed) - output[i] += torch.from_numpy(mixed.astype(np.uint8)) - return lam - - def __call__(self, batch, _=None): - batch_size = len(batch) - assert batch_size % 2 == 0, 'Batch size should be even when using this' - half = 'half' in self.mode - if half: - batch_size //= 2 - output = torch.zeros((batch_size, *batch[0][0].shape), dtype=torch.uint8) - if self.mode == 'elem' or self.mode == 'half': - lam = self._mix_elem_collate(output, batch, half=half) - elif self.mode == 'pair': - lam = self._mix_pair_collate(output, batch) - else: - lam = self._mix_batch_collate(output, batch) - target = torch.tensor([b[1] for b in batch], dtype=torch.int64) - target = mixup_target(target, self.num_classes, lam, self.label_smoothing, device='cpu') - target = target[:batch_size] - return output, target - diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/ops/points_in_boxes.py 
b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/ops/points_in_boxes.py deleted file mode 100644 index 4003173a53052161dbcd687a2fa1d755642fdab8..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/ops/points_in_boxes.py +++ /dev/null @@ -1,133 +0,0 @@ -import torch - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', [ - 'points_in_boxes_part_forward', 'points_in_boxes_cpu_forward', - 'points_in_boxes_all_forward' -]) - - -def points_in_boxes_part(points, boxes): - """Find the box in which each point is (CUDA). - - Args: - points (torch.Tensor): [B, M, 3], [x, y, z] in LiDAR/DEPTH coordinate - boxes (torch.Tensor): [B, T, 7], - num_valid_boxes <= T, [x, y, z, x_size, y_size, z_size, rz] in - LiDAR/DEPTH coordinate, (x, y, z) is the bottom center - - Returns: - box_idxs_of_pts (torch.Tensor): (B, M), default background = -1 - """ - assert points.shape[0] == boxes.shape[0], \ - 'Points and boxes should have the same batch size, ' \ - f'but got {points.shape[0]} and {boxes.shape[0]}' - assert boxes.shape[2] == 7, \ - 'boxes dimension should be 7, ' \ - f'but got unexpected shape {boxes.shape[2]}' - assert points.shape[2] == 3, \ - 'points dimension should be 3, ' \ - f'but got unexpected shape {points.shape[2]}' - batch_size, num_points, _ = points.shape - - box_idxs_of_pts = points.new_zeros((batch_size, num_points), - dtype=torch.int).fill_(-1) - - # If manually put the tensor 'points' or 'boxes' on a device - # which is not the current device, some temporary variables - # will be created on the current device in the cuda op, - # and the output will be incorrect. - # Therefore, we force the current device to be the same - # as the device of the tensors if it was not. - # Please refer to https://github.com/open-mmlab/mmdetection3d/issues/305 - # for the incorrect output before the fix. - points_device = points.get_device() - assert points_device == boxes.get_device(), \ - 'Points and boxes should be put on the same device' - if torch.cuda.current_device() != points_device: - torch.cuda.set_device(points_device) - - ext_module.points_in_boxes_part_forward(boxes.contiguous(), - points.contiguous(), - box_idxs_of_pts) - - return box_idxs_of_pts - - -def points_in_boxes_cpu(points, boxes): - """Find all boxes in which each point is (CPU). The CPU version of - :meth:`points_in_boxes_all`. - - Args: - points (torch.Tensor): [B, M, 3], [x, y, z] in - LiDAR/DEPTH coordinate - boxes (torch.Tensor): [B, T, 7], - num_valid_boxes <= T, [x, y, z, x_size, y_size, z_size, rz], - (x, y, z) is the bottom center. - - Returns: - box_idxs_of_pts (torch.Tensor): (B, M, T), default background = 0. 
- """ - assert points.shape[0] == boxes.shape[0], \ - 'Points and boxes should have the same batch size, ' \ - f'but got {points.shape[0]} and {boxes.shape[0]}' - assert boxes.shape[2] == 7, \ - 'boxes dimension should be 7, ' \ - f'but got unexpected shape {boxes.shape[2]}' - assert points.shape[2] == 3, \ - 'points dimension should be 3, ' \ - f'but got unexpected shape {points.shape[2]}' - batch_size, num_points, _ = points.shape - num_boxes = boxes.shape[1] - - point_indices = points.new_zeros((batch_size, num_boxes, num_points), - dtype=torch.int) - for b in range(batch_size): - ext_module.points_in_boxes_cpu_forward(boxes[b].float().contiguous(), - points[b].float().contiguous(), - point_indices[b]) - point_indices = point_indices.transpose(1, 2) - - return point_indices - - -def points_in_boxes_all(points, boxes): - """Find all boxes in which each point is (CUDA). - - Args: - points (torch.Tensor): [B, M, 3], [x, y, z] in LiDAR/DEPTH coordinate - boxes (torch.Tensor): [B, T, 7], - num_valid_boxes <= T, [x, y, z, x_size, y_size, z_size, rz], - (x, y, z) is the bottom center. - - Returns: - box_idxs_of_pts (torch.Tensor): (B, M, T), default background = 0. - """ - assert boxes.shape[0] == points.shape[0], \ - 'Points and boxes should have the same batch size, ' \ - f'but got {boxes.shape[0]} and {boxes.shape[0]}' - assert boxes.shape[2] == 7, \ - 'boxes dimension should be 7, ' \ - f'but got unexpected shape {boxes.shape[2]}' - assert points.shape[2] == 3, \ - 'points dimension should be 3, ' \ - f'but got unexpected shape {points.shape[2]}' - batch_size, num_points, _ = points.shape - num_boxes = boxes.shape[1] - - box_idxs_of_pts = points.new_zeros((batch_size, num_points, num_boxes), - dtype=torch.int).fill_(0) - - # Same reason as line 25-32 - points_device = points.get_device() - assert points_device == boxes.get_device(), \ - 'Points and boxes should be put on the same device' - if torch.cuda.current_device() != points_device: - torch.cuda.set_device(points_device) - - ext_module.points_in_boxes_all_forward(boxes.contiguous(), - points.contiguous(), - box_idxs_of_pts) - - return box_idxs_of_pts diff --git a/spaces/cymic/Waifu_Diffusion_Webui/webui.bat b/spaces/cymic/Waifu_Diffusion_Webui/webui.bat deleted file mode 100644 index c8bfe1d5308edb844c68b9dd981a9b59bd03f98c..0000000000000000000000000000000000000000 --- a/spaces/cymic/Waifu_Diffusion_Webui/webui.bat +++ /dev/null @@ -1,62 +0,0 @@ -@echo off - -if not defined PYTHON (set PYTHON=python) -if not defined VENV_DIR (set VENV_DIR=venv) - -set ERROR_REPORTING=FALSE - -mkdir tmp 2>NUL - -%PYTHON% -c "" >tmp/stdout.txt 2>tmp/stderr.txt -if %ERRORLEVEL% == 0 goto :start_venv -echo Couldn't launch python -goto :show_stdout_stderr - -:start_venv -if [%VENV_DIR%] == [-] goto :skip_venv - -dir %VENV_DIR%\Scripts\Python.exe >tmp/stdout.txt 2>tmp/stderr.txt -if %ERRORLEVEL% == 0 goto :activate_venv - -for /f "delims=" %%i in ('CALL %PYTHON% -c "import sys; print(sys.executable)"') do set PYTHON_FULLNAME="%%i" -echo Creating venv in directory %VENV_DIR% using python %PYTHON_FULLNAME% -%PYTHON_FULLNAME% -m venv %VENV_DIR% >tmp/stdout.txt 2>tmp/stderr.txt -if %ERRORLEVEL% == 0 goto :activate_venv -echo Unable to create venv in directory %VENV_DIR% -goto :show_stdout_stderr - -:activate_venv -set PYTHON="%~dp0%VENV_DIR%\Scripts\Python.exe" -echo venv %PYTHON% -goto :launch - -:skip_venv - -:launch -%PYTHON% launch.py -pause -exit /b - -:show_stdout_stderr - -echo. 
-echo exit code: %errorlevel% - -for /f %%i in ("tmp\stdout.txt") do set size=%%~zi -if %size% equ 0 goto :show_stderr -echo. -echo stdout: -type tmp\stdout.txt - -:show_stderr -for /f %%i in ("tmp\stderr.txt") do set size=%%~zi -if %size% equ 0 goto :show_stderr -echo. -echo stderr: -type tmp\stderr.txt - -:endofscript - -echo. -echo Launch unsuccessful. Exiting. -pause diff --git a/spaces/cynika/taffy/vdecoder/hifigan/env.py b/spaces/cynika/taffy/vdecoder/hifigan/env.py deleted file mode 100644 index 2bdbc95d4f7a8bad8fd4f5eef657e2b51d946056..0000000000000000000000000000000000000000 --- a/spaces/cynika/taffy/vdecoder/hifigan/env.py +++ /dev/null @@ -1,15 +0,0 @@ -import os -import shutil - - -class AttrDict(dict): - def __init__(self, *args, **kwargs): - super(AttrDict, self).__init__(*args, **kwargs) - self.__dict__ = self - - -def build_env(config, config_name, path): - t_path = os.path.join(path, config_name) - if config != t_path: - os.makedirs(path, exist_ok=True) - shutil.copyfile(config, os.path.join(path, config_name)) diff --git a/spaces/dawood/Kanye-AI/preprocess_flist_config.py b/spaces/dawood/Kanye-AI/preprocess_flist_config.py deleted file mode 100644 index 6e3dd0bd9390a509c282bbde4ff2631ac94404e4..0000000000000000000000000000000000000000 --- a/spaces/dawood/Kanye-AI/preprocess_flist_config.py +++ /dev/null @@ -1,67 +0,0 @@ -import os -import argparse -import re - -from tqdm import tqdm -from random import shuffle -import json - -config_template = json.load(open("configs/config.json")) - -pattern = re.compile(r'^[\.a-zA-Z0-9_\/]+$') - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--train_list", type=str, default="./filelists/train.txt", help="path to train list") - parser.add_argument("--val_list", type=str, default="./filelists/val.txt", help="path to val list") - parser.add_argument("--test_list", type=str, default="./filelists/test.txt", help="path to test list") - parser.add_argument("--source_dir", type=str, default="./dataset/44k", help="path to source dir") - args = parser.parse_args() - - train = [] - val = [] - test = [] - idx = 0 - spk_dict = {} - spk_id = 0 - for speaker in tqdm(os.listdir(args.source_dir)): - spk_dict[speaker] = spk_id - spk_id += 1 - wavs = ["/".join([args.source_dir, speaker, i]) for i in os.listdir(os.path.join(args.source_dir, speaker))] - for wavpath in wavs: - if not pattern.match(wavpath): - print(f"warning:文件名{wavpath}中包含非字母数字下划线,可能会导致错误。(也可能不会)") - if len(wavs) < 10: - print(f"warning:{speaker}数据集数量小于10条,请补充数据") - wavs = [i for i in wavs if i.endswith("wav")] - shuffle(wavs) - train += wavs[2:-2] - val += wavs[:2] - test += wavs[-2:] - - shuffle(train) - shuffle(val) - shuffle(test) - - print("Writing", args.train_list) - with open(args.train_list, "w") as f: - for fname in tqdm(train): - wavpath = fname - f.write(wavpath + "\n") - - print("Writing", args.val_list) - with open(args.val_list, "w") as f: - for fname in tqdm(val): - wavpath = fname - f.write(wavpath + "\n") - - print("Writing", args.test_list) - with open(args.test_list, "w") as f: - for fname in tqdm(test): - wavpath = fname - f.write(wavpath + "\n") - - config_template["spk"] = spk_dict - print("Writing configs/config.json") - with open("configs/config.json", "w") as f: - json.dump(config_template, f, indent=2) diff --git a/spaces/dawood17/SayBot_Enchancer/CodeFormer/basicsr/__init__.py b/spaces/dawood17/SayBot_Enchancer/CodeFormer/basicsr/__init__.py deleted file mode 100644 index 
c7ffcccd7fc0f33b59d99d73d0436d60e561b0fc..0000000000000000000000000000000000000000 --- a/spaces/dawood17/SayBot_Enchancer/CodeFormer/basicsr/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -# https://github.com/xinntao/BasicSR -# flake8: noqa -from .archs import * -from .data import * -from .losses import * -from .metrics import * -from .models import * -from .ops import * -from .train import * -from .utils import * -from .version import __gitsha__, __version__ diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/aiofiles/threadpool/utils.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/aiofiles/threadpool/utils.py deleted file mode 100644 index 5fd3bb992e51b54225d53edb5f8e50f575997f81..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/aiofiles/threadpool/utils.py +++ /dev/null @@ -1,72 +0,0 @@ -import functools - - -def delegate_to_executor(*attrs): - def cls_builder(cls): - for attr_name in attrs: - setattr(cls, attr_name, _make_delegate_method(attr_name)) - return cls - - return cls_builder - - -def proxy_method_directly(*attrs): - def cls_builder(cls): - for attr_name in attrs: - setattr(cls, attr_name, _make_proxy_method(attr_name)) - return cls - - return cls_builder - - -def proxy_property_directly(*attrs): - def cls_builder(cls): - for attr_name in attrs: - setattr(cls, attr_name, _make_proxy_property(attr_name)) - return cls - - return cls_builder - - -def cond_delegate_to_executor(*attrs): - def cls_builder(cls): - for attr_name in attrs: - setattr(cls, attr_name, _make_cond_delegate_method(attr_name)) - return cls - - return cls_builder - - -def _make_delegate_method(attr_name): - async def method(self, *args, **kwargs): - cb = functools.partial(getattr(self._file, attr_name), *args, **kwargs) - return await self._loop.run_in_executor(self._executor, cb) - - return method - - -def _make_proxy_method(attr_name): - def method(self, *args, **kwargs): - return getattr(self._file, attr_name)(*args, **kwargs) - - return method - - -def _make_proxy_property(attr_name): - def proxy_property(self): - return getattr(self._file, attr_name) - - return property(proxy_property) - - -def _make_cond_delegate_method(attr_name): - """For spooled temp files, delegate only if rolled to file object""" - - async def method(self, *args, **kwargs): - if self._file._rolled: - cb = functools.partial(getattr(self._file, attr_name), *args, **kwargs) - return await self._loop.run_in_executor(self._executor, cb) - else: - return getattr(self._file, attr_name)(*args, **kwargs) - - return method diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/components/button.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/components/button.py deleted file mode 100644 index 087f68872971905385d4da6ae1e59bbfd179805f..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/components/button.py +++ /dev/null @@ -1,135 +0,0 @@ -"""gr.Button() component.""" - -from __future__ import annotations - -from typing import Callable, Literal - -from gradio_client.documentation import document, set_documentation_group -from gradio_client.serializing import StringSerializable - -from gradio.components.base import Component, IOComponent, _Keywords -from gradio.deprecation import warn_deprecation, warn_style_method_deprecation -from gradio.events 
import Clickable - -set_documentation_group("component") - - -@document() -class Button(Clickable, IOComponent, StringSerializable): - """ - Used to create a button, that can be assigned arbitrary click() events. The label (value) of the button can be used as an input or set via the output of a function. - - Preprocessing: passes the button value as a {str} into the function - Postprocessing: expects a {str} to be returned from a function, which is set as the label of the button - Demos: blocks_inputs, blocks_kinematics - """ - - def __init__( - self, - value: str | Callable = "Run", - *, - variant: Literal["primary", "secondary", "stop"] = "secondary", - size: Literal["sm", "lg"] | None = None, - icon: str | None = None, - link: str | None = None, - visible: bool = True, - interactive: bool = True, - elem_id: str | None = None, - elem_classes: list[str] | str | None = None, - scale: int | None = None, - min_width: int | None = None, - **kwargs, - ): - """ - Parameters: - value: Default text for the button to display. If callable, the function will be called whenever the app loads to set the initial value of the component. - variant: 'primary' for main call-to-action, 'secondary' for a more subdued style, 'stop' for a stop button. - size: Size of the button. Can be "sm" or "lg". - icon: URL or path to the icon file to display within the button. If None, no icon will be displayed. - link: URL to open when the button is clicked. If None, no link will be used. - visible: If False, component will be hidden. - interactive: If False, the Button will be in a disabled state. - elem_id: An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles. - elem_classes: An optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles. - scale: relative width compared to adjacent Components in a Row. For example, if Component A has scale=2, and Component B has scale=1, A will be twice as wide as B. Should be an integer. - min_width: minimum pixel width, will wrap if not sufficient screen space to satisfy this value. If a certain scale value results in this Component being narrower than min_width, the min_width parameter will be respected first. 
- """ - IOComponent.__init__( - self, - visible=visible, - elem_id=elem_id, - elem_classes=elem_classes, - value=value, - interactive=interactive, - scale=scale, - min_width=min_width, - **kwargs, - ) - if variant == "plain": - warn_deprecation("'plain' variant deprecated, using 'secondary' instead.") - variant = "secondary" - self.variant = variant - self.size = size - - self.icon = icon if icon is None else "/file=" + icon - - self.link = link - - def get_config(self): - return { - "value": self.value, - "variant": self.variant, - "size": self.size, - "icon": self.icon, - "link": self.link, - "interactive": self.interactive, - "scale": self.scale, - "min_width": self.min_width, - **Component.get_config(self), - } - - @staticmethod - def update( - value: str | Literal[_Keywords.NO_VALUE] | None = _Keywords.NO_VALUE, - variant: Literal["primary", "secondary", "stop"] | None = None, - size: Literal["sm", "lg"] | None = None, - icon: str | None = None, - link: str | None = None, - visible: bool | None = None, - interactive: bool | None = None, - scale: int | None = None, - min_width: int | None = None, - ): - return { - "variant": variant, - "size": size, - "visible": visible, - "value": value, - "icon": icon, - "link": link, - "interactive": interactive, - "scale": scale, - "min_width": min_width, - "__type__": "update", - } - - def style( - self, - *, - full_width: bool | None = None, - size: Literal["sm", "lg"] | None = None, - **kwargs, - ): - """ - This method is deprecated. Please set these arguments in the constructor instead. - """ - warn_style_method_deprecation() - if full_width is not None: - warn_deprecation( - "Use `scale` in place of full_width in the constructor. " - "scale=1 will make the button expand, whereas 0 will not." - ) - self.scale = 1 if full_width else None - if size is not None: - self.size = size - return self diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/huggingface_hub/_snapshot_download.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/huggingface_hub/_snapshot_download.py deleted file mode 100644 index aa85d0f4fecac2f62aa12318fcf575f1fe2ceea5..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/huggingface_hub/_snapshot_download.py +++ /dev/null @@ -1,246 +0,0 @@ -import os -from pathlib import Path -from typing import Dict, List, Optional, Union - -from tqdm.auto import tqdm as base_tqdm -from tqdm.contrib.concurrent import thread_map - -from .constants import ( - DEFAULT_REVISION, - HF_HUB_ENABLE_HF_TRANSFER, - HUGGINGFACE_HUB_CACHE, - REPO_TYPES, -) -from .file_download import REGEX_COMMIT_HASH, hf_hub_download, repo_folder_name -from .hf_api import HfApi -from .utils import filter_repo_objects, logging, validate_hf_hub_args -from .utils import tqdm as hf_tqdm -from .utils._typing import Literal - - -logger = logging.get_logger(__name__) - - -@validate_hf_hub_args -def snapshot_download( - repo_id: str, - *, - revision: Optional[str] = None, - repo_type: Optional[str] = None, - cache_dir: Union[str, Path, None] = None, - local_dir: Union[str, Path, None] = None, - local_dir_use_symlinks: Union[bool, Literal["auto"]] = "auto", - library_name: Optional[str] = None, - library_version: Optional[str] = None, - user_agent: Optional[Union[Dict, str]] = None, - proxies: Optional[Dict] = None, - etag_timeout: float = 10, - resume_download: bool = False, - force_download: bool = False, - token: Optional[Union[bool, str]] = 
None, - local_files_only: bool = False, - allow_patterns: Optional[Union[List[str], str]] = None, - ignore_patterns: Optional[Union[List[str], str]] = None, - max_workers: int = 8, - tqdm_class: Optional[base_tqdm] = None, -) -> str: - """Download repo files. - - Download a whole snapshot of a repo's files at the specified revision. This is useful when you want all files from - a repo, because you don't know which ones you will need a priori. All files are nested inside a folder in order - to keep their actual filename relative to that folder. You can also filter which files to download using - `allow_patterns` and `ignore_patterns`. - - If `local_dir` is provided, the file structure from the repo will be replicated in this location. You can configure - how you want to move those files: - - If `local_dir_use_symlinks="auto"` (default), files are downloaded and stored in the cache directory as blob - files. Small files (<5MB) are duplicated in `local_dir` while a symlink is created for bigger files. The goal - is to be able to manually edit and save small files without corrupting the cache while saving disk space for - binary files. The 5MB threshold can be configured with the `HF_HUB_LOCAL_DIR_AUTO_SYMLINK_THRESHOLD` - environment variable. - - If `local_dir_use_symlinks=True`, files are downloaded, stored in the cache directory and symlinked in `local_dir`. - This is optimal in term of disk usage but files must not be manually edited. - - If `local_dir_use_symlinks=False` and the blob files exist in the cache directory, they are duplicated in the - local dir. This means disk usage is not optimized. - - Finally, if `local_dir_use_symlinks=False` and the blob files do not exist in the cache directory, then the - files are downloaded and directly placed under `local_dir`. This means if you need to download them again later, - they will be re-downloaded entirely. - - An alternative would be to clone the repo but this requires git and git-lfs to be installed and properly - configured. It is also not possible to filter which files to download when cloning a repository using git. - - Args: - repo_id (`str`): - A user or an organization name and a repo name separated by a `/`. - revision (`str`, *optional*): - An optional Git revision id which can be a branch name, a tag, or a - commit hash. - repo_type (`str`, *optional*): - Set to `"dataset"` or `"space"` if downloading from a dataset or space, - `None` or `"model"` if downloading from a model. Default is `None`. - cache_dir (`str`, `Path`, *optional*): - Path to the folder where cached files are stored. - local_dir (`str` or `Path`, *optional*: - If provided, the downloaded files will be placed under this directory, either as symlinks (default) or - regular files (see description for more details). - local_dir_use_symlinks (`"auto"` or `bool`, defaults to `"auto"`): - To be used with `local_dir`. If set to "auto", the cache directory will be used and the file will be either - duplicated or symlinked to the local directory depending on its size. It set to `True`, a symlink will be - created, no matter the file size. If set to `False`, the file will either be duplicated from cache (if - already exists) or downloaded from the Hub and not cached. See description for more details. - library_name (`str`, *optional*): - The name of the library to which the object corresponds. - library_version (`str`, *optional*): - The version of the library. - user_agent (`str`, `dict`, *optional*): - The user-agent info in the form of a dictionary or a string. 
- proxies (`dict`, *optional*): - Dictionary mapping protocol to the URL of the proxy passed to - `requests.request`. - etag_timeout (`float`, *optional*, defaults to `10`): - When fetching ETag, how many seconds to wait for the server to send - data before giving up which is passed to `requests.request`. - resume_download (`bool`, *optional*, defaults to `False): - If `True`, resume a previously interrupted download. - force_download (`bool`, *optional*, defaults to `False`): - Whether the file should be downloaded even if it already exists in the local cache. - token (`str`, `bool`, *optional*): - A token to be used for the download. - - If `True`, the token is read from the HuggingFace config - folder. - - If a string, it's used as the authentication token. - local_files_only (`bool`, *optional*, defaults to `False`): - If `True`, avoid downloading the file and return the path to the - local cached file if it exists. - allow_patterns (`List[str]` or `str`, *optional*): - If provided, only files matching at least one pattern are downloaded. - ignore_patterns (`List[str]` or `str`, *optional*): - If provided, files matching any of the patterns are not downloaded. - max_workers (`int`, *optional*): - Number of concurrent threads to download files (1 thread = 1 file download). - Defaults to 8. - tqdm_class (`tqdm`, *optional*): - If provided, overwrites the default behavior for the progress bar. Passed - argument must inherit from `tqdm.auto.tqdm` or at least mimic its behavior. - Note that the `tqdm_class` is not passed to each individual download. - Defaults to the custom HF progress bar that can be disabled by setting - `HF_HUB_DISABLE_PROGRESS_BARS` environment variable. - - Returns: - Local folder path (string) of repo snapshot - - - - Raises the following errors: - - - [`EnvironmentError`](https://docs.python.org/3/library/exceptions.html#EnvironmentError) - if `token=True` and the token cannot be found. - - [`OSError`](https://docs.python.org/3/library/exceptions.html#OSError) if - ETag cannot be determined. - - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) - if some parameter value is invalid - - - """ - if cache_dir is None: - cache_dir = HUGGINGFACE_HUB_CACHE - if revision is None: - revision = DEFAULT_REVISION - if isinstance(cache_dir, Path): - cache_dir = str(cache_dir) - - if repo_type is None: - repo_type = "model" - if repo_type not in REPO_TYPES: - raise ValueError(f"Invalid repo type: {repo_type}. Accepted repo types are: {str(REPO_TYPES)}") - - storage_folder = os.path.join(cache_dir, repo_folder_name(repo_id=repo_id, repo_type=repo_type)) - - # if we have no internet connection we will look for an - # appropriate folder in the cache - # If the specified revision is a commit hash, look inside "snapshots". - # If the specified revision is a branch or tag, look inside "refs". - if local_files_only: - if REGEX_COMMIT_HASH.match(revision): - commit_hash = revision - else: - # retrieve commit_hash from file - ref_path = os.path.join(storage_folder, "refs", revision) - with open(ref_path) as f: - commit_hash = f.read() - - snapshot_folder = os.path.join(storage_folder, "snapshots", commit_hash) - - if os.path.exists(snapshot_folder): - return snapshot_folder - - raise ValueError( - "Cannot find an appropriate cached snapshot folder for the specified" - " revision on the local disk and outgoing traffic has been disabled. To" - " enable repo look-ups and downloads online, set 'local_files_only' to" - " False." 
- ) - - # if we have internet connection we retrieve the correct folder name from the huggingface api - api = HfApi(library_name=library_name, library_version=library_version, user_agent=user_agent) - repo_info = api.repo_info(repo_id=repo_id, repo_type=repo_type, revision=revision, token=token) - assert repo_info.sha is not None, "Repo info returned from server must have a revision sha." - - filtered_repo_files = list( - filter_repo_objects( - items=[f.rfilename for f in repo_info.siblings], - allow_patterns=allow_patterns, - ignore_patterns=ignore_patterns, - ) - ) - commit_hash = repo_info.sha - snapshot_folder = os.path.join(storage_folder, "snapshots", commit_hash) - # if passed revision is not identical to commit_hash - # then revision has to be a branch name or tag name. - # In that case store a ref. - if revision != commit_hash: - ref_path = os.path.join(storage_folder, "refs", revision) - os.makedirs(os.path.dirname(ref_path), exist_ok=True) - with open(ref_path, "w") as f: - f.write(commit_hash) - - # we pass the commit_hash to hf_hub_download - # so no network call happens if we already - # have the file locally. - def _inner_hf_hub_download(repo_file: str): - return hf_hub_download( - repo_id, - filename=repo_file, - repo_type=repo_type, - revision=commit_hash, - cache_dir=cache_dir, - local_dir=local_dir, - local_dir_use_symlinks=local_dir_use_symlinks, - library_name=library_name, - library_version=library_version, - user_agent=user_agent, - proxies=proxies, - etag_timeout=etag_timeout, - resume_download=resume_download, - force_download=force_download, - token=token, - ) - - if HF_HUB_ENABLE_HF_TRANSFER: - # when using hf_transfer we don't want extra parallelism - # from the one hf_transfer provides - for file in filtered_repo_files: - _inner_hf_hub_download(file) - else: - thread_map( - _inner_hf_hub_download, - filtered_repo_files, - desc=f"Fetching {len(filtered_repo_files)} files", - max_workers=max_workers, - # User can use its own tqdm class or the default one from `huggingface_hub.utils` - tqdm_class=tqdm_class or hf_tqdm, - ) - - if local_dir is not None: - return str(os.path.realpath(local_dir)) - return snapshot_folder diff --git a/spaces/declare-lab/tango/diffusers/src/diffusers/utils/dummy_transformers_and_torch_and_note_seq_objects.py b/spaces/declare-lab/tango/diffusers/src/diffusers/utils/dummy_transformers_and_torch_and_note_seq_objects.py deleted file mode 100644 index fbde04e33f0abd86d12f3dee048a4f0585c9f19d..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/src/diffusers/utils/dummy_transformers_and_torch_and_note_seq_objects.py +++ /dev/null @@ -1,17 +0,0 @@ -# This file is autogenerated by the command `make fix-copies`, do not edit. 
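For reference, a minimal sketch of calling the snapshot_download API whose docstring appears in the deleted file above. The repo id and filename patterns are placeholders, and network access (plus a token for gated repos) is assumed.

    # Illustrative sketch, not part of the deleted file.
    from huggingface_hub import snapshot_download

    local_path = snapshot_download(
        repo_id="gpt2",                       # placeholder repo
        revision="main",
        allow_patterns=["*.json", "*.txt"],   # download only matching files
        max_workers=4,
    )
    print(local_path)  # local snapshot folder under the HF cache (or local_dir if set)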
-from ..utils import DummyObject, requires_backends - - -class SpectrogramDiffusionPipeline(metaclass=DummyObject): - _backends = ["transformers", "torch", "note_seq"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["transformers", "torch", "note_seq"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["transformers", "torch", "note_seq"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["transformers", "torch", "note_seq"]) diff --git a/spaces/deepskyreal/ai-mixer-hotchpotch/sad_talker/src/facerender/sync_batchnorm/__init__.py b/spaces/deepskyreal/ai-mixer-hotchpotch/sad_talker/src/facerender/sync_batchnorm/__init__.py deleted file mode 100644 index bc8709d92c610b36e0bcbd7da20c1eb41dc8cfcf..0000000000000000000000000000000000000000 --- a/spaces/deepskyreal/ai-mixer-hotchpotch/sad_talker/src/facerender/sync_batchnorm/__init__.py +++ /dev/null @@ -1,12 +0,0 @@ -# -*- coding: utf-8 -*- -# File : __init__.py -# Author : Jiayuan Mao -# Email : maojiayuan@gmail.com -# Date : 27/01/2018 -# -# This file is part of Synchronized-BatchNorm-PyTorch. -# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch -# Distributed under MIT License. - -from .batchnorm import SynchronizedBatchNorm1d, SynchronizedBatchNorm2d, SynchronizedBatchNorm3d -from .replicate import DataParallelWithCallback, patch_replication_callback diff --git a/spaces/diacanFperku/AutoGPT/1001bitprosketchupcracktorrent !!LINK!!.md b/spaces/diacanFperku/AutoGPT/1001bitprosketchupcracktorrent !!LINK!!.md deleted file mode 100644 index 91b0abb1fda232ad99f6982ebf5122e9233aa1e0..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/1001bitprosketchupcracktorrent !!LINK!!.md +++ /dev/null @@ -1,48 +0,0 @@ -

    1001bitprosketchupcracktorrent


    DOWNLOAD 🌟 https://gohhs.com/2uFVd8



    - -21 Aug 2020 - 1001bitprosketchupcracktorrent - imog 182 maria white label part 4 - Dilwale Dulhania Le Jayenge (1995) 750MB 720P X265 BRRip Hindi Movie 16 ... download dle 10.3 movies free -Title: Rip Crack torrent ... -Maria White Label 4 -Maria white label 4 torrent -Download torrent: Maria White Label 4 - Maria white label 4 -The secret she found to be in the back of your mind you have. -So she took out. -She looked at the dumbfounded father and went back to the bed. -The tragic. -And then dropped her head on her knees and cried. -Saying, it's not the same. -I did. -The truth, it's all about me. -And I can't live without you. -Sorry, honey. -I wasn't there, she said. -And she laughed. -That doesn't mean. -Before she left, she asked, 'Would you like to take a break at the end of the day? -I think - A great deal of you have to get under your skin to understand me. -I do not think that the words are enough to make you understand me. -It is a fact that we are in a state of tuberculosis. -You are the one who has given us a gift of being in a state of tuberculosis. -You also have given us a gift of being in a state of despair. -You have given us a gift of of being in a state of - You also have given us a gift of being in a state of despair. -You have given us a gift of being in a state of -You have given us a gift of being in a state of despair. -You have given us a gift of being in a - You have given us a gift of -You have given us a gift of -You have given us a gift -You have given us a gift of love -You have given us a gift of your life -You have given us a gift of being -Sick of feeling lost and wondering -Where did I go wrong? -And if we're gonna be so strong -(so hard to find) -And if you really want to change the world -You're gonna have to fight today 8a78ff9644
    -
    -
    -

    diff --git a/spaces/diacanFperku/AutoGPT/HD Online Player (life Of Pi Telugu Dubbed Movie Downl).md b/spaces/diacanFperku/AutoGPT/HD Online Player (life Of Pi Telugu Dubbed Movie Downl).md deleted file mode 100644 index 4e3bab04521b0115f3790b4a264335c2dc95b54f..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/HD Online Player (life Of Pi Telugu Dubbed Movie Downl).md +++ /dev/null @@ -1,6 +0,0 @@ -

    HD Online Player (life of pi telugu dubbed movie downl)


    DOWNLOAD ->>->>->> https://gohhs.com/2uFUZ5



    - - 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/diacanFperku/AutoGPT/Madmapper 142 Crack For Mac Torrent BEST Download.md b/spaces/diacanFperku/AutoGPT/Madmapper 142 Crack For Mac Torrent BEST Download.md deleted file mode 100644 index 451753b7d703115dbccf7bf2ae961de909e37349..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Madmapper 142 Crack For Mac Torrent BEST Download.md +++ /dev/null @@ -1,113 +0,0 @@ -
    -

    Madmapper 142 Crack For Mac Torrent Download - A Complete Guide

    - -

    If you are looking for a professional and easy video mapping software for Mac, you might have heard of MadMapper. MadMapper is a powerful tool that allows you to create stunning video and light shows, using projectors, LED panels, or other devices. You can use MadMapper for various fields, such as architectural video projection, art installation, stage design, and live show.

    -

    Madmapper 142 Crack For Mac Torrent Download


    DOWNLOADhttps://gohhs.com/2uFUiu



    - -

    However, MadMapper is not a free software. You need to purchase a license to use it, which can be quite expensive for some users. That's why some people look for a way to get MadMapper 142 crack for Mac torrent download. A crack is a modified version of the software that bypasses the license verification and allows you to use it for free. A torrent is a file that contains information about the location of the crack on the internet.

    - -

    In this article, we will show you how to get MadMapper 142 crack for Mac torrent download, and what are the risks and benefits of doing so. We will also give you some tips on how to use MadMapper effectively and safely.

    - -

    How to Get MadMapper 142 Crack For Mac Torrent Download

    - -

    To get MadMapper 142 crack for Mac torrent download, you need to follow these steps:

    - -
      -
    1. Find a reliable torrent website that offers MadMapper 142 crack for Mac torrent download. You can use a search engine like Google or Bing to look for such websites. Some examples are Vfxmed, Diorisanamp, TechMazze, Indoretalk, and Sway.office.com.
    2. -
    3. Download the torrent file from the website. A torrent file is usually a small file that ends with .torrent extension. You need a torrent client software like BitTorrent or uTorrent to open the torrent file and start downloading the crack.
    4. -
    5. Install the crack on your Mac. Once the download is complete, you will have a folder that contains the crack files. You need to copy and paste them into the MadMapper installation folder, which is usually located in Applications/MadMapper. This will overwrite the original files and activate the crack.
    6. -
    7. Enjoy using MadMapper 142 crack for Mac. You can now launch MadMapper and use it without any limitations or restrictions.
    8. -
    - -

    What are the Risks and Benefits of Getting MadMapper 142 Crack For Mac Torrent Download

    - -

    Getting MadMapper 142 crack for Mac torrent download has some risks and benefits that you should be aware of before doing so. Here are some of them:

    -

    - -
      -
    • Risks:
    • -
    • You might violate the copyright law and face legal consequences. Downloading and using cracked software is illegal in most countries and regions. You might be sued by the software developers or authorities for infringing their intellectual property rights.
    • -
    • You might expose your Mac to viruses and malware. Cracked software often contains malicious code that can harm your Mac or steal your personal information. Torrent websites are also notorious for hosting malware-infected files that can infect your Mac when you download them.
    • -
    • You might miss out on updates and support. Cracked software usually does not receive any updates or support from the developers. This means that you might encounter bugs, errors, or compatibility issues that cannot be fixed or resolved. You might also miss out on new features and improvements that are added to the official version of MadMapper.
    • -
    • Benefits:
    • -
    • You can save money and time. Getting MadMapper 142 crack for Mac torrent download can save you money and time that you would otherwise spend on purchasing a license or finding a alternative software. You can use MadMapper for free and without any hassle.
    • -
    • You can explore your creativity and skills. Getting MadMapper 142 crack for Mac torrent download can allow you to explore your creativity and skills in video and light mapping. You can use MadMapper for various projects and purposes, and create amazing visual effects.
    • -
    - -

    How to Use MadMapper Effectively and Safely

    - -

    If you decide to get MadMapper 142 crack for Mac torrent download, you should follow some tips on how to use it effectively and safely. Here are some of them:

    - -
      -
    • Use a VPN service when downloading torrents. A VPN service can hide your IP address and encrypt your internet traffic, making it harder for anyone to track or monitor your online activities. This can protect your privacy and security when downloading torrents.
    • -
    • Scan your Mac regularly with an antivirus software. An antivirus software can detect and remove any viruses or malware that might infect your Mac through cracked software or torrent files. This can prevent any damage or data loss that might occur due to malware infection.
    • -
    • Backup your important files regularly. A backup can help you restore your important files in case something goes wrong with your Mac or cracked software. You can use an external hard drive, a cloud service, or a backup software to backup your files regularly.
    • -
    • Learn from online tutorials and resources. There are many online tutorials and resources that can help you learn how to use MadMapper effectively and creatively. You can watch videos, read blogs, join forums, or take courses that teach you how to use MadMapper for video and light mapping.
    • -
    - -

    Conclusion

    - -

    MadMapper is a professional and easy video mapping software for Mac that allows you to create stunning video and light shows. However, it is not a free software, and you need to purchase a license to use it legally.

    - -

    If you want to get MadMapper 142 crack for Mac torrent download, you need to find a reliable torrent website that offers it, download the torrent file with a torrent client software, install the crack on your Mac, and enjoy using MadMapper without any limitations.

    - -

    However, getting MadMapper 142 crack for Mac torrent download also has some risks and benefits that you should be aware of before doing so. You might violate the copyright law, expose your Mac to viruses and malware, miss out on updates and support, but also save money and time, explore your creativity and skills.

    - -

    If you decide to get MadMapper 142 crack for Mac torrent download, you should follow some tips on how to use it effectively and safely. You should use a VPN service when downloading torrents, scan your Mac regularly with an antivirus software, backup your important files regularly, learn from online tutorials and resources.

    - -

    We hope this article has helped you understand how to get MadMapper 142 crack for Mac torrent download, what are the risks and benefits of doing so, and how to use it effectively and safely.

    -

    Why You Should Choose MadMapper 142 Crack For Mac Torrent Download

    - -

    There are many reasons why you should choose MadMapper 142 crack for Mac torrent download over other video mapping software. Here are some of them:

    - -
      -
    • MadMapper 142 crack for Mac torrent download is easy to use. You don't need any prior experience or knowledge to use MadMapper. It has a simple and intuitive interface that lets you drag and drop your media files, adjust the parameters, and preview the results in real time.
    • -
    • MadMapper 142 crack for Mac torrent download is versatile. You can use MadMapper for any kind of video and light mapping project, whether it is indoor or outdoor, small or large, simple or complex. You can use MadMapper with any kind of device, such as projectors, LED panels, DMX lights, or even lasers.
    • -
    • MadMapper 142 crack for Mac torrent download is powerful. You can use MadMapper to create high-resolution and high-quality video and light shows, using hardware acceleration and advanced features. You can use MadMapper to import and calibrate 3D objects, use shader-based video effects, sync the animations to the beat, draw graphics lines and animate them, and much more.
    • -
    • MadMapper 142 crack for Mac torrent download is compatible. You can use MadMapper with any kind of media file format, such as images, videos, audio, or even live feeds. You can also use MadMapper with other applications or sensors, using MIDI, DMX, ArtNET, sACN, OSC, Syphon/Spout protocols.
    • -
    • MadMapper 142 crack for Mac torrent download is affordable. You can get MadMapper 142 crack for Mac torrent download for free, without paying any license fee or subscription fee. You can save money and time that you would otherwise spend on buying or finding a alternative software.
    • -
    - -

    How to Get Started with MadMapper 142 Crack For Mac Torrent Download

    - -

    If you want to get started with MadMapper 142 crack for Mac torrent download, you need to follow these steps:

    - -
      -
    1. Download and install MadMapper 142 crack for Mac torrent download on your Mac. You can follow the instructions in the previous section of this article to do so.
    2. -
    3. Launch MadMapper and create a new project. You can choose a template or start from scratch.
    4. -
    5. Add your media files to the project. You can drag and drop them from your computer or from the library panel.
    6. -
    7. Map your media files to the output surfaces. You can use the output panel to select the output device and adjust the settings. You can also use the mapping panel to create and edit the surfaces and assign the media files to them.
    8. -
    9. Preview and adjust your project. You can use the preview panel to see how your project looks on the output device. You can also use the control panel to modify the parameters of your media files and surfaces.
    10. -
    11. Save and export your project. You can use the file menu to save your project as a .mad file or export it as a video file or a Minimad file.
    12. -
    - -

    Tips and Tricks for Using MadMapper 142 Crack For Mac Torrent Download

    - -

    To use MadMapper 142 crack for Mac torrent download more effectively and creatively, you can follow some tips and tricks that we have gathered for you. Here are some of them:

    - -
      -
    • Use keyboard shortcuts to speed up your workflow. You can find a list of keyboard shortcuts in the help menu or on the official website of MadMapper.
    • -
    • Use presets to save and load your settings. You can use presets to store and recall your media files, surfaces, effects, animations, and other settings. You can find presets in the library panel or create your own presets.
    • -
    • Use cues to create sequences and transitions. You can use cues to store and recall different states of your project, such as media files, surfaces, effects, animations, etc. You can find cues in the cue panel or create your own cues.
    • -
    • Use modules to add interactivity and functionality. You can use modules to add features that are not available in MadMapper by default, such as sensors, cameras, webcams, audio analysis, etc. You can find modules in the module panel or create your own modules using Python scripting.
    • -
    • Use online tutorials and resources to learn more about MadMapper. There are many online tutorials and resources that can help you learn more about MadMapper and how to use it for different kinds of projects. You can watch videos on YouTube, read blogs on Medium, join forums on Facebook or Reddit, or take courses on Udemy or Skillshare.
    • -
    - -

    Conclusion

    - -

    In this article, we have shown you how to get MadMapper 142 crack for Mac torrent download, why you should choose it over other video mapping software, how to get started with it, and some tips and tricks for using it effectively and creatively.

    - -

    We hope this article has helped you understand how to get MadMapper 142 crack for Mac torrent download and how to use it for your video and light mapping projects.

    - -

    If you have any questions or feedback about this article or MadMapper 142 crack for Mac torrent download, feel free to leave a comment below or contact us via email.

    -

    Conclusion

    - -

    MadMapper 142 crack for Mac torrent download is a great choice for anyone who wants to create stunning video and light shows using projectors, LED panels, or other devices. It is easy to use, versatile, powerful, compatible, and affordable. You can get MadMapper 142 crack for Mac torrent download for free and use it without any limitations or restrictions.

    - -

    However, you should also be aware of the risks and benefits of getting MadMapper 142 crack for Mac torrent download. You might violate the copyright law, expose your Mac to viruses and malware, miss out on updates and support, but also save money and time, explore your creativity and skills.

    - -

    If you decide to get MadMapper 142 crack for Mac torrent download, you should follow some tips on how to use it effectively and safely. You should use a VPN service when downloading torrents, scan your Mac regularly with an antivirus software, backup your important files regularly, learn from online tutorials and resources.

    - -

    We hope this article has helped you understand how to get MadMapper 142 crack for Mac torrent download and how to use it for your video and light mapping projects.

    3cee63e6c2
    -
    -
    \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/ManjhiTheMountainManmoviefreedownloadutorrent.md b/spaces/diacanFperku/AutoGPT/ManjhiTheMountainManmoviefreedownloadutorrent.md deleted file mode 100644 index 6dc08fdf3f5cd989aefb7776c6e0386e2307070d..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/ManjhiTheMountainManmoviefreedownloadutorrent.md +++ /dev/null @@ -1,9 +0,0 @@ -
    -

    MagicJewledoc wos from HTTPS to HTTP. https://finebtmoviefreedownload.com/ MagicJewledoc wos from HTTPS to HTTP. https://mediapool.org/stories/1-Untitled-1 -v7b17bfd26b https://coub.com/stories/2970118-snap-manjhithemountainmanmoviefreedownloadutorrent. html " class="_ " data-tooltip-is-in-page-title="true">ManjhiTheMountainManmoviefreedownloadutorrent

    -

    ManjhiTheMountainManmoviefreedownloadutorrent


    DOWNLOAD ••• https://gohhs.com/2uFTSW



    -

    http://www.midnights-dream.com/wp-content/uploads/2018/11/download-manjhi-the-mountain-man-moviefreedownloadutorrent-midnight.jpg https://coub.com/stories/3232512-install-manjhithemountainmanmoviefreedownloadutorrent.

    -

    https://coub.com/stories/1412575-manjhithemountainmanmoviefreedownloadutorrent. https://coub.com/stories/2646358-manjhithemountainmanmoviefreedownloadutorrent. https://coub.com/stories/2612792-manjhithemountainmanmoviefreedownloadutorrent. https://coub.com/stories/1593647-install-manjhithemountainmanmoviefreedownloadutorrent. https://coub.com/stories/1727750-install-manjhithemountainmanmoviefreedownloadutorrent.

    -

    https://coub.com/stories/3142097-install-manjhithemountainmanmoviefreedownloadutorrent. https://coub.com/stories/2646358-install-manjhithemountainmanmoviefreedownloadutorrent. https://coub.com/stories/3002055-manjhithemountainmanmoviefreedownloadutorrent. https://coub.com/stories/3289837-install-manjhithemountainmanmoviefreedownloadutorrent. https://coub.com/stories/2612792-install-manjhithemountainmanmoviefreedownloadutorrent. https://coub.com/stories/1593647-install-manjhithemountainmanmoviefreedownloadutorrent. https://coub.com/stories/1727750-install-manjhithemountainmanmoviefreedownloadutorrent.

    -

    899543212b
    -
    -
    \ No newline at end of file diff --git a/spaces/digitalxingtong/Kino-Bert-VITS2/bert_gen.py b/spaces/digitalxingtong/Kino-Bert-VITS2/bert_gen.py deleted file mode 100644 index 467655b2c4171608ad690fe7dec350db85f84f1b..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Kino-Bert-VITS2/bert_gen.py +++ /dev/null @@ -1,53 +0,0 @@ -import torch -from torch.utils.data import DataLoader -from multiprocessing import Pool -import commons -import utils -from data_utils import TextAudioSpeakerLoader, TextAudioSpeakerCollate -from tqdm import tqdm -import warnings - -from text import cleaned_text_to_sequence, get_bert - -config_path = 'configs/config.json' -hps = utils.get_hparams_from_file(config_path) - -def process_line(line): - _id, spk, language_str, text, phones, tone, word2ph = line.strip().split("|") - phone = phones.split(" ") - tone = [int(i) for i in tone.split(" ")] - word2ph = [int(i) for i in word2ph.split(" ")] - w2pho = [i for i in word2ph] - word2ph = [i for i in word2ph] - phone, tone, language = cleaned_text_to_sequence(phone, tone, language_str) - - if hps.data.add_blank: - phone = commons.intersperse(phone, 0) - tone = commons.intersperse(tone, 0) - language = commons.intersperse(language, 0) - for i in range(len(word2ph)): - word2ph[i] = word2ph[i] * 2 - word2ph[0] += 1 - wav_path = f'{_id}' - - bert_path = wav_path.replace(".wav", ".bert.pt") - try: - bert = torch.load(bert_path) - assert bert.shape[-1] == len(phone) - except: - bert = get_bert(text, word2ph, language_str) - assert bert.shape[-1] == len(phone) - torch.save(bert, bert_path) - - -if __name__ == '__main__': - lines = [] - with open(hps.data.training_files, encoding='utf-8' ) as f: - lines.extend(f.readlines()) - - # with open(hps.data.validation_files, encoding='utf-8' ) as f: - # lines.extend(f.readlines()) - - with Pool(processes=2) as pool: #A100 40GB suitable config,if coom,please decrease the processess number. 
- for _ in tqdm(pool.imap_unordered(process_line, lines)): - pass diff --git a/spaces/digitalxingtong/Xingtong-Longread-Dongmuchang-Bert-VITS2/losses.py b/spaces/digitalxingtong/Xingtong-Longread-Dongmuchang-Bert-VITS2/losses.py deleted file mode 100644 index fb22a0e834dd87edaa37bb8190eee2c3c7abe0d5..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Xingtong-Longread-Dongmuchang-Bert-VITS2/losses.py +++ /dev/null @@ -1,61 +0,0 @@ -import torch -from torch.nn import functional as F - -import commons - - -def feature_loss(fmap_r, fmap_g): - loss = 0 - for dr, dg in zip(fmap_r, fmap_g): - for rl, gl in zip(dr, dg): - rl = rl.float().detach() - gl = gl.float() - loss += torch.mean(torch.abs(rl - gl)) - - return loss * 2 - - -def discriminator_loss(disc_real_outputs, disc_generated_outputs): - loss = 0 - r_losses = [] - g_losses = [] - for dr, dg in zip(disc_real_outputs, disc_generated_outputs): - dr = dr.float() - dg = dg.float() - r_loss = torch.mean((1-dr)**2) - g_loss = torch.mean(dg**2) - loss += (r_loss + g_loss) - r_losses.append(r_loss.item()) - g_losses.append(g_loss.item()) - - return loss, r_losses, g_losses - - -def generator_loss(disc_outputs): - loss = 0 - gen_losses = [] - for dg in disc_outputs: - dg = dg.float() - l = torch.mean((1-dg)**2) - gen_losses.append(l) - loss += l - - return loss, gen_losses - - -def kl_loss(z_p, logs_q, m_p, logs_p, z_mask): - """ - z_p, logs_q: [b, h, t_t] - m_p, logs_p: [b, h, t_t] - """ - z_p = z_p.float() - logs_q = logs_q.float() - m_p = m_p.float() - logs_p = logs_p.float() - z_mask = z_mask.float() - - kl = logs_p - logs_q - 0.5 - kl += 0.5 * ((z_p - m_p)**2) * torch.exp(-2. * logs_p) - kl = torch.sum(kl * z_mask) - l = kl / torch.sum(z_mask) - return l diff --git a/spaces/digitalxingtong/Xingtong-Read-Dongmuchang-Bert-VITS2/text/symbols.py b/spaces/digitalxingtong/Xingtong-Read-Dongmuchang-Bert-VITS2/text/symbols.py deleted file mode 100644 index 9dfae4e633829f20c4fd767b1c7a9198911ed801..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Xingtong-Read-Dongmuchang-Bert-VITS2/text/symbols.py +++ /dev/null @@ -1,51 +0,0 @@ -punctuation = ['!', '?', '…', ",", ".", "'", '-'] -pu_symbols = punctuation + ["SP", "UNK"] -pad = '_' - -# chinese -zh_symbols = ['E', 'En', 'a', 'ai', 'an', 'ang', 'ao', 'b', 'c', 'ch', 'd', 'e', 'ei', 'en', 'eng', 'er', 'f', 'g', 'h', - 'i', 'i0', 'ia', 'ian', 'iang', 'iao', 'ie', 'in', 'ing', 'iong', 'ir', 'iu', 'j', 'k', 'l', 'm', 'n', 'o', - 'ong', - 'ou', 'p', 'q', 'r', 's', 'sh', 't', 'u', 'ua', 'uai', 'uan', 'uang', 'ui', 'un', 'uo', 'v', 'van', 've', 'vn', - 'w', 'x', 'y', 'z', 'zh', - "AA", "EE", "OO"] -num_zh_tones = 6 - -# japanese -ja_symbols = ['I', 'N', 'U', 'a', 'b', 'by', 'ch', 'cl', 'd', 'dy', 'e', 'f', 'g', 'gy', 'h', 'hy', 'i', 'j', 'k', 'ky', - 'm', 'my', 'n', 'ny', 'o', 'p', 'py', 'r', 'ry', 's', 'sh', 't', 'ts', 'u', 'V', 'w', 'y', 'z'] -num_ja_tones = 1 - -# English -en_symbols = ['aa', 'ae', 'ah', 'ao', 'aw', 'ay', 'b', 'ch', 'd', 'dh', 'eh', 'er', 'ey', 'f', 'g', 'hh', 'ih', 'iy', - 'jh', 'k', 'l', 'm', 'n', 'ng', 'ow', 'oy', 'p', 'r', 's', - 'sh', 't', 'th', 'uh', 'uw', 'V', 'w', 'y', 'z', 'zh'] -num_en_tones = 4 - -# combine all symbols -normal_symbols = sorted(set(zh_symbols + ja_symbols + en_symbols)) -symbols = [pad] + normal_symbols + pu_symbols -sil_phonemes_ids = [symbols.index(i) for i in pu_symbols] - -# combine all tones -num_tones = num_zh_tones + num_ja_tones + num_en_tones - -# language maps -language_id_map = { - 'ZH': 0, - "JA": 1, - "EN": 2 -} 
-num_languages = len(language_id_map.keys()) - -language_tone_start_map = { - 'ZH': 0, - "JA": num_zh_tones, - "EN": num_zh_tones + num_ja_tones -} - -if __name__ == '__main__': - a = set(zh_symbols) - b = set(en_symbols) - print(sorted(a&b)) - diff --git a/spaces/dineshreddy/WALT/walt/datasets/pipelines/test_time_aug.py b/spaces/dineshreddy/WALT/walt/datasets/pipelines/test_time_aug.py deleted file mode 100644 index b6226e040499882c99f15594c66ebf3d07829168..0000000000000000000000000000000000000000 --- a/spaces/dineshreddy/WALT/walt/datasets/pipelines/test_time_aug.py +++ /dev/null @@ -1,119 +0,0 @@ -import warnings - -import mmcv - -from ..builder import PIPELINES -from .compose import Compose - - -@PIPELINES.register_module() -class MultiScaleFlipAug(object): - """Test-time augmentation with multiple scales and flipping. - - An example configuration is as followed: - - .. code-block:: - - img_scale=[(1333, 400), (1333, 800)], - flip=True, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ] - - After MultiScaleFLipAug with above configuration, the results are wrapped - into lists of the same length as followed: - - .. code-block:: - - dict( - img=[...], - img_shape=[...], - scale=[(1333, 400), (1333, 400), (1333, 800), (1333, 800)] - flip=[False, True, False, True] - ... - ) - - Args: - transforms (list[dict]): Transforms to apply in each augmentation. - img_scale (tuple | list[tuple] | None): Images scales for resizing. - scale_factor (float | list[float] | None): Scale factors for resizing. - flip (bool): Whether apply flip augmentation. Default: False. - flip_direction (str | list[str]): Flip augmentation directions, - options are "horizontal" and "vertical". If flip_direction is list, - multiple flip augmentations will be applied. - It has no effect when flip == False. Default: "horizontal". - """ - - def __init__(self, - transforms, - img_scale=None, - scale_factor=None, - flip=False, - flip_direction='horizontal'): - self.transforms = Compose(transforms) - assert (img_scale is None) ^ (scale_factor is None), ( - 'Must have but only one variable can be setted') - if img_scale is not None: - self.img_scale = img_scale if isinstance(img_scale, - list) else [img_scale] - self.scale_key = 'scale' - assert mmcv.is_list_of(self.img_scale, tuple) - else: - self.img_scale = scale_factor if isinstance( - scale_factor, list) else [scale_factor] - self.scale_key = 'scale_factor' - - self.flip = flip - self.flip_direction = flip_direction if isinstance( - flip_direction, list) else [flip_direction] - assert mmcv.is_list_of(self.flip_direction, str) - if not self.flip and self.flip_direction != ['horizontal']: - warnings.warn( - 'flip_direction has no effect when flip is set to False') - if (self.flip - and not any([t['type'] == 'RandomFlip' for t in transforms])): - warnings.warn( - 'flip has no effect when RandomFlip is not in transforms') - - def __call__(self, results): - """Call function to apply test time augment transforms on results. - - Args: - results (dict): Result dict contains the data to transform. - - Returns: - dict[str: list]: The augmented data, where each value is wrapped - into a list. 
- """ - - aug_data = [] - flip_args = [(False, None)] - if self.flip: - flip_args += [(True, direction) - for direction in self.flip_direction] - for scale in self.img_scale: - for flip, direction in flip_args: - _results = results.copy() - _results[self.scale_key] = scale - _results['flip'] = flip - _results['flip_direction'] = direction - data = self.transforms(_results) - aug_data.append(data) - # list of dict to dict of list - aug_data_dict = {key: [] for key in aug_data[0]} - for data in aug_data: - for key, val in data.items(): - aug_data_dict[key].append(val) - return aug_data_dict - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(transforms={self.transforms}, ' - repr_str += f'img_scale={self.img_scale}, flip={self.flip}, ' - repr_str += f'flip_direction={self.flip_direction})' - return repr_str diff --git a/spaces/dirge/voicevox/voicevox_engine/setting/__init__.py b/spaces/dirge/voicevox/voicevox_engine/setting/__init__.py deleted file mode 100644 index ff399f92b662072737fe036b7c9832997a76a553..0000000000000000000000000000000000000000 --- a/spaces/dirge/voicevox/voicevox_engine/setting/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -from .Setting import CorsPolicyMode, Setting -from .SettingLoader import USER_SETTING_PATH, SettingLoader - -__all__ = [ - "USER_SETTING_PATH", - "CorsPolicyMode", - "Setting", - "SettingLoader", -] diff --git a/spaces/djl234/UFO/app.py b/spaces/djl234/UFO/app.py deleted file mode 100644 index 1478471ce111f608186d40bd237351e5651c44b0..0000000000000000000000000000000000000000 --- a/spaces/djl234/UFO/app.py +++ /dev/null @@ -1,146 +0,0 @@ -import tqdm -#import fastCNN -import numpy as np - - -import gradio as gr -import os -#os.system("sudo apt-get install nvIDia-cuda-toolkit") -os.system("pip3 install torch") -#os.system("/usr/local/bin/python -m pip install --upgrade pip") -os.system("pip3 install collections") -os.system("pip3 install torchvision") -os.system("pip3 install einops") -aaaa=0 -os.system("pip3 install pydensecrf") -#os.system("pip install argparse") -import pydensecrf.densecrf as dcrf -from PIL import Image -import torch -import torch.nn.functional as F -from torchvision import transforms -from model_video import build_model -import numpy as np -import collections - -def crf_refine(img, annos): - print(img.shape,annos.shape) - def _sigmoid(x): - return 1 / (1 + np.exp(-x)) - - assert img.dtype == np.uint8 - assert annos.dtype == np.uint8 - assert img.shape[:2] == annos.shape - - # img and annos should be np array with data type uint8 - - EPSILON = 1e-8 - - M = 2 # salient or not - tau = 1.05 - # Setup the CRF model - d = dcrf.DenseCRF2D(img.shape[1], img.shape[0], M) - - anno_norm = annos / 255. 
- - n_energy = -np.log((1.0 - anno_norm + EPSILON)) / (tau * _sigmoid(1 - anno_norm)) - p_energy = -np.log(anno_norm + EPSILON) / (tau * _sigmoid(anno_norm)) - - U = np.zeros((M, img.shape[0] * img.shape[1]), dtype='float32') - U[0, :] = n_energy.flatten() - U[1, :] = p_energy.flatten() - - d.setUnaryEnergy(U) - - d.addPairwiseGaussian(sxy=3, compat=3) - d.addPairwiseBilateral(sxy=60, srgb=5, rgbim=img, compat=5) - - # Do the inference - infer = np.array(d.inference(1)).astype('float32') - res = infer[1, :] - - res = res * 255 - res = res.reshape(img.shape[:2]) - return res.astype('uint8') - -#import argparse -device='cpu' -net = build_model(device).to(device) -#net=torch.nn.DataParallel(net) -model_path = 'image_best.pth' -print(model_path) -weight=torch.load(model_path,map_location=torch.device(device)) -#print(type(weight)) -new_dict=collections.OrderedDict() -for k in weight.keys(): - new_dict[k[len('module.'):]]=weight[k] -net.load_state_dict(new_dict) -net.eval() -net = net.to(device) -def test(gpu_id, net, img_list, group_size, img_size): - print('test') - #device=device - hl,wl=[_.shape[0] for _ in img_list],[_.shape[1] for _ in img_list] - img_transform = transforms.Compose([transforms.Resize((img_size, img_size)), transforms.ToTensor(), - transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])]) - img_transform_gray = transforms.Compose([transforms.Resize((img_size, img_size)), transforms.ToTensor(), - transforms.Normalize(mean=[0.449], std=[0.226])]) - with torch.no_grad(): - - group_img=torch.rand(5,3,224,224) - for i in range(5): - group_img[i]=img_transform(Image.fromarray(img_list[i])) - _,pred_mask=net(group_img*1) - pred_mask=(pred_mask.detach().squeeze()*255)#.numpy().astype(np.uint8) - #pred_mask=[F.interpolate(pred_mask[i].reshape(1,1,pred_mask[i].shape[-2],pred_mask[i].shape[-1]),size=(size,size),mode='bilinear').squeeze().numpy().astype(np.uint8) for i in range(5)] - img_resize=[((group_img[i]-group_img[i].min())/(group_img[i].max()-group_img[i].min())*255).permute(1,2,0).contiguous().numpy().astype(np.uint8) - for i in range(5)] - pred_mask=[crf_refine(img_resize[i],pred_mask[i].numpy().astype(np.uint8)) for i in range(5)] - #for i in range(5): - # print(img_list[i].shape,pred_mask[i].shape) - #pred_mask=[crf_refine(img_list[i],pred_mask[i]) for i in range(5)] - print(pred_mask[0].shape) - white=(torch.ones(2,pred_mask[0].shape[1],3)*255).long() - result = [torch.cat([torch.from_numpy(img_resize[i]),white,torch.from_numpy(pred_mask[i]).unsqueeze(2).repeat(1,1,3)],dim=0).numpy() for i in range(5)] - #w, h = 224,224#Image.open(image_list[i][j]).size - #result = result.resize((w, h), Image.BILINEAR) - #result.convert('L').save('0.png') - print('done') - return result - -img_lst=[(torch.rand(352,352,3)*255).numpy().astype(np.uint8) for i in range(5)] - - - - - - -#simly test -res=test('cpu',net,img_lst,5,224) -'''for i in range(5): - assert res[i].shape[0]==352 and res[i].shape[1]==352 and res[i].shape[2]==3''' -def sepia(img1,img2,img3,img4,img5): - print('sepia') - '''ans=[] - print(len(input_imgs)) - for input_img in input_imgs: - sepia_filter = np.array( - [[0.393, 0.769, 0.189], [0.349, 0.686, 0.168], [0.272, 0.534, 0.131]] - ) - sepia_img = input_img.dot(sepia_filter.T) - sepia_img /= sepia_img.max() - ans.append(input_img)''' - img_list=[img1,img2,img3,img4,img5] - h_list,w_list=[_.shape[0] for _ in img_list],[_.shape[1] for _ in img_list] - #print(type(img1)) - #print(img1.shape) - result_list=test(device,net,img_list,5,224) - 
#result_list=[result_list[i].resize((w_list[i], h_list[i]), Image.BILINEAR) for i in range(5)] - img1,img2,img3,img4,img5=result_list#test('cpu',net,img_list,5,224) - white=(torch.ones(img1.shape[0],2,3)*255).numpy().astype(np.uint8) - return np.concatenate([img1,white,img2,white,img3,white,img4,white,img5],axis=1) - -#gr.Image(shape=(224, 2)) -#demo = gr.Interface(sepia, inputs=["image","image","image","image","image"], outputs=["image","image","image","image","image"])#gr.Interface(sepia, gr.Image(shape=(200, 200)), "image") -demo = gr.Interface(sepia, inputs=["image","image","image","image","image"], outputs=["image"]) -demo.launch(debug=True) diff --git a/spaces/ds520/bingo/src/components/ui/codeblock.tsx b/spaces/ds520/bingo/src/components/ui/codeblock.tsx deleted file mode 100644 index aabda4e3b59f4e36b6ab79feb19d8d18b70e881b..0000000000000000000000000000000000000000 --- a/spaces/ds520/bingo/src/components/ui/codeblock.tsx +++ /dev/null @@ -1,142 +0,0 @@ -'use client' - -import { FC, memo } from 'react' -import { Prism as SyntaxHighlighter } from 'react-syntax-highlighter' -import { coldarkDark } from 'react-syntax-highlighter/dist/cjs/styles/prism' - -import { useCopyToClipboard } from '@/lib/hooks/use-copy-to-clipboard' -import { IconCheck, IconCopy, IconDownload } from '@/components/ui/icons' -import { Button } from '@/components/ui/button' - -interface Props { - language: string - value: string -} - -interface languageMap { - [key: string]: string | undefined -} - -export const programmingLanguages: languageMap = { - javascript: '.js', - python: '.py', - java: '.java', - c: '.c', - cpp: '.cpp', - 'c++': '.cpp', - 'c#': '.cs', - ruby: '.rb', - php: '.php', - swift: '.swift', - 'objective-c': '.m', - kotlin: '.kt', - typescript: '.ts', - go: '.go', - perl: '.pl', - rust: '.rs', - scala: '.scala', - haskell: '.hs', - lua: '.lua', - shell: '.sh', - sql: '.sql', - html: '.html', - css: '.css' - // add more file extensions here, make sure the key is same as language prop in CodeBlock.tsx component -} - -export const generateRandomString = (length: number, lowercase = false) => { - const chars = 'ABCDEFGHJKLMNPQRSTUVWXY3456789' // excluding similar looking characters like Z, 2, I, 1, O, 0 - let result = '' - for (let i = 0; i < length; i++) { - result += chars.charAt(Math.floor(Math.random() * chars.length)) - } - return lowercase ? result.toLowerCase() : result -} - -const CodeBlock: FC = memo(({ language, value }) => { - const { isCopied, copyToClipboard } = useCopyToClipboard({ timeout: 2000 }) - - const downloadAsFile = () => { - if (typeof window === 'undefined') { - return - } - const fileExtension = programmingLanguages[language] || '.file' - const suggestedFileName = `file-${generateRandomString( - 3, - true - )}${fileExtension}` - const fileName = window.prompt('Enter file name' || '', suggestedFileName) - - if (!fileName) { - // User pressed cancel on prompt. - return - } - - const blob = new Blob([value], { type: 'text/plain' }) - const url = URL.createObjectURL(blob) - const link = document.createElement('a') - link.download = fileName - link.href = url - link.style.display = 'none' - document.body.appendChild(link) - link.click() - document.body.removeChild(link) - URL.revokeObjectURL(url) - } - - const onCopy = () => { - if (isCopied) return - copyToClipboard(value) - } - - return ( -
    -
    - {language} -
    - - -
    -
    - - {value} - -
    - ) -}) -CodeBlock.displayName = 'CodeBlock' - -export { CodeBlock } diff --git a/spaces/dteam/chatgpt-dteam/bin_public/config/Kelpy-Codos.js b/spaces/dteam/chatgpt-dteam/bin_public/config/Kelpy-Codos.js deleted file mode 100644 index cfbaeedb4f371dfb5fe157db545b364046fca3e1..0000000000000000000000000000000000000000 --- a/spaces/dteam/chatgpt-dteam/bin_public/config/Kelpy-Codos.js +++ /dev/null @@ -1,76 +0,0 @@ -// ==UserScript== -// @name Kelpy Codos -// @namespace https://github.com/Keldos-Li/Kelpy-Codos -// @version 1.0.5 -// @author Keldos; https://keldos.me/ -// @description Add copy button to PRE tags before CODE tag, for Chuanhu ChatGPT especially. -// Based on Chuanhu ChatGPT version: ac04408 (2023-3-22) -// @license GPL-3.0 -// @grant none -// ==/UserScript== - -(function () { - 'use strict'; - - function addCopyButton(pre) { - var code = pre.querySelector('code'); - if (!code) { - return; // 如果没有找到 元素,则不添加按钮 - } - var firstChild = code.firstChild; - if (!firstChild) { - return; // 如果 元素没有子节点,则不添加按钮 - } - var button = document.createElement('button'); - button.textContent = '\uD83D\uDCCE'; // 使用 📎 符号作为“复制”按钮的文本 - button.style.position = 'relative'; - button.style.float = 'right'; - button.style.fontSize = '1em'; // 可选:调整按钮大小 - button.style.background = 'none'; // 可选:去掉背景颜色 - button.style.border = 'none'; // 可选:去掉边框 - button.style.cursor = 'pointer'; // 可选:显示指针样式 - button.addEventListener('click', function () { - var range = document.createRange(); - range.selectNodeContents(code); - range.setStartBefore(firstChild); // 将范围设置为第一个子节点之前 - var selection = window.getSelection(); - selection.removeAllRanges(); - selection.addRange(range); - - try { - var success = document.execCommand('copy'); - if (success) { - button.textContent = '\u2714'; - setTimeout(function () { - button.textContent = '\uD83D\uDCCE'; // 恢复按钮为“复制” - }, 2000); - } else { - button.textContent = '\u2716'; - } - } catch (e) { - console.error(e); - button.textContent = '\u2716'; - } - - selection.removeAllRanges(); - }); - code.insertBefore(button, firstChild); // 将按钮插入到第一个子元素之前 - } - - function handleNewElements(mutationsList, observer) { - for (var mutation of mutationsList) { - if (mutation.type === 'childList') { - for (var node of mutation.addedNodes) { - if (node.nodeName === 'PRE') { - addCopyButton(node); - } - } - } - } - } - - var observer = new MutationObserver(handleNewElements); - observer.observe(document.documentElement, { childList: true, subtree: true }); - - document.querySelectorAll('pre').forEach(addCopyButton); -})(); diff --git a/spaces/emc348/faces-through-time/criteria/helpers.py b/spaces/emc348/faces-through-time/criteria/helpers.py deleted file mode 100644 index b51fdf97141407fcc1c9d249a086ddbfd042469f..0000000000000000000000000000000000000000 --- a/spaces/emc348/faces-through-time/criteria/helpers.py +++ /dev/null @@ -1,119 +0,0 @@ -from collections import namedtuple -import torch -from torch.nn import Conv2d, BatchNorm2d, PReLU, ReLU, Sigmoid, MaxPool2d, AdaptiveAvgPool2d, Sequential, Module - -""" -ArcFace implementation from [TreB1eN](https://github.com/TreB1eN/InsightFace_Pytorch) -""" - - -class Flatten(Module): - def forward(self, input): - return input.view(input.size(0), -1) - - -def l2_norm(input, axis=1): - norm = torch.norm(input, 2, axis, True) - output = torch.div(input, norm) - return output - - -class Bottleneck(namedtuple('Block', ['in_channel', 'depth', 'stride'])): - """ A named tuple describing a ResNet block. 
""" - - -def get_block(in_channel, depth, num_units, stride=2): - return [Bottleneck(in_channel, depth, stride)] + [Bottleneck(depth, depth, 1) for i in range(num_units - 1)] - - -def get_blocks(num_layers): - if num_layers == 50: - blocks = [ - get_block(in_channel=64, depth=64, num_units=3), - get_block(in_channel=64, depth=128, num_units=4), - get_block(in_channel=128, depth=256, num_units=14), - get_block(in_channel=256, depth=512, num_units=3) - ] - elif num_layers == 100: - blocks = [ - get_block(in_channel=64, depth=64, num_units=3), - get_block(in_channel=64, depth=128, num_units=13), - get_block(in_channel=128, depth=256, num_units=30), - get_block(in_channel=256, depth=512, num_units=3) - ] - elif num_layers == 152: - blocks = [ - get_block(in_channel=64, depth=64, num_units=3), - get_block(in_channel=64, depth=128, num_units=8), - get_block(in_channel=128, depth=256, num_units=36), - get_block(in_channel=256, depth=512, num_units=3) - ] - else: - raise ValueError("Invalid number of layers: {}. Must be one of [50, 100, 152]".format(num_layers)) - return blocks - - -class SEModule(Module): - def __init__(self, channels, reduction): - super(SEModule, self).__init__() - self.avg_pool = AdaptiveAvgPool2d(1) - self.fc1 = Conv2d(channels, channels // reduction, kernel_size=1, padding=0, bias=False) - self.relu = ReLU(inplace=True) - self.fc2 = Conv2d(channels // reduction, channels, kernel_size=1, padding=0, bias=False) - self.sigmoid = Sigmoid() - - def forward(self, x): - module_input = x - x = self.avg_pool(x) - x = self.fc1(x) - x = self.relu(x) - x = self.fc2(x) - x = self.sigmoid(x) - return module_input * x - - -class bottleneck_IR(Module): - def __init__(self, in_channel, depth, stride): - super(bottleneck_IR, self).__init__() - if in_channel == depth: - self.shortcut_layer = MaxPool2d(1, stride) - else: - self.shortcut_layer = Sequential( - Conv2d(in_channel, depth, (1, 1), stride, bias=False), - BatchNorm2d(depth) - ) - self.res_layer = Sequential( - BatchNorm2d(in_channel), - Conv2d(in_channel, depth, (3, 3), (1, 1), 1, bias=False), PReLU(depth), - Conv2d(depth, depth, (3, 3), stride, 1, bias=False), BatchNorm2d(depth) - ) - - def forward(self, x): - shortcut = self.shortcut_layer(x) - res = self.res_layer(x) - return res + shortcut - - -class bottleneck_IR_SE(Module): - def __init__(self, in_channel, depth, stride): - super(bottleneck_IR_SE, self).__init__() - if in_channel == depth: - self.shortcut_layer = MaxPool2d(1, stride) - else: - self.shortcut_layer = Sequential( - Conv2d(in_channel, depth, (1, 1), stride, bias=False), - BatchNorm2d(depth) - ) - self.res_layer = Sequential( - BatchNorm2d(in_channel), - Conv2d(in_channel, depth, (3, 3), (1, 1), 1, bias=False), - PReLU(depth), - Conv2d(depth, depth, (3, 3), stride, 1, bias=False), - BatchNorm2d(depth), - SEModule(depth, 16) - ) - - def forward(self, x): - shortcut = self.shortcut_layer(x) - res = self.res_layer(x) - return res + shortcut diff --git a/spaces/end000/sberbank-ai-FRED-T5-1.7B/app.py b/spaces/end000/sberbank-ai-FRED-T5-1.7B/app.py deleted file mode 100644 index 643f424bb71fb6f1320c86b211741d021f1cf6b9..0000000000000000000000000000000000000000 --- a/spaces/end000/sberbank-ai-FRED-T5-1.7B/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/sberbank-ai/FRED-T5-1.7B").launch() \ No newline at end of file diff --git a/spaces/exbert-project/exbert/client/src/ts/main.ts b/spaces/exbert-project/exbert/client/src/ts/main.ts deleted file mode 100644 index 
d162d45c02ce4c4c22174753208487c63765071f..0000000000000000000000000000000000000000 --- a/spaces/exbert-project/exbert/client/src/ts/main.ts +++ /dev/null @@ -1,12 +0,0 @@ -import { MainGraphic } from './vis/attentionVis' - -import "!file-loader?name=exBERT.html!../exBERT.html"; -import "!file-loader?name=index.html!../index.html"; -import "../css/main.scss" - -window.onload = () => { - const base = document.getElementById('attention-vis') - //@ts-ignore - const mainVis = new MainGraphic(base) - console.log("Done loading window"); -} diff --git a/spaces/facebook/MusicGen/tests/models/test_musicgen.py b/spaces/facebook/MusicGen/tests/models/test_musicgen.py deleted file mode 100644 index 2b32ac5d52e6ba3ba8f2b413e54e1b5ac5839016..0000000000000000000000000000000000000000 --- a/spaces/facebook/MusicGen/tests/models/test_musicgen.py +++ /dev/null @@ -1,65 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import pytest -import torch - -from audiocraft.models import MusicGen - - -class TestMusicGenModel: - def get_musicgen(self): - mg = MusicGen.get_pretrained(name='debug', device='cpu') - mg.set_generation_params(duration=2.0, extend_stride=2.) - return mg - - def test_base(self): - mg = self.get_musicgen() - assert mg.frame_rate == 25 - assert mg.sample_rate == 32000 - assert mg.audio_channels == 1 - - def test_generate_unconditional(self): - mg = self.get_musicgen() - wav = mg.generate_unconditional(3) - assert list(wav.shape) == [3, 1, 64000] - - def test_generate_continuation(self): - mg = self.get_musicgen() - prompt = torch.randn(3, 1, 32000) - wav = mg.generate_continuation(prompt, 32000) - assert list(wav.shape) == [3, 1, 64000] - - prompt = torch.randn(2, 1, 32000) - wav = mg.generate_continuation( - prompt, 32000, ['youpi', 'lapin dort']) - assert list(wav.shape) == [2, 1, 64000] - - prompt = torch.randn(2, 1, 32000) - with pytest.raises(AssertionError): - wav = mg.generate_continuation( - prompt, 32000, ['youpi', 'lapin dort', 'one too many']) - - def test_generate(self): - mg = self.get_musicgen() - wav = mg.generate( - ['youpi', 'lapin dort']) - assert list(wav.shape) == [2, 1, 64000] - - def test_generate_long(self): - mg = self.get_musicgen() - mg.max_duration = 3. - mg.set_generation_params(duration=4., extend_stride=2.) - wav = mg.generate( - ['youpi', 'lapin dort']) - assert list(wav.shape) == [2, 1, 32000 * 4] - - def test_generate_two_step_cfg(self): - mg = self.get_musicgen() - mg.set_generation_params(duration=2.0, extend_stride=2., two_step_cfg=True) - wav = mg.generate( - ['youpi', 'lapin dort']) - assert list(wav.shape) == [2, 1, 64000] diff --git a/spaces/falterWliame/Face_Mask_Detection/Cat Et License Keygen.zip.md b/spaces/falterWliame/Face_Mask_Detection/Cat Et License Keygen.zip.md deleted file mode 100644 index 25f2ed417a1671a7e171cd315f34325f6e124a6f..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Cat Et License Keygen.zip.md +++ /dev/null @@ -1,10 +0,0 @@ -

    cat et license keygen.zip


    Downloadhttps://urlca.com/2uDcIO



    -
-The laptop finally died after six years of daily use. I bought another one and need to reinstall ET. Can anyone provide a license key for CAT... I can't find it anywhere. -Also, if anyone knows whether it is possible to make a backup, please let me know. -Refresh -Well, after studying the answer, I realized that I was probably not destined to enjoy the benefits of a system restore, at least not until I sold it for $200 tonight. Good thing I saved my money. But I would still like to do it. -How about "opening the computer" to take out the hard drive?
    -
    -
    -

    diff --git a/spaces/falterWliame/Face_Mask_Detection/Facegen Modeller 3.5.3 Portable.md b/spaces/falterWliame/Face_Mask_Detection/Facegen Modeller 3.5.3 Portable.md deleted file mode 100644 index 84656c8d3a7ec1746a47b91e0f070e6215ffeaf6..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Facegen Modeller 3.5.3 Portable.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Facegen Modeller 3.5.3 Portable


    DOWNLOADhttps://urlca.com/2uDc8R



- -integrated into FaceGen 3D Print ... Interesting tutorials. How to run a portable version of ...
    -
    -
    -

    diff --git a/spaces/falterWliame/Face_Mask_Detection/Hacking The IKEA TRADFRI LED Power Supply.md b/spaces/falterWliame/Face_Mask_Detection/Hacking The IKEA TRADFRI LED Power Supply.md deleted file mode 100644 index 3d3f32d1389dc61249975af3e8ce17c54c365053..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Hacking The IKEA TRADFRI LED Power Supply.md +++ /dev/null @@ -1,8 +0,0 @@ -
    -

The bulb has a five-wire connection. The ground (red) is connected to the line, the white wire connects to ground, the yellow wire to the zel pin, the green wire to the A0 pin, and the blue wire to a remote control button. There is also an on/off button on the back of the remote. The LED strip dimmer cable has the on/off button and the wire connector. Bypass the on/off button and attach it to one of the LEDs (connect the red, yellow, and green wires). By using a diode to bridge the other two wires, you could supply power to the Arduino or Teensy from the remote, but this would not be very efficient because you would be wasting power. A better approach is to wire the remote directly to the microprocessor. Once you bypass the on/off button, you should have a working GPIO pin on your microprocessor that behaves just like a remote button.
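For illustration, here is a minimal sketch of reading that remote-button wire as a plain GPIO input. It assumes a MicroPython-capable board (such as an ESP32) and that the wire is connected to GPIO 4 with the internal pull-up enabled; the board choice and the pin number are assumptions for the example, not details from the original write-up, and an Arduino or Teensy would use the equivalent digitalRead logic instead.

```python
# Hypothetical sketch: poll the remote-button wire on a MicroPython board.
# GPIO 4 and the active-low wiring are assumptions for illustration only.
from machine import Pin
import time

button = Pin(4, Pin.IN, Pin.PULL_UP)  # remote wire idles high, goes low when "pressed"

while True:
    if button.value() == 0:        # the bypassed on/off button pulled the line low
        print("remote button pressed")
        time.sleep_ms(200)         # crude debounce so one press is not counted twice
    time.sleep_ms(10)
```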

    -

The IKEA bulb uses a Cree XP-E 3 LED and is often discussed as being very expensive. Truth be told, it is pretty expensive compared to a Sonoff device, but it is also a lot cheaper than the other available bulbs in the same range. Even so, the Cree is about double the size of the other bulbs and adds two extra external components, so it is much bulkier.

    -

    Hacking The IKEA TRADFRI LED Power Supply


    Download File >>> https://urlca.com/2uDd5W



    -

If you are building a system that can accept RGB colors, you will need a color controller with at least three channels. The absolute cheapest solution would be an Adafruit HUZZAH driving an RGB LED strip. It comes with an FTDI cable, but if you have a J-Link or Black Magic Probe, you can use that instead and have a USB connection rather than the FTDI cable.

    -

You will need a small 100 ohm resistor. If you have a meter, you can measure the LED forward voltage and work out the forward current. To get the actual forward current, figure out the voltage dropped across the resistor (the supply voltage minus the LED forward voltage) and divide it by the resistance; the current through the resistor is the current through the LED, as in the worked example below.
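A quick worked version of that arithmetic, in Python. The 5 V supply and the 3.2 V forward voltage are assumed example values for illustration only; measure your own LED as suggested above. The 100 ohm resistor is the one mentioned in the article.

```python
# Worked example of the series-resistor current calculation described above.
# supply_v and led_vf are assumed example values, not measurements from the article.
supply_v = 5.0   # supply voltage in volts (assumption)
led_vf = 3.2     # measured LED forward voltage in volts (assumption)
r_ohms = 100.0   # the 100 ohm series resistor from the article

v_across_resistor = supply_v - led_vf        # voltage left over for the resistor
led_current = v_across_resistor / r_ohms     # Ohm's law: I = V / R
print(f"LED current is roughly {led_current * 1000:.0f} mA")  # about 18 mA
```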

    -
    -
    \ No newline at end of file diff --git a/spaces/falterWliame/Face_Mask_Detection/Logic Works 5 Download Full Version.md b/spaces/falterWliame/Face_Mask_Detection/Logic Works 5 Download Full Version.md deleted file mode 100644 index fd8502e9a1bfb32201674fe5fa969e2e2e0490d2..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Logic Works 5 Download Full Version.md +++ /dev/null @@ -1,34 +0,0 @@ - -

    How to Download and Use LogicWorks 5 for Windows

    -

    LogicWorks 5 is a powerful and interactive circuit design software that allows you to create and simulate digital logic circuits on your computer. It is a great tool for learning and teaching the principles and practices of logic design. In this article, we will show you how to download and use LogicWorks 5 for Windows, as well as some of its features and benefits.

    -

    logic works 5 download full version


    DOWNLOADhttps://urlca.com/2uDctg



    -

    How to Download LogicWorks 5 for Windows

    -

    To download LogicWorks 5 for Windows, you need to purchase it from the official website of DesignWorks Solutions[^1^]. You can choose between two versions: version 5.8 UWP (current) or newer for Windows 10, or version 5.6 (current) or newer for Windows 7, Windows 8 or Windows 10. After you complete the payment, you will receive an email with a download link and a license key. You can then install the software on your computer by following the instructions.

    -

    How to Use LogicWorks 5 for Windows

    -

    To use LogicWorks 5 for Windows, you need to launch the software and create a new project. You can then use the schematic editor to draw and edit your circuit diagram using various components, such as gates, flip-flops, registers, counters, multiplexers, etc. You can also use busses, labels, probes and other features to organize and test your circuit. You can then run the simulation and observe the behavior of your circuit on the screen. You can also view the timing waveforms and change the input signals or device parameters interactively.

    -

    Features and Benefits of LogicWorks 5 for Windows

    -

LogicWorks 5 for Windows has many features and benefits that make it a superior circuit design tool. Some of them are:

    -
      -
    • It is fast and reliable, allowing you to run quick and efficient simulations on screen.
    • -
    • It is flexible and unlimited, enabling you to create and test any number of circuit elements from your computer.
    • -
    • It supports VHDL, a subset of the standard hardware description language that allows you to describe and simulate circuits using text.
    • -
    • It is compatible with DesignWorks, a professional design package that offers more advanced integration, analysis and testing capabilities.
    • -
    • It is easy to use and learn, with a user-friendly interface and comprehensive documentation.
    • -
    -

LogicWorks 5 for Windows is ideal software for anyone who wants to learn or teach digital logic design. It is also a useful tool for hobbyists and professionals who want to create and test their own circuits. If you are interested in downloading and using LogicWorks 5 for Windows, visit the official website of DesignWorks Solutions[^1^] today.

    - -

    How to Learn LogicWorks 5 for Windows

    -

    If you want to learn how to use LogicWorks 5 for Windows effectively, you can follow some of the tutorials and resources available online. For example, you can watch the video tutorials by James Tandon and Ahmad Lashgar[^2^] on YouTube, which cover the basics of schematic capture and simulation of digital circuits using LogicWorks 5. You can also refer to the official website of DesignWorks Solutions[^3^], which provides a demo version of LogicWorks 5, a user manual, a VHDL reference guide, and some sample projects.

    -

    Advantages of LogicWorks 5 for Windows

    -

    LogicWorks 5 for Windows has many advantages over other circuit design software. Some of them are:

    -

    -
      -
    • It is affordable and easy to install, with a one-time purchase and no subscription fees.
    • -
    • It is compatible with most Windows operating systems, from Windows 7 to Windows 10.
    • -
    • It is intuitive and user-friendly, with a simple drag-and-drop interface and a minimal learning curve.
    • -
    • It is accurate and realistic, with a high-fidelity simulation engine and a large library of standard logic devices.
    • -
    • It is versatile and customizable, with the ability to create your own components, import and export VHDL code, and print or export your schematics.
    • -
    -

LogicWorks 5 for Windows is a must-have for anyone who wants to design and simulate digital logic circuits on their computer. It is suitable for students, teachers, hobbyists and professionals alike. If you are interested in downloading and using LogicWorks 5 for Windows, visit the official website of DesignWorks Solutions[^3^] today.

    -
    -
    \ No newline at end of file diff --git a/spaces/falterWliame/Face_Mask_Detection/Mplus 7 0 Crack 235.md b/spaces/falterWliame/Face_Mask_Detection/Mplus 7 0 Crack 235.md deleted file mode 100644 index 2dec48dbddb0a9ef48213c71e947ec6a3c96e79e..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Mplus 7 0 Crack 235.md +++ /dev/null @@ -1,20 +0,0 @@ -

    Mplus 7 0 Crack 235


    Download 🗹 https://urlca.com/2uDdwa



- -by LK Muthén · Cited from 1277-70 The interaction is shown in the image above as a solid circle. ... 235. In this example, the two-level path analysis model shown in ...
    -
    -
    -

    diff --git a/spaces/fatiXbelha/sd/Brotato A Roguelite-Shooter Game with a Potato Hero - Download APK Now.md b/spaces/fatiXbelha/sd/Brotato A Roguelite-Shooter Game with a Potato Hero - Download APK Now.md deleted file mode 100644 index a45d9e9c546dc47c97e8ac9afa27c77f3b2cc682..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Brotato A Roguelite-Shooter Game with a Potato Hero - Download APK Now.md +++ /dev/null @@ -1,88 +0,0 @@ - -

    Introduction

    -

    Brotato for mobile apk is a top-down arena shooter roguelite game where you play as a potato wielding up to six weapons at a time to fight off hordes of aliens. The game is developed by Erabit Studios and is available for Android devices. The game has a humorous and cartoonish style, with colorful graphics and quirky sound effects. The game is inspired by Vampire Survivors, another roguelite-shooter game with similar mechanics.

    -

    brotato for mobile apk


    DOWNLOAD ===== https://urllie.com/2uNCsV



    -

    How to download and install brotato for mobile apk

    -

    If you want to play brotato for mobile apk on your Android device, you will need to follow these steps:

    -
      -
    1. Go to the APKCombo website or the Uptodown website and search for brotato.
    2. -
    3. Select the latest version of the game and tap on the download button.
    4. -
    5. You will get a file named Brotato.apk or Brotato.xapk. If you get the xapk file, you will need to use an app like APKCombo Installer or XAPK Installer to extract the apk file from it.
    6. -
    7. Before installing the apk file, you will need to enable the installation of apps from unknown sources on your device. To do this, go to Settings > Security > Unknown sources and toggle it on.
    8. -
    9. Locate the apk file on your device and tap on it to start the installation process.
    10. -
    11. Follow the instructions on the screen and wait for the installation to finish.
    12. -
    13. You can now launch the game from your app drawer or home screen and enjoy playing brotato for mobile apk.
    14. -
    -

    Features and benefits of brotato for mobile apk

    -

    Brotato for mobile apk is a fun and addictive game that offers many features and benefits for its players. Here are some of them:

    -
      -
    • Gameplay: The game is fast-paced and challenging, with each run lasting under 30 minutes. You have to survive waves of aliens that last from 20 to 90 seconds each, while collecting materials, experience, and items from the shop. You can choose from dozens of characters with different traits and abilities, such as one-handed, crazy, lucky, mage, and more. You can also choose from hundreds of items and weapons, such as flamethrowers, SMGs, rocket launchers, or sticks and stones. The game has auto-firing weapons by default, but you can also use manual aiming if you prefer. The game has five difficulty levels to suit your skills and preferences.
    • -
    • Graphics: The game has colorful and cartoonish graphics that create a humorous and lively atmosphere. The game has smooth animations and effects that enhance the gameplay experience. The game also has a variety of settings and environments to explore, such as forests, deserts, caves, cities, and more.
    • -
    • Customization: The game allows you to customize your potato character with different skins, hats, glasses, masks, and more. You can also unlock new characters by completing achievements or buying them with in-game currency. You can also customize your weapons with different skins, attachments, and upgrades.
    • -
    • Accessibility: The game has accessibility options that let you tweak the health, damage, and speed of enemies so that the difficulty is right for you. You can also adjust the sound effects, music volume, language, control sensitivity, auto-aiming option, vibration feedback, cloud storage option, and more.
    • -
    -

    Alternatives to brotato for mobile apk

    -

    If you like bro tato for mobile apk, you might also enjoy some other games in the same genre or with similar mechanics. Here are some alternatives to brotato for mobile apk that you can try:

    -
      -
    • Vampire Survivors: This is the game that inspired brotato for mobile apk. It is also a top-down arena shooter roguelite game where you play as a vampire fighting against zombies, werewolves, and other creatures of the night. You can choose from different classes, such as hunter, mage, warrior, and more. You can also collect and upgrade weapons, items, and skills. The game has a dark and Gothic style, with realistic graphics and sound effects. The game is available for Android and iOS devices.
    • -
    • Soul Knight: This is another top-down arena shooter roguelite game where you play as a knight exploring dungeons and fighting against aliens, robots, and monsters. You can choose from over 170 characters, each with unique abilities and skills. You can also collect and use over 270 weapons, such as swords, guns, bows, lasers, and more. The game has a pixelated and retro style, with colorful graphics and music. The game is available for Android and iOS devices.
    • -
    • Archero: This is a top-down action-adventure roguelite game where you play as an archer shooting arrows at enemies and obstacles. You can move around the map and dodge attacks, while aiming and firing your arrows. You can also unlock and upgrade different skills, weapons, and equipment. The game has a simple and minimalist style, with bright graphics and sound effects. The game is available for Android and iOS devices.
    • -
    -

    Conclusion

    -

    Brotato for mobile apk is a fun and addictive game that lets you play as a potato with six weapons fighting against aliens. The game has many features and benefits that make it enjoyable and challenging. The game has colorful and cartoonish graphics, fast-paced and varied gameplay, customization options, accessibility options, and more. The game is also easy to download and install on your Android device. If you are looking for a game that will make you laugh and keep you entertained, you should give brotato for mobile apk a try.

    -

    -

    FAQs

    -

    Here are some frequently asked questions and answers about brotato for mobile apk:

    -
      -
    1. Is brotato for mobile apk free to play?
    2. -

      Yes, brotato for mobile apk is free to play. However, the game contains ads and in-app purchases that can enhance your gameplay experience or remove ads.

      -
    3. How can I get more coins in brotato for mobile apk?
    4. -

      You can get more coins in brotato for mobile apk by playing the game regularly, completing achievements, watching ads, or buying them with real money.

      -
    5. How can I save my progress in brotato for mobile apk?
    6. -

      You can save your progress in brotato for mobile apk by using the cloud storage option in the settings menu. You will need to log in with your Google Play account to use this feature.

      -
    7. What are the minimum requirements to play brotato for mobile apk?
    8. -

      The minimum requirements to play brotato for mobile apk are Android 4.4 or higher, 2 GB of RAM, 100 MB of free storage space, and an internet connection.

      -
    9. Is brotato for mobile apk safe to download?
    10. -

      Yes, brotato for mobile apk is safe to download from reputable sources like APKCombo or Uptodown. However, you should always be careful when downloading apps from unknown sources and scan them with an antivirus app before installing them.

      -

    -
    -
    \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download Lonely Survivor Mod Apk 1.7.0 with Unlimited Money and God Mode.md b/spaces/fatiXbelha/sd/Download Lonely Survivor Mod Apk 1.7.0 with Unlimited Money and God Mode.md deleted file mode 100644 index 75a386497b65edf1934b81d3ac24ee3ab41de4af..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download Lonely Survivor Mod Apk 1.7.0 with Unlimited Money and God Mode.md +++ /dev/null @@ -1,117 +0,0 @@ - -

    Lonely Survivor 1.7.0 Mod Apk: A Roguelike Adventure Game with Unlimited Money and God Mode

    -

    Are you looking for a challenging and fun adventure game that will test your skills and reflexes? Do you want to enjoy unlimited money, god mode, and other features that will make your gaming experience more enjoyable? If yes, then you should try Lonely Survivor, a roguelike game that combines action, strategy, and exploration. In this article, we will tell you everything you need to know about Lonely Survivor, how to download and install the mod apk, how to play the game, and some tips and tricks that will help you become a better player.

    -

    lonely survivor 1.7.0 mod apk


    Download Zip --->>> https://urllie.com/2uNCT7



    -

    What is Lonely Survivor?

    -

    A brief introduction to the game and its features

    -

    Lonely Survivor is an adventure roguelike game developed by 5play. In the game, you play as a mage who has to fight against waves of enemies and bosses, using various skills and weapons. You can also scan and generate QR codes, and convert images to text with the app. The game has randomly generated levels, so each run is different and unpredictable. You can also customize your character with different outfits and accessories.

    -

    How to download and install the mod apk

    -

    If you want to enjoy Lonely Survivor with unlimited money, god mode, menu mod, and other features, you need to download and install the mod apk from a reliable source. Here are the steps you need to follow:

    -
      -
    1. Click on this link or to download the mod apk file.
    2. -
    3. Allow unknown sources in your device settings.
    4. -
    5. Locate the downloaded file in your file manager and tap on it.
    6. -
    7. Follow the instructions on the screen to install the mod apk.
    8. -
    9. Launch the game and enjoy!
    10. -
    -

    What are the benefits of using the mod apk

    -

    By using the mod apk, you can enjoy many benefits that will make your gaming experience more fun and easy. Some of these benefits are:

    -
      -
    • You can get unlimited money that you can use to buy skills, weapons, outfits, accessories, and more.
    • -
    • You can activate god mode that will make you invincible and immune to damage.
    • -
    • You can access the menu mod that will allow you to adjust various settings such as speed, damage, health, etc.
    • -
    • You can remove ads that may interrupt your gameplay.
    • -
    -

    How to play Lonely Survivor

    -

    The basic gameplay mechanics and controls

    -

    The gameplay of Lonely Survivor is simple but challenging. You have to move your character with the virtual joystick on the left side of the screen, and use your skills and weapons with the buttons on the right side of the screen. You have to avoid or destroy obstacles, collect coins and items, and defeat enemies and bosses. You can also interact with objects such as chests, barrels, crates, etc. by tapping on them. You have a health bar that shows your remaining life, and a mana bar that shows your available energy for using skills. You also have a score counter that shows your current points.

    -

    The different skills and weapons you can use

    -

    You can use different skills and weapons in Lonely Survivor to fight against your enemies. You can buy or upgrade them with coins in the shop. Some of these skills and weapons are:

    - Fireball: A powerful skill that shoots a ball of fire that explodes on impact, dealing damage to enemies and objects in a radius.

    -

    - Ice Spike: A skill that summons a spike of ice that pierces through enemies and objects, dealing damage and slowing them down.

    -

    -

    - Lightning Bolt: A skill that unleashes a bolt of lightning that strikes enemies and objects, dealing damage and stunning them.

    -

    - Sword: A weapon that allows you to slash enemies and objects with your blade, dealing damage and knocking them back.

    -

    - Bow: A weapon that allows you to shoot arrows at enemies and objects, dealing damage and piercing through them.

    -

    - Gun: A weapon that allows you to fire bullets at enemies and objects, dealing damage and causing explosions.

    -

    The enemies and bosses you will face

    -

    You will face different enemies and bosses in Lonely Survivor, each with their own abilities and behaviors. Some of these enemies and bosses are:

    -
      -
    • Zombies: The most common enemies in the game, they will chase you and try to bite you, dealing damage and infecting you.
    • -
    • Skeletons: The undead warriors that will attack you with swords or bows, dealing damage and blocking your attacks.
    • -
    • Spiders: The creepy crawlers that will jump at you and try to poison you, dealing damage and slowing you down.
    • -
    • Wolves: The wild beasts that will run at you and try to tear you apart, dealing damage and knocking you down.
    • -
    • Dragons: The legendary creatures that will fly over you and breathe fire or ice at you, dealing damage and destroying everything in their path.
    • -
    • Giant: The massive monster that will smash you with its fists or feet, dealing damage and creating shockwaves.
    • -
    -

    Tips and tricks for Lonely Survivor

    -

    How to upgrade your skills and weapons

    -

    You can upgrade your skills and weapons in Lonely Survivor by spending coins in the shop. Upgrading your skills and weapons will increase their damage, range, speed, cooldown, etc. You can also unlock new skills and weapons by completing achievements or scanning QR codes. You can access the shop and the achievements menu by tapping on the icons on the top right corner of the screen. You can also access the QR code scanner by tapping on the icon on the bottom right corner of the screen.

    -

    How to use the QR code scanner and the image to text converter

    -

    You can use the QR code scanner in Lonely Survivor to scan any QR code you find in the game or in real life. Scanning a QR code will give you a reward such as coins, items, skills, weapons, outfits, accessories, etc. You can also generate your own QR code by tapping on the icon on the bottom left corner of the screen. You can share your QR code with your friends or other players to give them rewards as well. You can also use the image to text converter in Lonely Survivor to convert any image into text. This can help you read signs, notes, books, etc. in the game or in real life. You can access the image to text converter by tapping on the icon on the bottom center of the screen.

    -

    How to survive longer and get higher scores

    -

    You can survive longer and get higher scores in Lonely Survivor by following these tips:

    -
      -
    • Use your skills and weapons wisely. Don't waste your mana or ammo on weak enemies or objects. Save them for stronger enemies or bosses.
    • -
    • Avoid unnecessary damage. Dodge or block enemy attacks, avoid obstacles, and heal yourself when needed.
    • -
    • Collect coins and items. Coins will help you buy or upgrade your skills and weapons. Items will give you various benefits such as health, mana, speed, shield, etc.
    • -
    • Explore the map. You may find hidden secrets, chests, barrels, crates, etc. that may contain coins, items, skills, weapons, outfits, accessories, etc.
    • -
    • Complete achievements. Achievements will give you rewards such as coins, items, skills, weapons, outfits, accessories, etc. They will also challenge you to improve your skills and strategies.
    • -
    -

    Conclusion

    -

    A summary of the main points and a call to action

    -

    Lonely Survivor is an adventure roguelike game that combines action, strategy, and exploration. You can download and install the mod apk to enjoy unlimited money, god mode, menu mod, and other features. You can play the game by moving your character with the virtual joystick on the left side of the screen, and using your skills and weapons with the buttons on the right side of the screen. You can use different skills and weapons such as fireball, ice spike, lightning bolt, sword, bow, gun, etc. You will face different enemies and bosses such as zombies, skeletons, spiders, wolves, dragons, giant, etc. You can also scan and generate QR codes, and convert images to text with the app. You can upgrade your skills and weapons with coins in the shop. You can also unlock new skills and weapons by completing achievements or scanning QR codes. You can survive longer and get higher scores by using your skills and weapons wisely, avoiding unnecessary damage, collecting coins and items, exploring the map, and completing achievements.

    -

    If you are looking for a challenging and fun adventure game that will test your skills and reflexes, you should try Lonely Survivor. You can download and install the mod apk from the links below and enjoy unlimited money, god mode, menu mod, and other features. You can also share your QR code with your friends or other players to give them rewards as well. Lonely Survivor is a game that will keep you entertained for hours. Download it now and start your adventure!

    -

    FAQs

    -

    Here are some frequently asked questions about Lonely Survivor:

    -
      -
    1. What is the latest version of Lonely Survivor?
    2. -

      The latest version of Lonely Survivor is 1.7.0, which was released on June 22, 2023.

      -
    3. Is Lonely Survivor free to play?
    4. -

      Yes, Lonely Survivor is free to play. However, you can buy or upgrade your skills and weapons with real money. You can also watch ads to get more coins or items.

      -
    5. Is Lonely Survivor offline or online?
    6. -

      Lonely Survivor is an offline game. You don't need an internet connection to play it. However, you need an internet connection to download or update the game, scan or generate QR codes, convert images to text, or share your QR code with others.

      -
    7. How can I contact the developer of Lonely Survivor?
    8. -

      You can contact the developer of Lonely Survivor by sending an email to 5play@gmail.com or by visiting their website . You can also follow them on Facebook or Twitter for the latest news and updates.

      -
    9. How can I report a bug or a problem in Lonely Survivor?
    10. -

      You can report a bug or a problem in Lonely Survivor by sending an email to 5play@gmail.com or by leaving a comment on their website . You can also rate and review the game on Google Play or App Store and give your feedback.

      -

    -
    -
    \ No newline at end of file diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download 7DS Grand Cross APK Global Version for Android and iOS.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download 7DS Grand Cross APK Global Version for Android and iOS.md deleted file mode 100644 index 4de910dde5c83325a431af9fd3a44728e2467a96..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download 7DS Grand Cross APK Global Version for Android and iOS.md +++ /dev/null @@ -1,154 +0,0 @@ -
    -

    7ds Grand Cross Apk Global: A Guide for Anime Fans

    -

    If you are a fan of the anime and manga series "Seven Deadly Sins", you might have heard of the mobile game adaptation called 7ds Grand Cross. This is an adventure RPG that lets you play as Meliodas, the leader of the Seven Deadly Sins, and other characters from the series. You can relive the story of the anime, enjoy stunning graphics and animations, and engage in strategic and fun battles.

    -

    7ds grand cross apk global


    Download Ziphttps://gohhs.com/2uPtGf



    -

    But what if you want to play the game on your Android device with different language options? That's where 7ds Grand Cross Apk Global comes in. This is a version of the game that is available on Android platforms with 13 language options such as Chinese, English, Korean, etc. You can download and install it easily on your device and enjoy the game with your preferred language.

    -

    In this article, we will guide you through everything you need to know about 7ds Grand Cross Apk Global. We will explain what it is, how to download and install it, how to play it, why you should play it, and what are some tips and tricks for playing it. By the end of this article, you will be ready to embark on an epic adventure with your favorite characters from "Seven Deadly Sins". Let's get started!

    -

    What is 7ds Grand Cross Apk Global?

    -

    A brief introduction to the game and its features

    -

    7ds Grand Cross Apk Global is a card-based RPG mobile game based on the famous anime and manga series " "Seven Deadly Sins" by Nakaba Suzuki. The game was developed by Netmarble and released in Japan in 2019, and later in other regions in 2020. The game has over 30 million downloads worldwide and has received positive reviews from critics and players alike. Some of the features of the game are:

    -
      -
    • You can collect over 200 characters from the "Seven Deadly Sins" universe and customize them with different costumes, furniture, and AR features.
    • -
    • You can relive the story of the anime with original voice dialogues from the seiyuus and cinematic effects.
    • -
    • You can explore various locations from the anime such as Britannia, Camelot, Liones, etc. and interact with NPCs and objects.
    • -
    • You can engage in strategic and fun battles with a card-based RPG system that allows you to combine cards and use ultimate moves.
    • -
    • You can participate in various modes such as story mode, PvP mode, guild mode, death match mode, etc. and earn rewards and rankings.
    • -
    • You can enjoy collaborations with other popular anime series such as Tensura, Attack on Titan, Re: Zero, etc. and get exclusive characters and items.
    • -
    -

    How to download and install the game on Android devices

    -

    If you want to play 7ds Grand Cross Apk Global on your Android device, you need to follow these steps:

    -
      -
    1. Go to the official website of the game [here] and choose your preferred language option.
    2. -
    3. Click on the "Download" button and wait for the apk file to be downloaded on your device.
    4. -
    5. Go to your device's settings and enable the "Unknown sources" option to allow the installation of apps from outside sources.
    6. -
    7. Locate the apk file on your device and tap on it to start the installation process.
    8. -
    9. Follow the instructions on the screen and wait for the installation to be completed.
    10. -
    11. Launch the game and enjoy!
    12. -
    -

    How to play the game and enjoy its content

    -

    Once you have installed the game on your device, you can start playing it by following these tips:

    -
      -
    • Create your account and choose your server. You can also link your account to Facebook or Google for data backup and recovery.
    • -
    • Choose your starter character from Meliodas, Elizabeth, Diane, Ban, King, Gowther, or Merlin. You can also get more characters by summoning them with gems or tickets.
    • -
    • Follow the tutorial and learn the basics of the game such as how to move around, how to use cards, how to fight enemies, etc.
    • -
    • Complete the story mode chapters and relive the events of the anime. You can also watch cutscenes and dialogues from the menu.
    • -
    • Explore the world map and visit different locations. You can also find hidden treasures, quests, secrets, etc.
    • -
    • Join a guild and cooperate with other players. You can also chat with them, send gifts, request help, etc.
    • -
    • Participate in various events and challenges such as boss battles, raids, tower trials, etc. and earn rewards and rankings.
    • -
    • Have fun!
    • -
    -

    Why should you play 7ds Grand Cross Apk Global?

    -

    The game is based on the popular anime and manga series "Seven Deadly Sins"

    -

    If you are a fan of "Seven Deadly Sins", you will love this game because it is faithful to the original source material. You can enjoy:

    -

    The story and characters of the game are faithful to the original

    -

    The game follows the plot of the anime from season 1 to season 4 (so far), covering all the major arcs such as The Holy Knight Saga, The Ten Commandments Saga, The Holy War Saga, etc. You can also experience some original stories that are exclusive to the game. You can interact with all your favorite characters from the series such as Meliodas, Elizabeth, Ban, Diane, King, Gowther, Merlin, Escanor, Hawk, etc. You can also meet new characters that are introduced in the game such as Lilia, Valenti, Shin, etc.

    -

    The game features original voice dialogues from the seiyuus

    -

    The game has a high-quality voice acting from the original seiyuus (voice actors) of the anime such as Kaji Yuki (Meliodas), Amamiya Sora (Elizabeth), Suzuki Tatsuhisa (Ban), Yuuki Aoi (Diane), Fukuyama Jun (King), Takagi Yuuhei (Gowther), Sakamoto Maaya (Merlin), Ono Yuuki (Escanor), Kuno Misaki (Hawk), etc. You can hear them speak their lines and express their emotions in the game. You can also choose from different language options such as Japanese, English, Korean, etc.

    -

    -

    The game has collaborated with other anime series such as Tensura, Attack on Titan, Re: Zero, etc.

    -

    The game has also featured crossover events with other popular anime series such as That Time I Got Reincarnated as a Slime (Tensura), Attack on Titan (Shingeki no Kyojin), Re: Zero − Starting Life in Another World (Re: Zero kara Hajimeru Isekai Seikatsu), etc. You can get exclusive characters and items from these series and enjoy special stories and quests. You can also see how the characters from different worlds interact with each other and have fun.

    -

    The game has stunning graphics and animations

    -

    Another reason to play this game is that it has amazing graphics and animations that will make you feel like you are watching the anime. You can admire:

    -

    The game uses 3D models and 3D animation for the characters and scenes

    -

    The game uses high-quality 3D models and 3D animation for the characters and scenes in the game. You can see the details and expressions of the characters and their movements. You can also zoom in and out and rotate the camera to view them from different angles. You can also change the settings to adjust the graphics quality according to your device's performance.

    -

    The game recreates the iconic moments from the anime with cinematic effects

    -

    The game also recreates the iconic moments from the anime with cinematic effects such as slow motion, zoom, blur, etc. You can relive the epic scenes such as Meliodas vs. The Ten Commandments, Escanor vs. Estarossa, Ban vs. Demon King, etc. You can also watch the opening and ending animations of the anime in the game.

    -

    The game has a high-quality soundtrack produced by Okabe Keiichi from MONACA and the music director from Nier series

    -

    The game also has a high-quality soundtrack produced by Okabe Keiichi from MONACA and the music director from Nier series. The soundtrack consists of original songs and instrumental tracks that match the mood and atmosphere of the game. You can listen to the songs such as "Nanatsu no Taizai", "Perfect Time", "Eiyuu-tachi", etc. You can also change the settings to adjust the sound volume and effects according to your preference.

    -

    The game has a strategic and fun combat system

    -

    The last reason to play this game is that it has a strategic and fun combat system that will challenge your skills and creativity. You can experience:

    -

    The game uses a card-based RPG system with skill synthesis

    -

The game uses a card-based RPG system with skill synthesis for its battles. You can choose up to four characters for your team and use their cards to attack, defend, heal, and more. You can also combine cards of the same skill and rank to create more powerful skills or ultimate moves, and bring support characters or items to assist you in battle.
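
As a rough illustration of the rank-up idea, here is a minimal, hypothetical sketch of merging two matching cards into a higher-rank card; the Card class, the three-rank cap, and the skill names are assumptions made for the example, not the game's actual implementation.

```python
from dataclasses import dataclass
from typing import Optional

MAX_RANK = 3  # assumption: three ranks, with rank 3 as the strongest

@dataclass(frozen=True)
class Card:
    skill: str  # e.g. "Full Counter" (name used only as an example)
    rank: int   # 1 = weakest, MAX_RANK = strongest

def merge(a: Card, b: Card) -> Optional[Card]:
    """Combine two cards of the same skill and rank into one card of the next rank."""
    if a.skill == b.skill and a.rank == b.rank and a.rank < MAX_RANK:
        return Card(skill=a.skill, rank=a.rank + 1)
    return None  # the pair cannot be merged

print(merge(Card("Full Counter", 1), Card("Full Counter", 1)))  # Card(skill='Full Counter', rank=2)
print(merge(Card("Full Counter", 1), Card("Disaster", 1)))      # None (different skills)
```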

    -

    The game allows you to combine cards and use ultimate moves to defeat enemies

    -

    The game also allows you to combine cards and use ultimate moves to defeat enemies. You can fill up your ultimate gauge by using cards or taking damage, and then unleash your ultimate move when it is full. You can also use skill synthesis to create more powerful ultimate moves or combo attacks with your teammates. You can see the spectacular animations of your ultimate moves such as Full Counter, Disaster, Sunshine, Infinity, etc.

    -

    The game lets you customize your characters with different costumes, furniture, and AR features

    -

    The game also lets you customize your characters with different costumes, furniture, and AR features. You can change your characters' outfits with various costumes such as school uniforms, swimsuits, casual clothes, etc. You can also decorate your tavern with different furniture such as tables, chairs, beds, etc. You can also use AR features to take photos or videos of your characters in real life.

    -

    What are some tips and tricks for playing 7ds Grand Cross Apk Global?

    -

    How to get more resources and rewards in the game

    -

    If you want to get more resources and rewards in the game, you should follow these tips:

    -

    Complete daily tasks and achievements

    -

You should complete daily tasks and achievements in the game to earn more gems, gold, stamina, tickets, etc. You can check your daily tasks and achievements from the menu and clear as many as you can. You can also earn extra rewards by logging in daily, watching ads, and inviting friends.

    -

    Participate in events and challenges

    -

You should also participate in events and challenges in the game to earn more resources and rewards. You can check the current and upcoming events and challenges from the menu and join as many as you can. They offer exclusive characters, items, costumes, and more, and you can earn additional rewards by completing event missions and placing well in the rankings.

    -

    Join a guild and cooperate with other players

    -

    You should also join a guild and cooperate with other players in the game to get more resources and rewards. You can join or create a guild from the menu and chat with other guild members, send gifts, request help, etc. You can also participate in guild activities such as guild boss battles, guild wars, guild donations, etc. and earn guild coins, guild points, guild chests, etc.

    -

    How to improve your characters and skills in the game

    -

    If you want to improve your characters and skills in the game, you should follow these tips:

    -

    Upgrade your cards and equipment

    -

    You should upgrade your cards and equipment in the game to improve your characters' stats and abilities. You can upgrade your cards by using gold and materials such as books, potions, chalices, etc. You can also awaken your cards by using awakening stones to unlock new skills and ultimate moves. You can upgrade your equipment by using gold and materials such as anvils, hammers, crystals, etc. You can also enhance your equipment by using enhancement stones to increase their substats.

    -

    Enhance your characters' stats and abilities

    -

You should also enhance your characters' stats and abilities in the game to improve their performance. You can enhance your characters' stats by using food items such as dishes, snacks, and drinks, and enhance their abilities by using cosmetic items such as costumes, hairstyles, and accessories. You can also use furniture items such as tables, chairs, and beds to increase their affection level and unlock more dialogues.

    -

    Unlock new skills and ultimate moves

    -

    You should also unlock new skills and ultimate moves in the game to improve your characters' combat potential. You can unlock new skills by awakening your cards or using skill books. You can also unlock new ultimate moves by increasing your characters' ultimate level or using ultimate books. You can also use association characters or link characters to boost your characters' stats and ultimate moves.

    -

    How to win battles and quests in the game

    -

    If you want to win battles and quests in the game, you should follow these tips:

    -

    Choose the right cards and strategy for each situation

    -

    You should choose the right cards and strategy for each situation in the game to win battles and quests. You should consider the type, rank, color, effect, cost, etc. of each card and use them wisely. You should also consider the synergy between your cards and your teammates' cards and use skill synthesis or combo attacks to create more powerful skills or ultimate moves.

    -

    Use elemental advantages and disadvantages

    -

You should also use elemental advantages and disadvantages in the game to win battles and quests. Consider the element of each character and enemy and match your team accordingly. There are five elements in the game: red (strength), green (speed), blue (vitality), purple (magic), and yellow (spirit). Red beats green, green beats blue, and blue beats red, while purple and yellow are strong against each other. Attack with the element that has the advantage over the enemy's element to deal more damage and take less damage in return.
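
To make the chart above concrete, here is a minimal sketch of how that advantage table could be expressed in code; the 1.5x and 0.75x multipliers are invented for the example and are not the game's actual numbers.

```python
# Assumed multipliers for illustration only; the real in-game values may differ.
ADVANTAGE = {
    ("red", "green"): 1.5,      # red (strength) beats green (speed)
    ("green", "blue"): 1.5,     # green beats blue (vitality)
    ("blue", "red"): 1.5,       # blue beats red
    ("purple", "yellow"): 1.5,  # purple (magic) and yellow (spirit) counter each other
    ("yellow", "purple"): 1.5,
}

def damage_multiplier(attacker: str, defender: str) -> float:
    """Return >1.0 when attacking with an advantage, <1.0 at a disadvantage, else 1.0."""
    if (attacker, defender) in ADVANTAGE:
        return ADVANTAGE[(attacker, defender)]
    if (defender, attacker) in ADVANTAGE:
        return 0.75  # attacking into an element that beats yours
    return 1.0

print(damage_multiplier("red", "green"))   # 1.5 (advantage)
print(damage_multiplier("green", "red"))   # 0.75 (disadvantage)
print(damage_multiplier("red", "purple"))  # 1.0 (neutral)
```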

    -

    Use support characters and items wisely

    -

    You should also use support characters and items wisely in the game to win battles and quests. You should consider the role, effect, and compatibility of each support character and item and use them accordingly. There are four roles of support characters in the game: attack, defense, heal, and utility. Each role has a different effect and function in battle. You should use the support character that matches your team's needs and strategy. You should also use items such as potions, food, equipment, etc. to boost your characters' stats, heal them, or give them special effects.

    -

    Conclusion

    -

    7ds Grand Cross Apk Global is a great game for anime fans who want to enjoy the story and characters of "Seven Deadly Sins" on their Android devices. The game has amazing graphics and animations, a strategic and fun combat system, and a lot of content and features to explore. The game is also easy to download and install, and has 13 language options to choose from. If you are looking for a new adventure RPG to play, you should give 7ds Grand Cross Apk Global a try. You will not regret it!

    -

    FAQs

    -

    Here are some frequently asked questions about 7ds Grand Cross Apk Global:

    -

    Q: Is 7ds Grand Cross Apk Global free to play?

    -

    A: Yes, 7ds Grand Cross Apk Global is free to play. You can download and install it without paying anything. However, the game also has some optional in-app purchases that can enhance your gaming experience.

    -

    Q: Is 7ds Grand Cross Apk Global safe to download and install?

    -

    A: Yes, 7ds Grand Cross Apk Global is safe to download and install. The game is developed by Netmarble, a reputable company that has created many popular games such as Marvel Future Fight, BTS World, King of Fighters All Star, etc. The game is also verified by Google Play Protect, which scans apps for malware and viruses.

    -

    Q: How much storage space does 7ds Grand Cross Apk Global require?

    -

    A: 7ds Grand Cross Apk Global requires about 3 GB of storage space on your device. You should also have enough free space for updates and additional data.

    -

    Q: Can I play 7ds Grand Cross Apk Global offline?

    -

    A: No, 7ds Grand Cross Apk Global requires an internet connection to play. You should have a stable and fast internet connection to enjoy the game without any lag or interruption.

    -

    Q: Can I play 7ds Grand Cross Apk Global with other players?

    -

    A: Yes, 7ds Grand Cross Apk Global has various multiplayer modes that allow you to play with other players. You can join a guild and cooperate with other guild members, compete with other players in PvP mode, team up with other players in death match mode, etc.

    -
    -
    \ No newline at end of file diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Beach Buggy Racing 2 MOD APK with Unlimited Money and Diamonds.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Beach Buggy Racing 2 MOD APK with Unlimited Money and Diamonds.md deleted file mode 100644 index 1a296a04ad4c88c5a54520ff3eececfb10014fc4..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Beach Buggy Racing 2 MOD APK with Unlimited Money and Diamonds.md +++ /dev/null @@ -1,81 +0,0 @@ - -

    Beach Buggy Racing 2 Hack Mod APK Unlimited Money and Diamonds

    -

    Are you a fan of kart racing games? Do you love to race on exotic tracks with crazy power-ups and wacky characters? If yes, then you should check out Beach Buggy Racing 2, one of the most popular racing games for mobile devices. And if you want to have more fun and excitement, you should try the hack mod APK that gives you unlimited money and diamonds, as well as access to all drivers, cars, power-ups, and tracks. In this article, we will tell you everything you need to know about Beach Buggy Racing 2 hack mod APK, including its features, how to download and install it, and some tips and tricks for playing the game.

    -

    Introduction

    -

    What is Beach Buggy Racing 2?

    -

    Beach Buggy Racing 2 is a sequel to the hit racing game Beach Buggy Racing, which introduced over 100 million international mobile players to console-style kart-racing with a playful offroad twist. In Beach Buggy Racing 2, you can join the Beach Buggy Racing League and compete against drivers and cars from around the world. You can race through Egyptian pyramids, dragon-infested castles, pirate ship wrecks, and experimental alien bio-labs. You can collect and upgrade an arsenal of fun and wacky power-ups, such as Chain Lightning, Donut Tires, Boost Juice, and Killer Bees. You can also recruit new drivers, each with their own unique special ability, and assemble a garage full of cars, from beach buggies to monster trucks to muscle cars.

    -

    beach buggy racing 2 hack mod apk unlimited money and diamonds


Download · https://gohhs.com/2uPmNR



    -

    What is the hack mod APK?

    -

    The hack mod APK is a modified version of the original game that gives you some advantages that are not available in the official version. For example, you can get unlimited money and diamonds, which are the premium currencies in the game. You can use them to buy new cars, upgrade power-ups, unlock tracks, and more. You can also unlock all drivers and cars without having to complete challenges or spend gems. You can also unlock all power-ups and tracks without having to level up or earn stars. Moreover, you can enjoy the game without any ads or root requirement.

    -

    Why use the hack mod APK?

    -

    You may wonder why you should use the hack mod APK instead of playing the original game. Well, there are several reasons why you may want to do so. First of all, you can save a lot of time and effort by getting unlimited money and diamonds. You don't have to grind for coins or gems or watch ads to get them. You can just buy whatever you want without any limitation. Second, you can have more fun and variety by unlocking all drivers and cars. You don't have to stick with the same character or vehicle for a long time. You can switch between different drivers and cars according to your preference or strategy. Third, you can experience more challenge and excitement by unlocking all power-ups and tracks. You don't have to wait for your level or stars to increase to access new power-ups or tracks. You can just choose any power-up or track that suits your mood or skill level.

    -

    Features of the hack mod APK

    -

    Unlimited money and diamonds

    -

    One of the main features of the hack mod APK is that it gives you unlimited money and diamonds. Money and diamonds are the two currencies in Beach Buggy Racing 2 that you can use to buy various items and upgrades. Money is the basic currency that you can earn by racing, completing challenges, or watching ads. Diamonds are the premium currency that you can buy with real money or get by completing achievements or daily missions. With unlimited money and diamonds, you can buy any car, power-up, track, or driver that you want without any restriction. You can also upgrade your power-ups to the maximum level and make them more powerful and effective. You can also buy extra tickets to enter tournaments and events and win more rewards.

    -

    All drivers and cars unlocked

    -

    Another feature of the hack mod APK is that it unlocks all drivers and cars for you. Drivers and cars are the two main components of Beach Buggy Racing 2 that determine your performance and style in the game. Drivers are the characters that you can choose to race with, each with their own special ability that can give you an edge in the race. Cars are the vehicles that you can drive, each with their own stats and attributes that affect your speed, acceleration, handling, and durability. There are over 40 drivers and over 40 cars in Beach Buggy Racing 2, each with their own unique design and personality. With the hack mod APK, you can unlock all drivers and cars without having to complete challenges or spend gems. You can try out different combinations of drivers and cars and find your favorite one.

    -

    All power-ups and tracks unlocked

    -

    A third feature of the hack mod APK is that it unlocks all power-ups and tracks for you. Power-ups and tracks are the two elements of Beach Buggy Racing 2 that make the game fun and exciting. Power-ups are the items that you can collect and use during the race, such as rockets, fireballs, oil slicks, shields, magnets, and more. Power-ups can help you attack your opponents, defend yourself, or boost your speed. Tracks are the locations where you race, such as beaches, jungles, volcanoes, castles, pyramids, and more. Tracks have different layouts, obstacles, shortcuts, and hazards that challenge your driving skills. There are over 40 power-ups and over 40 tracks in Beach Buggy Racing 2, each with their own effects and features. With the hack mod APK, you can unlock all power-ups and tracks without having to level up or earn stars. You can explore all the power-ups and tracks and discover their secrets.

    -

    No ads and no root required

    -

    A fourth feature of the hack mod APK is that it removes all ads and does not require root access. Ads are the annoying pop-ups that appear in between races or menus that interrupt your gameplay and waste your time. Ads are also a source of income for the developers of the game, but they can be frustrating for some players who just want to enjoy the game without any distraction. Root access is a permission that allows you to modify your device's system settings and files. Root access is sometimes required for some hack mod APKs to work properly, but it can also void your warranty or damage your device if done incorrectly. With the hack mod APK, you don't have to worry about any ads or root access. You can play the game smoothly and safely without any interruption or risk.

    -

    How to download and install the hack mod APK

    -

    Step 1: Enable unknown sources

    -

    The first step to download and install the hack mod APK is to enable unknown sources on your device. Unknown sources are sources other than the Google Play Store or other official app stores that offer apps or files for download. The hack mod APK is an unknown source because it is not available on any official app store. To enable unknown sources on your device, go to Settings > Security > Unknown Sources (or similar option) and toggle it on.

    -

    Step 2: Download the hack mod APK file

    -

    The second step to download and install the hack mod APK is to download the hack mod APK file from a reliable website. There are many websites that offer hack mod APKs for various games, but not all of them are trustworthy or safe. Some websites may contain viruses or malware that can harm your device or steal your personal information. Some websites may also provide fake or outdated hack mod APKs that do not work or cause problems in the game. To download the hack mod APK file for Beach Buggy Racing 2, go to [this website] (or similar website) and click on the download button.

    -


    -

    Step 3: Install the hack mod APK file

    -

    The third step to download and install the hack mod APK is to install the hack mod APK file on your device. To install the hack mod APK file on your device, go to your file manager and locate the downloaded file. Tap on the file and follow the instructions on the screen to install it. You may need to grant some permissions to the app to allow it to access your device's resources and functions.

    -

    Step 4: Launch the game and enjoy

    -

    The fourth and final step to download and install the hack mod APK is to launch the game and enjoy. To launch the game, go to your app drawer and tap on the Beach Buggy Racing 2 icon. You should see a message that says "Modded by [name of the modder] (or similar message)" on the loading screen. This means that the hack mod APK is working properly and you can enjoy all its features. You can check your money and diamonds balance, as well as your drivers, cars, power-ups, and tracks in the game menu. You can also start racing and have fun with unlimited money and diamonds, all drivers and cars unlocked, all power-ups and tracks unlocked, no ads, and no root required.

    -

    Tips and tricks for playing Beach Buggy Racing 2

    -

    Master the drift and powerslide

    -

    One of the tips and tricks for playing Beach Buggy Racing 2 is to master the drift and powerslide. Drift and powerslide are two techniques that allow you to turn corners faster and more smoothly, as well as fill up your boost meter. To drift, you need to tap and hold the brake button while turning. To powerslide, you need to tap and release the brake button quickly while turning. Both techniques will make your car skid sideways and create sparks behind your wheels. The longer you drift or powerslide, the more boost you will get. You can use boost to speed up your car by tapping the boost button on the right side of the screen.

    -

    Use your special ability wisely

    -

    Another tip and trick for playing Beach Buggy Racing 2 is to use your special ability wisely. Special ability is a unique power that each driver has that can give you an advantage in the race. For example, Rez has a special ability called Electro Blast that can zap nearby opponents with lightning bolts. McSkelly has a special ability called Bone Shaker that can shake off any power-ups that hit him. To use your special ability, you need to fill up your special meter by collecting coins or hitting opponents with power-ups. Once your special meter is full, you can tap the special button on the left side of the screen to activate your special ability. You should use your special ability at the right time and place to maximize its effect.

    -

    Collect and upgrade power-ups

    -

    A third tip and trick for playing Beach Buggy Racing 2 is to collect and upgrade power-ups. Power-ups are items that you can collect and use during the race, such as rockets, fireballs, oil slicks, shields, magnets, and more. Power-ups can help you attack your opponents, defend yourself, or boost your speed. You can collect power-ups by driving over them on the track or by opening chests after each race. You can also upgrade power-ups by spending coins or gems in the power-up shop. Upgrading power-ups will make them more powerful and effective, such as increasing their damage, range, duration, or number.

    -

    Customize your ride and deck

    -

    A fourth tip and trick for playing Beach Buggy Racing 2 is to customize your ride and deck. Ride is your car that you can drive in the game, while deck is your set of power-ups that you can use in the game. You can customize your ride by changing its color, paint job, wheels, spoiler, engine, exhaust, or stickers. You can also customize your deck by choosing which power-ups you want to use in each race. You can have up to four power-ups in your deck at a time. You can customize your ride and deck by spending coins or gems in the garage or in the deck menu.

    -

    Play against the world and challenge yourself

-

A fifth tip and trick for playing Beach Buggy Racing 2 is to play against the world and challenge yourself. You can race online against hundreds of other players and win prizes. You can also play adventure mode where you can explore different worlds and complete various challenges. You can also play quick race mode where you can race on any track with any driver and car. You can also play split screen mode where you can race with up to four friends on the same device. You can also play custom race mode where you can create your own race with your own rules and settings. You can also play special events where you can participate in seasonal or themed races and win exclusive rewards.

    -

    Conclusion

    -

    Beach Buggy Racing 2 is a fun and addictive racing game that you can play on your mobile device. It offers a lot of features and content that will keep you entertained for hours. However, if you want to have more fun and excitement, you should try the hack mod APK that gives you unlimited money and diamonds, as well as access to all drivers, cars, power-ups, and tracks. The hack mod APK is easy to download and install, and it does not require any ads or root access. It also gives you some tips and tricks for playing the game better. With the hack mod APK, you can enjoy Beach Buggy Racing 2 to the fullest and become the best racer in the world.

    -

    FAQs

    -

    Here are some frequently asked questions about Beach Buggy Racing 2 hack mod APK:

    -

    Is the hack mod APK safe to use?

    -

    Yes, the hack mod APK is safe to use as long as you download it from a reliable website and follow the instructions carefully. The hack mod APK does not contain any viruses or malware that can harm your device or steal your personal information. It also does not require any root access that can void your warranty or damage your device. However, you should always be careful when downloading and installing any files from unknown sources and scan them with a reputable antivirus software before opening them.

    -

    Will the hack mod APK affect my game progress or account?

    -

    No, the hack mod APK will not affect your game progress or account in any negative way. The hack mod APK works independently from the original game and does not interfere with its data or settings. You can still play the original game normally without any problems or conflicts. You can also switch between the original game and the hack mod APK anytime you want without losing any progress or data. However, you should note that the hack mod APK may not be compatible with some features or updates of the original game, such as cloud saving, leaderboards, achievements, or online multiplayer.

    -

    Can I use the hack mod APK on other devices or platforms?

    -

    No, the hack mod APK is only designed for Android devices and platforms. It will not work on iOS devices or platforms, such as iPhones, iPads, or Macs. It will also not work on Windows devices or platforms, such as PCs, laptops, or tablets. It will also not work on other devices or platforms that are not compatible with Android applications or files. If you want to play Beach Buggy Racing 2 on other devices or platforms, you will have to download and install the official version of the game from the respective app stores.

    -

    How can I update the hack mod APK?

    -

    The hack mod APK is not an official version of the game and it does not receive regular updates from the developers of the game. Therefore, you will have to check for updates manually from the website where you downloaded the hack mod APK. If there is a new version of the hack mod APK available, you will have to download and install it again following the same steps as before. However, you should note that updating the hack mod APK may cause some issues or errors in the game, such as crashes, glitches, or compatibility problems.

    -

    Where can I get more information or support for the hack mod APK?

    -

    If you have any questions or problems regarding the hack mod APK, you can contact the website where you downloaded it from or the person who created it. They may be able to provide you with more information or support for the hack mod APK. However, you should note that they are not affiliated with or endorsed by Vector Unit, the developers of Beach Buggy Racing 2. Therefore, they may not be able to answer all your questions or solve all your problems related to the game itself.

    -
    -
    \ No newline at end of file diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Word Search Generator and Print Fun Puzzles for Kids and Adults.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Word Search Generator and Print Fun Puzzles for Kids and Adults.md deleted file mode 100644 index 61114c982cb67f1da694d81c329b7292c5596760..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Word Search Generator and Print Fun Puzzles for Kids and Adults.md +++ /dev/null @@ -1,96 +0,0 @@ -
    -

    Download Word Search Generator: How to Create Fun and Educational Puzzles

    -

    Do you love word search puzzles? Do you want to create your own puzzles for yourself, your friends, or your students? If so, you might want to download a word search generator. A word search generator is a tool that lets you create customized word search puzzles on any topic you like. In this article, we will show you how to download a word search generator, how to use it effectively, and why it is a great way to have fun and learn new words.

    -

    download word search generator


Download · https://gohhs.com/2uPu0W



    -

    What is a word search generator?

    -

    A tool that lets you create your own word search puzzles

    -

    A word search generator is a software program or an online application that allows you to make your own word search puzzles. You can enter your own list of words, choose the size and shape of the grid, and customize the font and color of the letters. The word search generator will then create a puzzle for you, which you can preview, print, or save as a PDF file.
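
To give a sense of what such a tool does under the hood, here is a minimal sketch of a word search generator in Python; the grid size, allowed directions, and random fill are simplified assumptions, and a real generator would add more robust handling for words that do not fit.

```python
import random
import string

def make_word_search(words, size=12, directions=((0, 1), (1, 0), (1, 1))):
    """Place words in a size x size grid and fill the rest with random letters."""
    grid = [[None] * size for _ in range(size)]

    def try_place(word):
        for _ in range(200):  # random placement attempts before giving up
            dr, dc = random.choice(directions)
            # start positions that keep every letter of the word inside the grid
            r_lo, r_hi = max(0, -dr * (len(word) - 1)), size - max(0, dr * (len(word) - 1))
            c_lo, c_hi = max(0, -dc * (len(word) - 1)), size - max(0, dc * (len(word) - 1))
            if r_lo >= r_hi or c_lo >= c_hi:
                continue  # the word is too long for this direction
            row, col = random.randrange(r_lo, r_hi), random.randrange(c_lo, c_hi)
            cells = [(row + dr * i, col + dc * i) for i in range(len(word))]
            # allow words to cross only where the letters already match
            if all(grid[r][c] in (None, word[i]) for i, (r, c) in enumerate(cells)):
                for i, (r, c) in enumerate(cells):
                    grid[r][c] = word[i]
                return True
        return False

    for word in sorted(words, key=len, reverse=True):  # place the longest words first
        if not try_place(word.upper()):
            raise ValueError(f"could not place {word!r}; try a larger grid")

    # fill the remaining empty cells with random letters
    return [[ch or random.choice(string.ascii_uppercase) for ch in row] for row in grid]

puzzle = make_word_search(["beach", "puzzle", "search", "letters"])
print("\n".join(" ".join(row) for row in puzzle))
```

A website-based generator wraps the same idea in a form: you type the word list, pick the grid options, and it renders the finished grid for you to print or save.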

    -

    The benefits of using a word search generator

    -

    It's fun and easy to use

    -

    One of the main benefits of using a word search generator is that it is fun and easy to use. You can create puzzles in minutes, without any special skills or knowledge. You can also experiment with different settings and options, and see the results instantly. You can make puzzles for yourself, or share them with others. You can also use them as gifts, invitations, or party games.

    -

    It's customizable and flexible

    -

    Another benefit of using a word search generator is that it is customizable and flexible. You can create puzzles on any theme or topic you like, such as animals, sports, movies, or holidays. You can also choose the level of difficulty, the number of words, and the direction of the words. You can make puzzles as simple or as complex as you want. You can also adjust the appearance of the puzzle, such as the font size, color, and style.

    -

    It's educational and challenging

    -

    A third benefit of using a word search generator is that it is educational and challenging. Word search puzzles are a great way to improve your vocabulary, spelling, and concentration skills. They can also help you learn new words and concepts related to your theme or topic. Word search puzzles can also challenge your brain and keep it sharp. They can stimulate your memory, logic, and problem-solving abilities.

    -

    How to download a word search generator?

    -

    Choose a reliable and reputable website

    -

    Some examples of websites that offer word search generators

    -

The first step to download a word search generator is to choose a reliable and reputable website that offers this service. There are many websites that provide word search generators, but not all of them are trustworthy or high-quality. Some websites may have viruses, malware, or pop-up ads that can harm your computer or device. Some websites may also have limited features, poor design or functionality, or charge a fee for downloading or using their word search generators. To avoid these issues, you should choose a website that is well-known, reputable, and safe. You should also check the reviews and ratings of the website, and see what other users have to say about their experience. You should also look for a website that offers a variety of features, options, and templates for your word search puzzles. You should also make sure that the website is free or affordable, and that it does not require you to sign up or register. Here are some examples of websites that offer word search generators:

- Canva: A free online design tool that lets you create beautiful word search puzzles with hundreds of templates, fonts, colors, and icons. You can also print or share your puzzles online.

- Ahrefs: A free keyword generator tool that helps you find thousands of keyword ideas for your word search puzzles. You can also see key SEO metrics and analyze the competition for each keyword.

- The Word Search: A free word search maker that allows you to create custom puzzles with your own words and settings. You can also browse and play thousands of puzzles created by other users.

    Follow the instructions on the website

    -

    How to enter your word list and customize your puzzle

    -

    The next step to download a word search generator is to follow the instructions on the website you have chosen. Each website may have slightly different steps, but the general process is similar. You will need to enter your word list and customize your puzzle according to your preferences. To enter your word list, you will need to type or paste the words you want to include in your puzzle. You can use any words you like, as long as they are relevant to your theme or topic. You can also use phrases or sentences, but they may be harder to fit in the grid. You should also make sure that your words are spelled correctly and do not contain any special characters or symbols. To customize your puzzle, you will need to choose the size and shape of the grid, the direction of the words, and the appearance of the letters. You can make the grid as small or as large as you want, depending on how many words you have and how difficult you want the puzzle to be. You can also choose whether you want the words to be horizontal, vertical, diagonal, forward, backward, or a combination of these. You can also change the font size, style, and color of the letters, as well as the background color of the grid.

    -


    -

    How to preview and print your puzzle

    -

    The final step to download a word search generator is to preview and print your puzzle. Once you have entered your word list and customized your puzzle, you will be able to see how it looks on the screen. You can check if everything is correct and if you are satisfied with the result. You can also make any changes or adjustments if needed. To print your puzzle, you will need to click on the print button or icon on the website. You will then be able to choose your printer settings and options, such as paper size, orientation, margins, and quality. You will also be able to save your puzzle as a PDF file if you want to print it later or share it online.
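
If you build puzzles programmatically with a sketch like the one shown earlier instead of a website, "printing" can be as simple as writing the grid and the word list to a text file and printing that; this snippet assumes the make_word_search function defined above.

```python
words = ["beach", "puzzle", "search", "letters"]
puzzle = make_word_search(words)

with open("puzzle.txt", "w") as f:
    f.write("\n".join(" ".join(row) for row in puzzle))  # the letter grid
    f.write("\n\nFind these words:\n")
    f.write("\n".join(word.upper() for word in words))   # the word list to find
```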

    -

    How to use a word search generator effectively?

    -

    Choose a suitable theme or topic for your puzzle

    -

    Some tips on how to select a good theme or topic

    -

One of the most important aspects of using a word search generator effectively is choosing a suitable theme or topic for your puzzle. Your theme or topic should be relevant, interesting, and appropriate for your audience and purpose. It should also be specific enough to narrow down your word list and avoid confusion. Here are some tips on how to select a good theme or topic for your puzzle:

- Think about who will be playing your puzzle and what they like or need. For example, if you are making a puzzle for kids, you might want to choose a theme that is fun, colorful, and educational. If you are making a puzzle for adults, you might want to choose a theme that is challenging, engaging, and informative.

- Think about what kind of message or goal you want to convey with your puzzle. For example, if you are making a puzzle for a birthday party, you might want to choose a theme that is festive, personal, and celebratory. If you are making a puzzle for a class project, you might want to choose a theme that is related to the subject matter, curriculum, and learning objectives.

- Think about what kind of words you want to use in your puzzle. For example, if you are making a puzzle for beginners, you might want to choose words that are easy, common, and familiar. If you are making a puzzle for advanced learners, you might want to choose words that are difficult, rare, and unfamiliar.

- Think about how many words you want to use in your puzzle. For example, if you are making a puzzle for a short time, you might want to use fewer words that are longer and more complex. If you are making a puzzle for a long time, you might want to use more words that are shorter and simpler.

    Choose appropriate words for your puzzle

    -

    Some tips on how to select good words for your puzzle

    -

Another important aspect of using a word search generator effectively is choosing appropriate words for your puzzle. Your words should be relevant, interesting, and appropriate for your theme or topic. They should also be clear, accurate, and consistent. Here are some tips on how to select good words for your puzzle:

- Use words that are related to your theme or topic. For example, if your theme is animals, you might want to use words that are names of animals, their habitats, their characteristics, or their sounds.

- Use words that are suitable for your audience and purpose. For example, if your audience is kids, you might want to use words that are simple, fun, and positive. If your purpose is to teach, you might want to use words that are informative, relevant, and challenging.

- Use words that are spelled correctly and do not contain any special characters or symbols. For example, avoid using words that have apostrophes, hyphens, or accents. These can make the puzzle harder to read and solve.

- Use words that are consistent in length and difficulty. For example, avoid using words that are too long or too short compared to the rest of the words. Also avoid using words that are too easy or too hard compared to the rest of the words.

    Choose the right level of difficulty for your puzzle

    -

    Some tips on how to adjust the level of difficulty for your puzzle

    -

The final important aspect of using a word search generator effectively is choosing the right level of difficulty for your puzzle. Your level of difficulty should be appropriate for your audience and purpose. It should also be balanced and varied. Here are some tips on how to adjust the level of difficulty for your puzzle:

- Use the size and shape of the grid to control the level of difficulty. For example, a larger grid with more letters will make the puzzle harder than a smaller grid with fewer letters. A square or rectangular grid will make the puzzle easier than a circular or irregular grid.

- Use the direction of the words to control the level of difficulty. For example, horizontal and vertical words will make the puzzle easier than diagonal or backward words. A combination of different directions will make the puzzle harder than a single direction.

- Use the number and length of the words to control the level of difficulty. For example, fewer words with longer lengths will make the puzzle harder than more words with shorter lengths. A variation of different lengths will make the puzzle harder than a uniform length.
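
Tying these knobs back to the earlier sketch, grid size and the set of allowed directions are exactly the parameters you would expose, so an easy and a hard configuration might look like this (the specific sizes, words, and directions are arbitrary examples).

```python
# Easy: a small grid with forward horizontal and vertical words only.
easy = make_word_search(["cat", "dog", "bird", "fish"], size=8,
                        directions=((0, 1), (1, 0)))

# Hard: a larger grid that also allows diagonal and backward words.
hard = make_word_search(["elephant", "crocodile", "butterfly", "kangaroo"], size=15,
                        directions=((0, 1), (1, 0), (1, 1), (0, -1), (-1, 0), (-1, -1)))
```
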

    Conclusion

    -

    Summarize the main points of the article

    -

    In conclusion, downloading a word search generator is a great way to create fun and educational puzzles on any topic you like. You can use a word search generator to enter your own word list, customize your puzzle settings, and print or share your puzzle online. You can also use a word search generator effectively by choosing a suitable theme or topic, appropriate words, and the right level of difficulty for your puzzle.

    -

    Provide a call to action for the reader

    -

    If you want to download a word search generator and start making your own puzzles today, check out one of the websites we mentioned above. You will be amazed by how easy and enjoyable it is to create your own word search puzzles. You will also be able to improve your vocabulary, spelling, and concentration skills while having fun.

    -

    FAQs

    -

    What is a word search generator?

    -

    A word search generator is a tool that lets you create your own word search puzzles on any topic you like.

    -

    How do I download a word search generator?

    -

    You can download a word search generator by choosing a reliable and reputable website that offers this service, and following the instructions on the website.

    -

    How do I use a word search generator effectively?

    -

    You can use a word search generator effectively by choosing a suitable theme or topic, appropriate words, and the right level of difficulty for your puzzle.

    -

    What are the benefits of using a word search generator?

    -

    The benefits of using a word search generator are that it is fun and easy to use, customizable and flexible, and educational and challenging.

    -

    Where can I find more word search puzzles to play?

    -

    You can find more word search puzzles to play by browsing the websites that offer word search generators, or by searching online for other websites that have word search puzzles on various topics and levels.

    -
    -
    \ No newline at end of file diff --git a/spaces/fffiloni/Image-to-MusicGen/tests/models/test_musicgen.py b/spaces/fffiloni/Image-to-MusicGen/tests/models/test_musicgen.py deleted file mode 100644 index 53eff4405ab7de18e0ae18df8c8f9959a1c9e031..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/Image-to-MusicGen/tests/models/test_musicgen.py +++ /dev/null @@ -1,50 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import pytest -import torch - -from audiocraft.models import MusicGen - - -class TestSEANetModel: - def get_musicgen(self): - mg = MusicGen.get_pretrained(name='debug', device='cpu') - mg.set_generation_params(duration=2.0) - return mg - - def test_base(self): - mg = self.get_musicgen() - assert mg.frame_rate == 25 - assert mg.sample_rate == 32000 - assert mg.audio_channels == 1 - - def test_generate_unconditional(self): - mg = self.get_musicgen() - wav = mg.generate_unconditional(3) - assert list(wav.shape) == [3, 1, 64000] - - def test_generate_continuation(self): - mg = self.get_musicgen() - prompt = torch.randn(3, 1, 32000) - wav = mg.generate_continuation(prompt, 32000) - assert list(wav.shape) == [3, 1, 64000] - - prompt = torch.randn(2, 1, 32000) - wav = mg.generate_continuation( - prompt, 32000, ['youpi', 'lapin dort']) - assert list(wav.shape) == [2, 1, 64000] - - prompt = torch.randn(2, 1, 32000) - with pytest.raises(AssertionError): - wav = mg.generate_continuation( - prompt, 32000, ['youpi', 'lapin dort', 'one too many']) - - def test_generate(self): - mg = self.get_musicgen() - wav = mg.generate( - ['youpi', 'lapin dort']) - assert list(wav.shape) == [2, 1, 64000] diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/socket.io/node_modules/ms/index.js b/spaces/fffiloni/controlnet-animation-doodle/node_modules/socket.io/node_modules/ms/index.js deleted file mode 100644 index c4498bcc212589664a5fe0d45e5908b174ab0a37..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/socket.io/node_modules/ms/index.js +++ /dev/null @@ -1,162 +0,0 @@ -/** - * Helpers. - */ - -var s = 1000; -var m = s * 60; -var h = m * 60; -var d = h * 24; -var w = d * 7; -var y = d * 365.25; - -/** - * Parse or format the given `val`. - * - * Options: - * - * - `long` verbose formatting [false] - * - * @param {String|Number} val - * @param {Object} [options] - * @throws {Error} throw an error if val is not a non-empty string or a number - * @return {String|Number} - * @api public - */ - -module.exports = function(val, options) { - options = options || {}; - var type = typeof val; - if (type === 'string' && val.length > 0) { - return parse(val); - } else if (type === 'number' && isFinite(val)) { - return options.long ? fmtLong(val) : fmtShort(val); - } - throw new Error( - 'val is not a non-empty string or a valid number. val=' + - JSON.stringify(val) - ); -}; - -/** - * Parse the given `str` and return milliseconds. 
- * - * @param {String} str - * @return {Number} - * @api private - */ - -function parse(str) { - str = String(str); - if (str.length > 100) { - return; - } - var match = /^(-?(?:\d+)?\.?\d+) *(milliseconds?|msecs?|ms|seconds?|secs?|s|minutes?|mins?|m|hours?|hrs?|h|days?|d|weeks?|w|years?|yrs?|y)?$/i.exec( - str - ); - if (!match) { - return; - } - var n = parseFloat(match[1]); - var type = (match[2] || 'ms').toLowerCase(); - switch (type) { - case 'years': - case 'year': - case 'yrs': - case 'yr': - case 'y': - return n * y; - case 'weeks': - case 'week': - case 'w': - return n * w; - case 'days': - case 'day': - case 'd': - return n * d; - case 'hours': - case 'hour': - case 'hrs': - case 'hr': - case 'h': - return n * h; - case 'minutes': - case 'minute': - case 'mins': - case 'min': - case 'm': - return n * m; - case 'seconds': - case 'second': - case 'secs': - case 'sec': - case 's': - return n * s; - case 'milliseconds': - case 'millisecond': - case 'msecs': - case 'msec': - case 'ms': - return n; - default: - return undefined; - } -} - -/** - * Short format for `ms`. - * - * @param {Number} ms - * @return {String} - * @api private - */ - -function fmtShort(ms) { - var msAbs = Math.abs(ms); - if (msAbs >= d) { - return Math.round(ms / d) + 'd'; - } - if (msAbs >= h) { - return Math.round(ms / h) + 'h'; - } - if (msAbs >= m) { - return Math.round(ms / m) + 'm'; - } - if (msAbs >= s) { - return Math.round(ms / s) + 's'; - } - return ms + 'ms'; -} - -/** - * Long format for `ms`. - * - * @param {Number} ms - * @return {String} - * @api private - */ - -function fmtLong(ms) { - var msAbs = Math.abs(ms); - if (msAbs >= d) { - return plural(ms, msAbs, d, 'day'); - } - if (msAbs >= h) { - return plural(ms, msAbs, h, 'hour'); - } - if (msAbs >= m) { - return plural(ms, msAbs, m, 'minute'); - } - if (msAbs >= s) { - return plural(ms, msAbs, s, 'second'); - } - return ms + ' ms'; -} - -/** - * Pluralization helper. - */ - -function plural(ms, msAbs, n, name) { - var isPlural = msAbs >= n * 1.5; - return Math.round(ms / n) + ' ' + name + (isPlural ? 's' : ''); -} diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/ws/lib/websocket-server.js b/spaces/fffiloni/controlnet-animation-doodle/node_modules/ws/lib/websocket-server.js deleted file mode 100644 index bac30eb3301f75a686cf194a1cd1b9f9f4b2430b..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/ws/lib/websocket-server.js +++ /dev/null @@ -1,535 +0,0 @@ -/* eslint no-unused-vars: ["error", { "varsIgnorePattern": "^net|tls|https$" }] */ - -'use strict'; - -const EventEmitter = require('events'); -const http = require('http'); -const https = require('https'); -const net = require('net'); -const tls = require('tls'); -const { createHash } = require('crypto'); - -const extension = require('./extension'); -const PerMessageDeflate = require('./permessage-deflate'); -const subprotocol = require('./subprotocol'); -const WebSocket = require('./websocket'); -const { GUID, kWebSocket } = require('./constants'); - -const keyRegex = /^[+/0-9A-Za-z]{22}==$/; - -const RUNNING = 0; -const CLOSING = 1; -const CLOSED = 2; - -/** - * Class representing a WebSocket server. - * - * @extends EventEmitter - */ -class WebSocketServer extends EventEmitter { - /** - * Create a `WebSocketServer` instance. 
- * - * @param {Object} options Configuration options - * @param {Number} [options.backlog=511] The maximum length of the queue of - * pending connections - * @param {Boolean} [options.clientTracking=true] Specifies whether or not to - * track clients - * @param {Function} [options.handleProtocols] A hook to handle protocols - * @param {String} [options.host] The hostname where to bind the server - * @param {Number} [options.maxPayload=104857600] The maximum allowed message - * size - * @param {Boolean} [options.noServer=false] Enable no server mode - * @param {String} [options.path] Accept only connections matching this path - * @param {(Boolean|Object)} [options.perMessageDeflate=false] Enable/disable - * permessage-deflate - * @param {Number} [options.port] The port where to bind the server - * @param {(http.Server|https.Server)} [options.server] A pre-created HTTP/S - * server to use - * @param {Boolean} [options.skipUTF8Validation=false] Specifies whether or - * not to skip UTF-8 validation for text and close messages - * @param {Function} [options.verifyClient] A hook to reject connections - * @param {Function} [options.WebSocket=WebSocket] Specifies the `WebSocket` - * class to use. It must be the `WebSocket` class or class that extends it - * @param {Function} [callback] A listener for the `listening` event - */ - constructor(options, callback) { - super(); - - options = { - maxPayload: 100 * 1024 * 1024, - skipUTF8Validation: false, - perMessageDeflate: false, - handleProtocols: null, - clientTracking: true, - verifyClient: null, - noServer: false, - backlog: null, // use default (511 as implemented in net.js) - server: null, - host: null, - path: null, - port: null, - WebSocket, - ...options - }; - - if ( - (options.port == null && !options.server && !options.noServer) || - (options.port != null && (options.server || options.noServer)) || - (options.server && options.noServer) - ) { - throw new TypeError( - 'One and only one of the "port", "server", or "noServer" options ' + - 'must be specified' - ); - } - - if (options.port != null) { - this._server = http.createServer((req, res) => { - const body = http.STATUS_CODES[426]; - - res.writeHead(426, { - 'Content-Length': body.length, - 'Content-Type': 'text/plain' - }); - res.end(body); - }); - this._server.listen( - options.port, - options.host, - options.backlog, - callback - ); - } else if (options.server) { - this._server = options.server; - } - - if (this._server) { - const emitConnection = this.emit.bind(this, 'connection'); - - this._removeListeners = addListeners(this._server, { - listening: this.emit.bind(this, 'listening'), - error: this.emit.bind(this, 'error'), - upgrade: (req, socket, head) => { - this.handleUpgrade(req, socket, head, emitConnection); - } - }); - } - - if (options.perMessageDeflate === true) options.perMessageDeflate = {}; - if (options.clientTracking) { - this.clients = new Set(); - this._shouldEmitClose = false; - } - - this.options = options; - this._state = RUNNING; - } - - /** - * Returns the bound address, the address family name, and port of the server - * as reported by the operating system if listening on an IP socket. - * If the server is listening on a pipe or UNIX domain socket, the name is - * returned as a string. 
- * - * @return {(Object|String|null)} The address of the server - * @public - */ - address() { - if (this.options.noServer) { - throw new Error('The server is operating in "noServer" mode'); - } - - if (!this._server) return null; - return this._server.address(); - } - - /** - * Stop the server from accepting new connections and emit the `'close'` event - * when all existing connections are closed. - * - * @param {Function} [cb] A one-time listener for the `'close'` event - * @public - */ - close(cb) { - if (this._state === CLOSED) { - if (cb) { - this.once('close', () => { - cb(new Error('The server is not running')); - }); - } - - process.nextTick(emitClose, this); - return; - } - - if (cb) this.once('close', cb); - - if (this._state === CLOSING) return; - this._state = CLOSING; - - if (this.options.noServer || this.options.server) { - if (this._server) { - this._removeListeners(); - this._removeListeners = this._server = null; - } - - if (this.clients) { - if (!this.clients.size) { - process.nextTick(emitClose, this); - } else { - this._shouldEmitClose = true; - } - } else { - process.nextTick(emitClose, this); - } - } else { - const server = this._server; - - this._removeListeners(); - this._removeListeners = this._server = null; - - // - // The HTTP/S server was created internally. Close it, and rely on its - // `'close'` event. - // - server.close(() => { - emitClose(this); - }); - } - } - - /** - * See if a given request should be handled by this server instance. - * - * @param {http.IncomingMessage} req Request object to inspect - * @return {Boolean} `true` if the request is valid, else `false` - * @public - */ - shouldHandle(req) { - if (this.options.path) { - const index = req.url.indexOf('?'); - const pathname = index !== -1 ? req.url.slice(0, index) : req.url; - - if (pathname !== this.options.path) return false; - } - - return true; - } - - /** - * Handle a HTTP Upgrade request. 
- * - * @param {http.IncomingMessage} req The request object - * @param {(net.Socket|tls.Socket)} socket The network socket between the - * server and client - * @param {Buffer} head The first packet of the upgraded stream - * @param {Function} cb Callback - * @public - */ - handleUpgrade(req, socket, head, cb) { - socket.on('error', socketOnError); - - const key = req.headers['sec-websocket-key']; - const version = +req.headers['sec-websocket-version']; - - if (req.method !== 'GET') { - const message = 'Invalid HTTP method'; - abortHandshakeOrEmitwsClientError(this, req, socket, 405, message); - return; - } - - if (req.headers.upgrade.toLowerCase() !== 'websocket') { - const message = 'Invalid Upgrade header'; - abortHandshakeOrEmitwsClientError(this, req, socket, 400, message); - return; - } - - if (!key || !keyRegex.test(key)) { - const message = 'Missing or invalid Sec-WebSocket-Key header'; - abortHandshakeOrEmitwsClientError(this, req, socket, 400, message); - return; - } - - if (version !== 8 && version !== 13) { - const message = 'Missing or invalid Sec-WebSocket-Version header'; - abortHandshakeOrEmitwsClientError(this, req, socket, 400, message); - return; - } - - if (!this.shouldHandle(req)) { - abortHandshake(socket, 400); - return; - } - - const secWebSocketProtocol = req.headers['sec-websocket-protocol']; - let protocols = new Set(); - - if (secWebSocketProtocol !== undefined) { - try { - protocols = subprotocol.parse(secWebSocketProtocol); - } catch (err) { - const message = 'Invalid Sec-WebSocket-Protocol header'; - abortHandshakeOrEmitwsClientError(this, req, socket, 400, message); - return; - } - } - - const secWebSocketExtensions = req.headers['sec-websocket-extensions']; - const extensions = {}; - - if ( - this.options.perMessageDeflate && - secWebSocketExtensions !== undefined - ) { - const perMessageDeflate = new PerMessageDeflate( - this.options.perMessageDeflate, - true, - this.options.maxPayload - ); - - try { - const offers = extension.parse(secWebSocketExtensions); - - if (offers[PerMessageDeflate.extensionName]) { - perMessageDeflate.accept(offers[PerMessageDeflate.extensionName]); - extensions[PerMessageDeflate.extensionName] = perMessageDeflate; - } - } catch (err) { - const message = - 'Invalid or unacceptable Sec-WebSocket-Extensions header'; - abortHandshakeOrEmitwsClientError(this, req, socket, 400, message); - return; - } - } - - // - // Optionally call external client verification handler. - // - if (this.options.verifyClient) { - const info = { - origin: - req.headers[`${version === 8 ? 'sec-websocket-origin' : 'origin'}`], - secure: !!(req.socket.authorized || req.socket.encrypted), - req - }; - - if (this.options.verifyClient.length === 2) { - this.options.verifyClient(info, (verified, code, message, headers) => { - if (!verified) { - return abortHandshake(socket, code || 401, message, headers); - } - - this.completeUpgrade( - extensions, - key, - protocols, - req, - socket, - head, - cb - ); - }); - return; - } - - if (!this.options.verifyClient(info)) return abortHandshake(socket, 401); - } - - this.completeUpgrade(extensions, key, protocols, req, socket, head, cb); - } - - /** - * Upgrade the connection to WebSocket. 
- * - * @param {Object} extensions The accepted extensions - * @param {String} key The value of the `Sec-WebSocket-Key` header - * @param {Set} protocols The subprotocols - * @param {http.IncomingMessage} req The request object - * @param {(net.Socket|tls.Socket)} socket The network socket between the - * server and client - * @param {Buffer} head The first packet of the upgraded stream - * @param {Function} cb Callback - * @throws {Error} If called more than once with the same socket - * @private - */ - completeUpgrade(extensions, key, protocols, req, socket, head, cb) { - // - // Destroy the socket if the client has already sent a FIN packet. - // - if (!socket.readable || !socket.writable) return socket.destroy(); - - if (socket[kWebSocket]) { - throw new Error( - 'server.handleUpgrade() was called more than once with the same ' + - 'socket, possibly due to a misconfiguration' - ); - } - - if (this._state > RUNNING) return abortHandshake(socket, 503); - - const digest = createHash('sha1') - .update(key + GUID) - .digest('base64'); - - const headers = [ - 'HTTP/1.1 101 Switching Protocols', - 'Upgrade: websocket', - 'Connection: Upgrade', - `Sec-WebSocket-Accept: ${digest}` - ]; - - const ws = new this.options.WebSocket(null); - - if (protocols.size) { - // - // Optionally call external protocol selection handler. - // - const protocol = this.options.handleProtocols - ? this.options.handleProtocols(protocols, req) - : protocols.values().next().value; - - if (protocol) { - headers.push(`Sec-WebSocket-Protocol: ${protocol}`); - ws._protocol = protocol; - } - } - - if (extensions[PerMessageDeflate.extensionName]) { - const params = extensions[PerMessageDeflate.extensionName].params; - const value = extension.format({ - [PerMessageDeflate.extensionName]: [params] - }); - headers.push(`Sec-WebSocket-Extensions: ${value}`); - ws._extensions = extensions; - } - - // - // Allow external modification/inspection of handshake headers. - // - this.emit('headers', headers, req); - - socket.write(headers.concat('\r\n').join('\r\n')); - socket.removeListener('error', socketOnError); - - ws.setSocket(socket, head, { - maxPayload: this.options.maxPayload, - skipUTF8Validation: this.options.skipUTF8Validation - }); - - if (this.clients) { - this.clients.add(ws); - ws.on('close', () => { - this.clients.delete(ws); - - if (this._shouldEmitClose && !this.clients.size) { - process.nextTick(emitClose, this); - } - }); - } - - cb(ws, req); - } -} - -module.exports = WebSocketServer; - -/** - * Add event listeners on an `EventEmitter` using a map of - * pairs. - * - * @param {EventEmitter} server The event emitter - * @param {Object.} map The listeners to add - * @return {Function} A function that will remove the added listeners when - * called - * @private - */ -function addListeners(server, map) { - for (const event of Object.keys(map)) server.on(event, map[event]); - - return function removeListeners() { - for (const event of Object.keys(map)) { - server.removeListener(event, map[event]); - } - }; -} - -/** - * Emit a `'close'` event on an `EventEmitter`. - * - * @param {EventEmitter} server The event emitter - * @private - */ -function emitClose(server) { - server._state = CLOSED; - server.emit('close'); -} - -/** - * Handle socket errors. - * - * @private - */ -function socketOnError() { - this.destroy(); -} - -/** - * Close the connection when preconditions are not fulfilled. 
- * - * @param {(net.Socket|tls.Socket)} socket The socket of the upgrade request - * @param {Number} code The HTTP response status code - * @param {String} [message] The HTTP response body - * @param {Object} [headers] Additional HTTP response headers - * @private - */ -function abortHandshake(socket, code, message, headers) { - // - // The socket is writable unless the user destroyed or ended it before calling - // `server.handleUpgrade()` or in the `verifyClient` function, which is a user - // error. Handling this does not make much sense as the worst that can happen - // is that some of the data written by the user might be discarded due to the - // call to `socket.end()` below, which triggers an `'error'` event that in - // turn causes the socket to be destroyed. - // - message = message || http.STATUS_CODES[code]; - headers = { - Connection: 'close', - 'Content-Type': 'text/html', - 'Content-Length': Buffer.byteLength(message), - ...headers - }; - - socket.once('finish', socket.destroy); - - socket.end( - `HTTP/1.1 ${code} ${http.STATUS_CODES[code]}\r\n` + - Object.keys(headers) - .map((h) => `${h}: ${headers[h]}`) - .join('\r\n') + - '\r\n\r\n' + - message - ); -} - -/** - * Emit a `'wsClientError'` event on a `WebSocketServer` if there is at least - * one listener for it, otherwise call `abortHandshake()`. - * - * @param {WebSocketServer} server The WebSocket server - * @param {http.IncomingMessage} req The request object - * @param {(net.Socket|tls.Socket)} socket The socket of the upgrade request - * @param {Number} code The HTTP response status code - * @param {String} message The HTTP response body - * @private - */ -function abortHandshakeOrEmitwsClientError(server, req, socket, code, message) { - if (server.listenerCount('wsClientError')) { - const err = new Error(message); - Error.captureStackTrace(err, abortHandshakeOrEmitwsClientError); - - server.emit('wsClientError', err, socket, req); - } else { - abortHandshake(socket, code, message); - } -} diff --git a/spaces/fffiloni/lama-video-watermark-remover/models/ade20k/segm_lib/nn/modules/__init__.py b/spaces/fffiloni/lama-video-watermark-remover/models/ade20k/segm_lib/nn/modules/__init__.py deleted file mode 100644 index bc8709d92c610b36e0bcbd7da20c1eb41dc8cfcf..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/lama-video-watermark-remover/models/ade20k/segm_lib/nn/modules/__init__.py +++ /dev/null @@ -1,12 +0,0 @@ -# -*- coding: utf-8 -*- -# File : __init__.py -# Author : Jiayuan Mao -# Email : maojiayuan@gmail.com -# Date : 27/01/2018 -# -# This file is part of Synchronized-BatchNorm-PyTorch. -# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch -# Distributed under MIT License. 
- -from .batchnorm import SynchronizedBatchNorm1d, SynchronizedBatchNorm2d, SynchronizedBatchNorm3d -from .replicate import DataParallelWithCallback, patch_replication_callback diff --git a/spaces/fffiloni/langchain-chat-with-pdf/README.md b/spaces/fffiloni/langchain-chat-with-pdf/README.md deleted file mode 100644 index e56b64c48a39334e7503170435197d25138db1c7..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/langchain-chat-with-pdf/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Chat with PDF -emoji: 📄🤖 -colorFrom: purple -colorTo: pink -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/fgenie/scamtext_PAL_self_consistency/funcs/f_13.py b/spaces/fgenie/scamtext_PAL_self_consistency/funcs/f_13.py deleted file mode 100644 index 80bfb681fe57b3bf988e4e82a3e5c72ac9ea5370..0000000000000000000000000000000000000000 --- a/spaces/fgenie/scamtext_PAL_self_consistency/funcs/f_13.py +++ /dev/null @@ -1,42 +0,0 @@ -def is_spam(message): - spam_indicators = [ - '조아팟', - '무료수신거부', - '루멘스', - '문의', - '추천', - '공개', - '상한가', - '미리확인', - 'https://', - 'http://', - '내일 발표', - '엠바고', - '상장', - '이벤트', - '상품권', - '파트너', - '쿠폰', - '할인', - '프로모션', - '프리미엄', - '기회', - '출시', - '방송', - '매스컴', - '뉴스', - '사전등록', - '마감', - ] - - message = message.lower() - count = 0 - - for indicator in spam_indicators: - if indicator.lower() in message: - count += 1 - - if count >= 2: - return True - else: - return False \ No newline at end of file diff --git a/spaces/fiyen/YangyangChatGPT/chatgpt - macOS.command b/spaces/fiyen/YangyangChatGPT/chatgpt - macOS.command deleted file mode 100644 index fa015edca9e6916f24394813ce8ba77d2072e296..0000000000000000000000000000000000000000 --- a/spaces/fiyen/YangyangChatGPT/chatgpt - macOS.command +++ /dev/null @@ -1,7 +0,0 @@ -#!/bin/bash -echo Opening ChuanhuChatGPT... -cd "$(dirname "${BASH_SOURCE[0]}")" -nohup python3 ChuanhuChatbot.py >/dev/null 2>&1 & -sleep 5 -open http://127.0.0.1:7860 -echo Finished opening ChuanhuChatGPT (http://127.0.0.1:7860/). If you kill ChuanhuChatbot, Use "pkill -f 'ChuanhuChatbot'" command in terminal. 
\ No newline at end of file diff --git a/spaces/fornaxai/RNet/README.md b/spaces/fornaxai/RNet/README.md deleted file mode 100644 index cfd6a99d5f4a8ab136420ec0f0b8591fd9bb1fdf..0000000000000000000000000000000000000000 --- a/spaces/fornaxai/RNet/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: RNet -emoji: 📚 -colorFrom: yellow -colorTo: blue -sdk: static -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/frncscp/bullerengue/musika/train.py b/spaces/frncscp/bullerengue/musika/train.py deleted file mode 100644 index 067f82e1e3fdc73dcdc3955b8da7d708e55fcd9b..0000000000000000000000000000000000000000 --- a/spaces/frncscp/bullerengue/musika/train.py +++ /dev/null @@ -1,243 +0,0 @@ -import tensorflow as tf -import tensorboard -import numpy as np -from tqdm import tqdm -import time -import datetime -import os -import subprocess -from utils import Utils_functions -from models import Models_functions -from losses import * - - -class Train_functions: - def __init__(self, args): - - self.args = args - self.U = Utils_functions(args) - self.M = Models_functions(args) - - def gradient_penalty(self, x, net): - x_hat = x - with tf.GradientTape() as t: - t.watch(x_hat) - d_hat, _ = net(x_hat, training=True) - gradients = t.gradient(d_hat, x_hat) - ddx = tf.sqrt(1e-6 + tf.reduce_sum(gradients**2, axis=[1, 2, 3])) - d_regularizer = tf.reduce_mean((ddx - 1.0) ** 2) - return d_regularizer - - def train_all(self, a, ema, g_train=True, disc_train=True, models_ls=None): - - critic, gen, enc, dec, enc2, dec2, gen_ema, [opt_dec, opt_disc], switch = models_ls - - a = tf.expand_dims(a, -3) - a = self.U.rand_channel_swap(a) - - noiseg = tf.random.normal([self.args.bs, self.args.coorddepth], dtype=tf.float32) - - noisel = tf.concat([tf.random.normal([self.args.bs, self.args.coorddepth], dtype=tf.float32), noiseg], -1) - noisec = tf.concat([tf.random.normal([self.args.bs, self.args.coorddepth], dtype=tf.float32), noiseg], -1) - noiser = tf.concat([tf.random.normal([self.args.bs, self.args.coorddepth], dtype=tf.float32), noiseg], -1) - - rl = tf.linspace(noisel, noisec, self.args.coordlen + 1, axis=-2)[:, :-1, :] - rr = tf.linspace(noisec, noiser, self.args.coordlen + 1, axis=-2) - - noisetot = tf.concat([rl, rr], -2) - - noisetot = self.U.center_coordinate(noisetot) - noise = self.U.crop_coordinate(noisetot) - - with tf.GradientTape() as tape_gen, tf.GradientTape() as tape_disc, tf.GradientTape() as tape_gp: - if not disc_train: - tape_disc.stop_recording() - if not g_train: - tape_gen.stop_recording() - - tape_gp.watch(a) - - ab = gen(noise, training=True) - - loss_dtr = 0.0 - loss_dtf = 0.0 - loss_gt = 0.0 - loss_did = 0.0 - loss_gp = 0.0 - if disc_train or g_train: - - ca = critic(a, training=True) - cab = critic(ab, training=True) - - switch.assign(self.U.update_switch(switch, ca, cab)) - - grad_gp = tape_gp.gradient(tf.reduce_sum(ca), [a])[0] - loss_gp = tf.reduce_mean(tf.reduce_sum(tf.reshape(grad_gp**2, [tf.shape(grad_gp)[0], -1]), -1)) - - if disc_train: - - loss_dtr = d_loss_r(ca) - loss_dtf = d_loss_f(cab) - - loss_dt = (loss_dtr + loss_dtf) / 2.0 - - loss_d = loss_dt + self.args.gp_max_weight * (-switch) * loss_gp - - if self.args.mixed_precision: - loss_d = opt_disc.get_scaled_loss(loss_d) - - if g_train: - - loss_gt = g_loss_f(cab) - loss_gen = loss_gt - - if self.args.mixed_precision: - loss_gen = opt_dec.get_scaled_loss(loss_gen) - - if disc_train: - grad_disc = tape_disc.gradient(loss_d, 
critic.trainable_weights) - if self.args.mixed_precision: - grad_disc = opt_disc.get_unscaled_gradients(grad_disc) - opt_disc.apply_gradients(zip(grad_disc, critic.trainable_weights)) - - if g_train: - grad_dec = tape_gen.gradient(loss_gen, gen.trainable_variables) - if self.args.mixed_precision: - grad_dec = opt_dec.get_unscaled_gradients(grad_dec) - opt_dec.apply_gradients(zip(grad_dec, gen.trainable_variables)) - - ema.apply(gen.trainable_variables) - - return loss_dtr, loss_dtf, loss_gp, loss_gt - - # @tf.function(jit_compile=True) - # def train_tot(self, a, ema, models_ls=None): - # return self.train_all(a, ema, g_train=True, disc_train=True, models_ls=models_ls) - - def update_lr(self, lr, opts=None): - opt_dec, opt_disc = opts - opt_dec.learning_rate = lr - opt_disc.learning_rate = lr * 1.0 - - def train(self, ds, models_ls=None): - - @tf.function(jit_compile=self.args.xla) - def train_tot(a, ema, models_ls=None): - return self.train_all(a, ema, g_train=True, disc_train=True, models_ls=models_ls) - - current_time = datetime.datetime.now().strftime("%Y%m%d-%H%M%S") - train_log_dir = ( - f"{self.args.log_path}/MUSIKA_latlen_{self.args.latlen}_latdepth_{self.args.latdepth}_sr_{self.args.sr}/" - + current_time - + "/train" - ) - train_summary_writer = tf.summary.create_file_writer(train_log_dir) - - exp_path = f"{self.args.save_path}/MUSIKA_latlen_{self.args.latlen}_latdepth_{self.args.latdepth}_sr_{self.args.sr}_time_{current_time}" - os.makedirs(exp_path, exist_ok=True) - - print("--------------------------------") - print("--------------------------------") - print("--------------------------------") - print("--------------------------------") - print("--------------------------------") - - _ = subprocess.Popen( - [ - "tensorboard", - "--logdir", - f"{self.args.log_path}/MUSIKA_latlen_{self.args.latlen}_latdepth_{self.args.latdepth}_sr_{self.args.sr}", - "--port", - "6006", - ] - ) - print("CLICK ON LINK BELOW TO OPEN TENSORBOARD INTERFACE") - print("http://localhost:6006/") - print("--------------------------------") - print("--------------------------------") - print("--------------------------------") - print("--------------------------------") - print("--------------------------------") - - ema = tf.train.ExponentialMovingAverage(decay=0.999) - critic, gen, enc, dec, enc2, dec2, gen_ema, [opt_dec, opt_disc], switch = models_ls - ema.apply(gen_ema.trainable_variables) - - self.update_lr(self.args.lr, [opt_dec, opt_disc]) - c = 0 - g = 0 - m = 0 - idloss = 0.0 - - print("Preparing for Training (this can take one or two minutes)...") - - for epoch in range(self.args.epochs): - bef = time.time() - bef_loop = time.time() - - dtr_list = [] - dtf_list = [] - did_list = [] - gt_list = [] - id_list = [] - - pbar = tqdm( - ds, - desc=f"Epoch {epoch}/{self.args.epochs}", - position=0, - leave=True, - total=self.args.totsamples // self.args.bs, - ) - - for batchi, (wv) in enumerate(pbar): - a = wv - - dloss_tr, dloss_tf, dloss_id, gloss_t = train_tot(a, ema, models_ls=models_ls) - - with train_summary_writer.as_default(): - tf.summary.scalar("disc_loss_r", dloss_tr, step=m) - tf.summary.scalar("disc_loss_f", dloss_tf, step=m) - tf.summary.scalar("gen_loss", gloss_t, step=m) - tf.summary.scalar("gradient_penalty", dloss_id, step=m) - tf.summary.scalar("gp_weight", -switch.value() * self.args.gp_max_weight, step=m) - tf.summary.scalar("lr", self.args.lr, step=m) - - dtr_list.append(dloss_tr) - dtf_list.append(dloss_tf) - did_list.append(dloss_id) - gt_list.append(gloss_t) - - c += 1 - g += 1 - 
m += 1 - - if batchi % 20 == 0: - - pbar.set_postfix( - { - "DR": np.mean(dtr_list[-g:], axis=0), - "DF": np.mean(dtf_list[-g:], axis=0), - "G": np.mean(gt_list[-g:], axis=0), - "GP": np.mean(did_list[-g:], axis=0), - "LR": self.args.lr, - "TIME": (time.time() - bef_loop) / 20, - } - ) - bef_loop = time.time() - nbatch = batchi - - for var, var_ema in zip(gen.trainable_variables, gen_ema.trainable_variables): - var_ema.assign(ema.average(var)) - - self.U.save_end( - epoch, - np.mean(gt_list[-self.args.save_every * c :], axis=0), - np.mean(dtr_list[-self.args.save_every * c :], axis=0), - np.mean(dtf_list[-self.args.save_every * c :], axis=0), - n_save=self.args.save_every, - models_ls=models_ls, - save_path=exp_path, - ) - - c = 0 - g = 0 diff --git a/spaces/fun-research/FC-CLIP/fcclip/__init__.py b/spaces/fun-research/FC-CLIP/fcclip/__init__.py deleted file mode 100644 index b721a8c908f58fa57cc9d5e06e99619fe5ad4901..0000000000000000000000000000000000000000 --- a/spaces/fun-research/FC-CLIP/fcclip/__init__.py +++ /dev/null @@ -1,26 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from . import data # register all new datasets -from . import modeling - -# config -from .config import add_maskformer2_config, add_fcclip_config - -# dataset loading -from .data.dataset_mappers.coco_instance_new_baseline_dataset_mapper import COCOInstanceNewBaselineDatasetMapper -from .data.dataset_mappers.coco_panoptic_new_baseline_dataset_mapper import COCOPanopticNewBaselineDatasetMapper -from .data.dataset_mappers.mask_former_instance_dataset_mapper import ( - MaskFormerInstanceDatasetMapper, -) -from .data.dataset_mappers.mask_former_panoptic_dataset_mapper import ( - MaskFormerPanopticDatasetMapper, -) -from .data.dataset_mappers.mask_former_semantic_dataset_mapper import ( - MaskFormerSemanticDatasetMapper, -) - -# models -from .fcclip import FCCLIP -from .test_time_augmentation import SemanticSegmentorWithTTA - -# evaluation -from .evaluation.instance_evaluation import InstanceSegEvaluator diff --git a/spaces/g8a9/ferret/single.py b/spaces/g8a9/ferret/single.py deleted file mode 100644 index b9859dd0be3113e115fb62f21fcbc0f2fe0945c7..0000000000000000000000000000000000000000 --- a/spaces/g8a9/ferret/single.py +++ /dev/null @@ -1,133 +0,0 @@ -import streamlit as st -from transformers import AutoModelForSequenceClassification, AutoTokenizer, AutoConfig -from ferret import Benchmark -from torch.nn.functional import softmax - -DEFAULT_MODEL = "cardiffnlp/twitter-xlm-roberta-base-sentiment" -DEFAULT_QUERY = "Great movie for a great nap!" - - -@st.cache() -def get_model(model_name): - return AutoModelForSequenceClassification.from_pretrained(model_name) - - -@st.cache() -def get_config(model_name): - return AutoConfig.from_pretrained(model_name) - - -def get_tokenizer(tokenizer_name): - return AutoTokenizer.from_pretrained(tokenizer_name, use_fast=True) - - -def body(): - - st.title("Benchmark on individual texts") - - st.markdown( - """ - You are working now on the *single instance* mode -- i.e., you will work and - inspect one textual query at a time. - - Post-hoc explanation techniques disclose 🔎 the rationale behind a given prediction a model - makes while detecting a sentiment out of a text. In a sense, they let you *poke* inside the model. - - But **who watches the watchers**? Are these explanations *accurate*? Can you *trust* them? - - Let's find out! - - Let's choose your favourite mode and let *ferret* do the rest. - We will: - - 1. 
download your model - if you're impatient, here is a [cute video](https://www.youtube.com/watch?v=0Xks8t-SWHU) 🦜 for you;
-    2. explain using *ferret*'s built-in methods ⚙️
-    3. evaluate explanations with state-of-the-art **faithfulness metrics** 🚀
-
-    """
-    )
-
-    col1, col2 = st.columns([3, 1])
-    with col1:
-        model_name = st.text_input("HF Model", DEFAULT_MODEL)
-        config = AutoConfig.from_pretrained(model_name)
-
-    with col2:
-        class_labels = list(config.id2label.values())
-        target = st.selectbox(
-            "Target",
-            options=class_labels,
-            index=0,
-            help="Class label you want to explain.",
-        )
-
-    text = st.text_input("Text", DEFAULT_QUERY)
-    compute = st.button("Run")
-
-    if compute and model_name:
-
-        with st.spinner("Preparing the magic. Hang in there..."):
-            model = get_model(model_name)
-            tokenizer = get_tokenizer(model_name)
-            bench = Benchmark(model, tokenizer)
-
-        st.markdown("### Prediction")
-        scores = bench.score(text)
-        scores_str = ", ".join([f"{k}: {v:.2f}" for k, v in scores.items()])
-        st.text(scores_str)
-
-        with st.spinner("Computing Explanations..."):
-            explanations = bench.explain(text, target=class_labels.index(target))
-
-        st.markdown("### Explanations")
-        st.dataframe(bench.show_table(explanations))
-        st.caption("Darker red (blue) means higher (lower) contribution.")
-
-        with st.spinner("Evaluating Explanations..."):
-            evaluations = bench.evaluate_explanations(
-                explanations, target=class_labels.index(target), apply_style=False
-            )
-
-        st.markdown("### Faithfulness Metrics")
-        st.dataframe(bench.show_evaluation_table(evaluations))
-        st.caption("Darker colors mean better performance.")
-
-        st.markdown(
-            """
-            **Legend**
-
-            - **AOPC Comprehensiveness** (aopc_compr) measures *comprehensiveness*, i.e., whether the explanation captures all the tokens needed to make the prediction. Higher is better.
-
-            - **AOPC Sufficiency** (aopc_suff) measures *sufficiency*, i.e., whether the relevant tokens in the explanation are sufficient to make the prediction. Lower is better.
-
-            - **Leave-One-Out TAU Correlation** (taucorr_loo) measures the Kendall rank correlation coefficient τ between the explanation and leave-one-out importances. Closer to 1 is better.
-
-            See the paper for details.
-            """
-        )
-
-        # It is computed as the drop in the model probability if the relevant tokens of the explanations are removed. The higher the comprehensiveness, the more faithful the explanation.
-
-        # It is computed as the drop in the model probability if only the relevant tokens of the explanations are considered. The lower the sufficiency, the more faithful the explanation, since there is less change in the model prediction.
-
-        # The latter are computed by omitting individual input tokens and measuring the variation in the model prediction. The closer the τ correlation is to 1, the more faithful the explanation. 
- - st.markdown( - """ - **In code, it would be as simple as** - """ - ) - st.code( - f""" -from transformers import AutoModelForSequenceClassification, AutoTokenizer -from ferret import Benchmark - -model = AutoModelForSequenceClassification.from_pretrained("{model_name}") -tokenizer = AutoTokenizer.from_pretrained("{model_name}") - -bench = Benchmark(model, tokenizer) -explanations = bench.explain("{text}") -evaluations = bench.evaluate_explanations(explanations) - """, - language="python", - ) diff --git a/spaces/georgefen/Face-Landmark-ControlNet/ldm/models/autoencoder.py b/spaces/georgefen/Face-Landmark-ControlNet/ldm/models/autoencoder.py deleted file mode 100644 index d122549995ce2cd64092c81a58419ed4a15a02fd..0000000000000000000000000000000000000000 --- a/spaces/georgefen/Face-Landmark-ControlNet/ldm/models/autoencoder.py +++ /dev/null @@ -1,219 +0,0 @@ -import torch -import pytorch_lightning as pl -import torch.nn.functional as F -from contextlib import contextmanager - -from ldm.modules.diffusionmodules.model import Encoder, Decoder -from ldm.modules.distributions.distributions import DiagonalGaussianDistribution - -from ldm.util import instantiate_from_config -from ldm.modules.ema import LitEma - - -class AutoencoderKL(pl.LightningModule): - def __init__(self, - ddconfig, - lossconfig, - embed_dim, - ckpt_path=None, - ignore_keys=[], - image_key="image", - colorize_nlabels=None, - monitor=None, - ema_decay=None, - learn_logvar=False - ): - super().__init__() - self.learn_logvar = learn_logvar - self.image_key = image_key - self.encoder = Encoder(**ddconfig) - self.decoder = Decoder(**ddconfig) - self.loss = instantiate_from_config(lossconfig) - assert ddconfig["double_z"] - self.quant_conv = torch.nn.Conv2d(2*ddconfig["z_channels"], 2*embed_dim, 1) - self.post_quant_conv = torch.nn.Conv2d(embed_dim, ddconfig["z_channels"], 1) - self.embed_dim = embed_dim - if colorize_nlabels is not None: - assert type(colorize_nlabels)==int - self.register_buffer("colorize", torch.randn(3, colorize_nlabels, 1, 1)) - if monitor is not None: - self.monitor = monitor - - self.use_ema = ema_decay is not None - if self.use_ema: - self.ema_decay = ema_decay - assert 0. < ema_decay < 1. 
- self.model_ema = LitEma(self, decay=ema_decay) - print(f"Keeping EMAs of {len(list(self.model_ema.buffers()))}.") - - if ckpt_path is not None: - self.init_from_ckpt(ckpt_path, ignore_keys=ignore_keys) - - def init_from_ckpt(self, path, ignore_keys=list()): - sd = torch.load(path, map_location="cpu")["state_dict"] - keys = list(sd.keys()) - for k in keys: - for ik in ignore_keys: - if k.startswith(ik): - print("Deleting key {} from state_dict.".format(k)) - del sd[k] - self.load_state_dict(sd, strict=False) - print(f"Restored from {path}") - - @contextmanager - def ema_scope(self, context=None): - if self.use_ema: - self.model_ema.store(self.parameters()) - self.model_ema.copy_to(self) - if context is not None: - print(f"{context}: Switched to EMA weights") - try: - yield None - finally: - if self.use_ema: - self.model_ema.restore(self.parameters()) - if context is not None: - print(f"{context}: Restored training weights") - - def on_train_batch_end(self, *args, **kwargs): - if self.use_ema: - self.model_ema(self) - - def encode(self, x): - h = self.encoder(x) - moments = self.quant_conv(h) - posterior = DiagonalGaussianDistribution(moments) - return posterior - - def decode(self, z): - z = self.post_quant_conv(z) - dec = self.decoder(z) - return dec - - def forward(self, input, sample_posterior=True): - posterior = self.encode(input) - if sample_posterior: - z = posterior.sample() - else: - z = posterior.mode() - dec = self.decode(z) - return dec, posterior - - def get_input(self, batch, k): - x = batch[k] - if len(x.shape) == 3: - x = x[..., None] - x = x.permute(0, 3, 1, 2).to(memory_format=torch.contiguous_format).float() - return x - - def training_step(self, batch, batch_idx, optimizer_idx): - inputs = self.get_input(batch, self.image_key) - reconstructions, posterior = self(inputs) - - if optimizer_idx == 0: - # train encoder+decoder+logvar - aeloss, log_dict_ae = self.loss(inputs, reconstructions, posterior, optimizer_idx, self.global_step, - last_layer=self.get_last_layer(), split="train") - self.log("aeloss", aeloss, prog_bar=True, logger=True, on_step=True, on_epoch=True) - self.log_dict(log_dict_ae, prog_bar=False, logger=True, on_step=True, on_epoch=False) - return aeloss - - if optimizer_idx == 1: - # train the discriminator - discloss, log_dict_disc = self.loss(inputs, reconstructions, posterior, optimizer_idx, self.global_step, - last_layer=self.get_last_layer(), split="train") - - self.log("discloss", discloss, prog_bar=True, logger=True, on_step=True, on_epoch=True) - self.log_dict(log_dict_disc, prog_bar=False, logger=True, on_step=True, on_epoch=False) - return discloss - - def validation_step(self, batch, batch_idx): - log_dict = self._validation_step(batch, batch_idx) - with self.ema_scope(): - log_dict_ema = self._validation_step(batch, batch_idx, postfix="_ema") - return log_dict - - def _validation_step(self, batch, batch_idx, postfix=""): - inputs = self.get_input(batch, self.image_key) - reconstructions, posterior = self(inputs) - aeloss, log_dict_ae = self.loss(inputs, reconstructions, posterior, 0, self.global_step, - last_layer=self.get_last_layer(), split="val"+postfix) - - discloss, log_dict_disc = self.loss(inputs, reconstructions, posterior, 1, self.global_step, - last_layer=self.get_last_layer(), split="val"+postfix) - - self.log(f"val{postfix}/rec_loss", log_dict_ae[f"val{postfix}/rec_loss"]) - self.log_dict(log_dict_ae) - self.log_dict(log_dict_disc) - return self.log_dict - - def configure_optimizers(self): - lr = self.learning_rate - ae_params_list 
= list(self.encoder.parameters()) + list(self.decoder.parameters()) + list( - self.quant_conv.parameters()) + list(self.post_quant_conv.parameters()) - if self.learn_logvar: - print(f"{self.__class__.__name__}: Learning logvar") - ae_params_list.append(self.loss.logvar) - opt_ae = torch.optim.Adam(ae_params_list, - lr=lr, betas=(0.5, 0.9)) - opt_disc = torch.optim.Adam(self.loss.discriminator.parameters(), - lr=lr, betas=(0.5, 0.9)) - return [opt_ae, opt_disc], [] - - def get_last_layer(self): - return self.decoder.conv_out.weight - - @torch.no_grad() - def log_images(self, batch, only_inputs=False, log_ema=False, **kwargs): - log = dict() - x = self.get_input(batch, self.image_key) - x = x.to(self.device) - if not only_inputs: - xrec, posterior = self(x) - if x.shape[1] > 3: - # colorize with random projection - assert xrec.shape[1] > 3 - x = self.to_rgb(x) - xrec = self.to_rgb(xrec) - log["samples"] = self.decode(torch.randn_like(posterior.sample())) - log["reconstructions"] = xrec - if log_ema or self.use_ema: - with self.ema_scope(): - xrec_ema, posterior_ema = self(x) - if x.shape[1] > 3: - # colorize with random projection - assert xrec_ema.shape[1] > 3 - xrec_ema = self.to_rgb(xrec_ema) - log["samples_ema"] = self.decode(torch.randn_like(posterior_ema.sample())) - log["reconstructions_ema"] = xrec_ema - log["inputs"] = x - return log - - def to_rgb(self, x): - assert self.image_key == "segmentation" - if not hasattr(self, "colorize"): - self.register_buffer("colorize", torch.randn(3, x.shape[1], 1, 1).to(x)) - x = F.conv2d(x, weight=self.colorize) - x = 2.*(x-x.min())/(x.max()-x.min()) - 1. - return x - - -class IdentityFirstStage(torch.nn.Module): - def __init__(self, *args, vq_interface=False, **kwargs): - self.vq_interface = vq_interface - super().__init__() - - def encode(self, x, *args, **kwargs): - return x - - def decode(self, x, *args, **kwargs): - return x - - def quantize(self, x, *args, **kwargs): - if self.vq_interface: - return x, None, [None, None, None] - return x - - def forward(self, x, *args, **kwargs): - return x - diff --git a/spaces/ghoskno/ColorCanny-Controlnet/app.py b/spaces/ghoskno/ColorCanny-Controlnet/app.py deleted file mode 100644 index 9048d902909f3ed2e7e130b24e0360b74510a8c0..0000000000000000000000000000000000000000 --- a/spaces/ghoskno/ColorCanny-Controlnet/app.py +++ /dev/null @@ -1,197 +0,0 @@ -import cv2 -import gradio as gr -import numpy as np -import torch - -from diffusers import StableDiffusionControlNetPipeline, StableDiffusionLatentUpscalePipeline, ControlNetModel, AutoencoderKL -from diffusers import UniPCMultistepScheduler -from PIL import Image - -from lpw import _encode_prompt - -controlnet_ColorCanny = ControlNetModel.from_pretrained("ghoskno/Color-Canny-Controlnet-model", torch_dtype=torch.float16) - -vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16) - -pipe = StableDiffusionControlNetPipeline.from_pretrained("Lykon/DreamShaper", vae=vae, controlnet=controlnet_ColorCanny, torch_dtype=torch.float16) - -pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) -pipe.enable_model_cpu_offload() -pipe.enable_xformers_memory_efficient_attention() -pipe.enable_attention_slicing() - -# Generator seed -generator = torch.manual_seed(0) - -def HWC3(x): - assert x.dtype == np.uint8 - if x.ndim == 2: - x = x[:, :, None] - assert x.ndim == 3 - H, W, C = x.shape - assert C == 1 or C == 3 or C == 4 - if C == 3: - return x - if C == 1: - return np.concatenate([x, x, x], axis=2) - if C == 4: - 
color = x[:, :, 0:3].astype(np.float32) - alpha = x[:, :, 3:4].astype(np.float32) / 255.0 - y = color * alpha + 255.0 * (1.0 - alpha) - y = y.clip(0, 255).astype(np.uint8) - return y - -def resize_image(input_image, resolution, max_edge=False, edge_limit=False): - H, W, C = input_image.shape - - H = float(H) - W = float(W) - if max_edge: - k = float(resolution) / max(H, W) - else: - k = float(resolution) / min(H, W) - H *= k - W *= k - - H, W = int(H), int(W) - - img = cv2.resize(input_image, (W, H), interpolation=cv2.INTER_LANCZOS4 if k > 1 else cv2.INTER_AREA) - if not edge_limit: - return img - pH = int(np.round(H / 64.0)) * 64 - pW = int(np.round(W / 64.0)) * 64 - pimg = np.zeros((pH, pW, 3), dtype=img.dtype) - - oH, oW = (pH-H)//2, (pW-W)//2 - pimg[oH:oH+H, oW:oW+W] = img - return pimg - -def get_canny_filter(image, low_threshold=100, high_threshold=200): - image = cv2.Canny(image, low_threshold, high_threshold) - image = image[:, :, None] - image = np.concatenate([image, image, image], axis=2) - return image - -def get_color_filter(cond_image, mask_size=64): - H, W = cond_image.shape[:2] - cond_image = cv2.resize(cond_image, (W // mask_size, H // mask_size), interpolation=cv2.INTER_CUBIC) - color = cv2.resize(cond_image, (W, H), interpolation=cv2.INTER_NEAREST) - return color - -def get_colorcanny(image, mask_size): - canny_img = get_canny_filter(image) - - color_img = get_color_filter(image, int(mask_size)) - - color_img[np.where(canny_img > 128)] = 255 - return color_img - -def process(input_image, prompt, n_prompt, strength=1.0, color_mask_size=96, size=512, scale=6.0, ddim_steps=20): - prompt_embeds, negative_prompt_embeds = _encode_prompt(pipe, prompt, pipe.device, 1, True, n_prompt, 3) - input_image = resize_image(input_image, size, max_edge=True, edge_limit=True) - - cond_img = get_colorcanny(input_image, color_mask_size) - cond_img = Image.fromarray(cond_img) - output = pipe( - prompt_embeds=prompt_embeds, - negative_prompt_embeds=negative_prompt_embeds, - image=cond_img, - generator=generator, - num_images_per_prompt=1, - num_inference_steps=ddim_steps, - guidance_scale=scale, - controlnet_conditioning_scale=float(strength) - ) - return [output.images[0], cond_img] - - -def inpaint_process(inpaint_image, input_image, prompt, n_prompt, strength=1.0, color_mask_size=96, size=512, scale=6.0, ddim_steps=20): - if inpaint_image is None: - return process(input_image, prompt, n_prompt, strength, color_mask_size, size, scale, ddim_steps) - - prompt_embeds, negative_prompt_embeds = _encode_prompt(pipe, prompt, pipe.device, 1, True, n_prompt, 3) - input_image = resize_image(input_image, size, max_edge=True, edge_limit=True) - inpaint_image = resize_image(inpaint_image, size, max_edge=True, edge_limit=True) - - canny_img = get_canny_filter(input_image) - - color_img = get_color_filter(inpaint_image, int(color_mask_size)) - - color_img[np.where(canny_img > 128)] = 255 - cond_img = Image.fromarray(color_img) - - output = pipe( - prompt_embeds=prompt_embeds, - negative_prompt_embeds=negative_prompt_embeds, - image=cond_img, - generator=generator, - num_images_per_prompt=1, - num_inference_steps=ddim_steps, - guidance_scale=scale, - controlnet_conditioning_scale=float(strength) - ) - return [output.images[0], cond_img] - - -block = gr.Blocks().queue() - -with block: - gr.Markdown(""" - # 🧨 Color-Canny-ControlNet - - This is an extended model of ControlNet that not only utilizes the Canny edge of images but also incorporates the color features. 
-
-    We trained this model for 2 epochs on the cleaned laion-art dataset, which contains 2.6 million images, using the Canny edge and color mosaic of the images as input. The processed dataset and pretrained model can be found at [ghoskno/laion-art-en-colorcanny](https://huggingface.co/datasets/ghoskno/laion-art-en-colorcanny) and [ghoskno/Color-Canny-Controlnet-model](https://huggingface.co/ghoskno/Color-Canny-Controlnet-model).
-
-    This allows generated images to maintain the same color composition as the original images. If you are looking to control both the contours and colors of the original image while using ControlNet to generate images, then this is the best option for you! You can try out this model or test the examples provided below 🤗.
-
-    ## Update
-    Hi everyone, we have added an accelerated version of Color-Canny-ControlNet based on Nvidia Triton and operator optimization. This faster ControlNet is deployed on an Nvidia A10 machine. For a 512-pixel image, inference takes about 1.2s, roughly 40% faster than the general implementation with accelerated PyTorch 2.0.
-    We provide detailed test results below.
-
-    | Method | Information | Inference Time | Speed-up Ratio |
-    | --------------------- | --------------------- | --------------------- | -------------- |
-    | Benchmark | [Huggingface implementation](https://huggingface.co/blog/controlnet?spm=ata.21736010.0.0.422d24288Kj7zm) | 3.00s(V100)/5.00s(T4) | / |
-    | Accelerated PyTorch 2.0 | xFormers | 2.03s(A10) | 0% |
-    | | SDPA | 2.02s(A10) | 0.5% |
-    | Ours | TRT & OP optimization | 1.20s(A10) | 40.4% |
-
-    Feel free to try this [faster Color-Canny-ControlNet](http://121.40.118.209:7860/).
-    """)
-    with gr.Row():
-        with gr.Column():
-            input_image = gr.Image(source='upload', type="numpy")
-            color_image = gr.ImagePaint(type="numpy")
-            prompt = gr.Textbox(label="Prompt", value='')
-            n_prompt = gr.Textbox(label="Negative Prompt", value='')
-            with gr.Row():
-                run_button = gr.Button(label="Run")
-                run_edit_button = gr.Button(value='Run with inpaint color', label="Run with inpaint color")
-            with gr.Accordion('Advanced', open=False):
-                strength = gr.Slider(label="Control Strength", minimum=0.0, maximum=2.0, value=1.0, step=0.01)
-                color_mask_size = gr.Slider(label="Color Mask Size", minimum=32, maximum=256, value=96, step=16)
-                size = gr.Slider(label="Size", minimum=256, maximum=768, value=512, step=128)
-                scale = gr.Slider(label="Guidance Scale", minimum=0.1, maximum=30.0, value=6.0, step=0.1)
-                ddim_steps = gr.Slider(label="Steps", minimum=1, maximum=50, value=20, step=1)
-
-        with gr.Column():
-            result_gallery = gr.Gallery(label='Output', show_label=False, elem_id="gallery").style(grid=2, height='auto')
-    ips = [input_image, prompt, n_prompt, strength, color_mask_size, size, scale, ddim_steps]
-    run_button.click(fn=process, inputs=ips, outputs=[result_gallery])
-    run_edit_button.click(fn=inpaint_process, inputs=[color_image] + ips, outputs=[result_gallery])
-
-
-    gr.Examples(
-        examples=[
-            ["./asserts/1.png", "a concept art of by Makoto Shinkai, a girl is standing in the middle of the sea", "text, bad anatomy, blurry, (low quality, blurry)"],
-            ["./asserts/2.png", "a concept art with vivid ocean by Makoto Shinkai", "text, bad anatomy, blurry, (low quality, blurry)"],
-            ["./asserts/3.png", "sky city on the sea, with waves churning and wind power plants on the island", "text, bad anatomy, blurry, (low quality, blurry)"],
-        ],
-        inputs=[
-            input_image, prompt, n_prompt
-        ],
-        outputs=[result_gallery],
-        fn=process,
-
cache_examples=True, - ) -block.launch(debug = True, server_name='0.0.0.0') \ No newline at end of file diff --git a/spaces/giswqs/solara-geospatial/pages/03_mapbox.py b/spaces/giswqs/solara-geospatial/pages/03_mapbox.py deleted file mode 100644 index 7157fc92c60c81e7aeb23fd3acf78381cc14ce0c..0000000000000000000000000000000000000000 --- a/spaces/giswqs/solara-geospatial/pages/03_mapbox.py +++ /dev/null @@ -1,23 +0,0 @@ -import mapwidget.mapbox as mapwidget - -import solara - -zoom = solara.reactive(2) -center = solara.reactive((20, 0)) - - -@solara.component -def Page(): - with solara.Column(style={"min-width": "500px", "height": "500px"}): - solara.Text( - "Not fully working yet. Try resizing the window to use the full width." - ) - # solara components support reactive variables - solara.SliderInt(label="Zoom level", value=zoom, min=1, max=20) - # using 3rd party widget library require wiring up the events manually - # using zoom.value and zoom.set - mapwidget.Map.element( # type: ignore - zoom=zoom.value, center=center.value, height='600px', width="100%" - ) - solara.Text(f"Zoom: {zoom.value}") - solara.Text(f"Center: {center.value}") diff --git a/spaces/haakohu/deep_privacy2/dp2/data/transforms/stylegan2_transform.py b/spaces/haakohu/deep_privacy2/dp2/data/transforms/stylegan2_transform.py deleted file mode 100644 index 49a143cddf9673d079b87ac7d725c433713e54c5..0000000000000000000000000000000000000000 --- a/spaces/haakohu/deep_privacy2/dp2/data/transforms/stylegan2_transform.py +++ /dev/null @@ -1,394 +0,0 @@ -import numpy as np -import scipy.signal -import torch -try: - from sg3_torch_utils import misc - from sg3_torch_utils.ops import upfirdn2d - from sg3_torch_utils.ops import grid_sample_gradfix - from sg3_torch_utils.ops import conv2d_gradfix -except: - pass -#---------------------------------------------------------------------------- -# Coefficients of various wavelet decomposition low-pass filters. 
- -wavelets = { - 'haar': [0.7071067811865476, 0.7071067811865476], - 'db1': [0.7071067811865476, 0.7071067811865476], - 'db2': [-0.12940952255092145, 0.22414386804185735, 0.836516303737469, 0.48296291314469025], - 'db3': [0.035226291882100656, -0.08544127388224149, -0.13501102001039084, 0.4598775021193313, 0.8068915093133388, 0.3326705529509569], - 'db4': [-0.010597401784997278, 0.032883011666982945, 0.030841381835986965, -0.18703481171888114, -0.02798376941698385, 0.6308807679295904, 0.7148465705525415, 0.23037781330885523], - 'db5': [0.003335725285001549, -0.012580751999015526, -0.006241490213011705, 0.07757149384006515, -0.03224486958502952, -0.24229488706619015, 0.13842814590110342, 0.7243085284385744, 0.6038292697974729, 0.160102397974125], - 'db6': [-0.00107730108499558, 0.004777257511010651, 0.0005538422009938016, -0.031582039318031156, 0.02752286553001629, 0.09750160558707936, -0.12976686756709563, -0.22626469396516913, 0.3152503517092432, 0.7511339080215775, 0.4946238903983854, 0.11154074335008017], - 'db7': [0.0003537138000010399, -0.0018016407039998328, 0.00042957797300470274, 0.012550998556013784, -0.01657454163101562, -0.03802993693503463, 0.0806126091510659, 0.07130921926705004, -0.22403618499416572, -0.14390600392910627, 0.4697822874053586, 0.7291320908465551, 0.39653931948230575, 0.07785205408506236], - 'db8': [-0.00011747678400228192, 0.0006754494059985568, -0.0003917403729959771, -0.00487035299301066, 0.008746094047015655, 0.013981027917015516, -0.04408825393106472, -0.01736930100202211, 0.128747426620186, 0.00047248457399797254, -0.2840155429624281, -0.015829105256023893, 0.5853546836548691, 0.6756307362980128, 0.3128715909144659, 0.05441584224308161], - 'sym2': [-0.12940952255092145, 0.22414386804185735, 0.836516303737469, 0.48296291314469025], - 'sym3': [0.035226291882100656, -0.08544127388224149, -0.13501102001039084, 0.4598775021193313, 0.8068915093133388, 0.3326705529509569], - 'sym4': [-0.07576571478927333, -0.02963552764599851, 0.49761866763201545, 0.8037387518059161, 0.29785779560527736, -0.09921954357684722, -0.012603967262037833, 0.0322231006040427], - 'sym5': [0.027333068345077982, 0.029519490925774643, -0.039134249302383094, 0.1993975339773936, 0.7234076904024206, 0.6339789634582119, 0.01660210576452232, -0.17532808990845047, -0.021101834024758855, 0.019538882735286728], - 'sym6': [0.015404109327027373, 0.0034907120842174702, -0.11799011114819057, -0.048311742585633, 0.4910559419267466, 0.787641141030194, 0.3379294217276218, -0.07263752278646252, -0.021060292512300564, 0.04472490177066578, 0.0017677118642428036, -0.007800708325034148], - 'sym7': [0.002681814568257878, -0.0010473848886829163, -0.01263630340325193, 0.03051551316596357, 0.0678926935013727, -0.049552834937127255, 0.017441255086855827, 0.5361019170917628, 0.767764317003164, 0.2886296317515146, -0.14004724044296152, -0.10780823770381774, 0.004010244871533663, 0.010268176708511255], - 'sym8': [-0.0033824159510061256, -0.0005421323317911481, 0.03169508781149298, 0.007607487324917605, -0.1432942383508097, -0.061273359067658524, 0.4813596512583722, 0.7771857517005235, 0.3644418948353314, -0.05194583810770904, -0.027219029917056003, 0.049137179673607506, 0.003808752013890615, -0.01495225833704823, -0.0003029205147213668, 0.0018899503327594609], -} - -#---------------------------------------------------------------------------- -# Helpers for constructing transformation matrices. 
- - -def matrix(*rows, device=None): - assert all(len(row) == len(rows[0]) for row in rows) - elems = [x for row in rows for x in row] - ref = [x for x in elems if isinstance(x, torch.Tensor)] - if len(ref) == 0: - return misc.constant(np.asarray(rows), device=device) - assert device is None or device == ref[0].device - elems = [x if isinstance(x, torch.Tensor) else misc.constant(x, shape=ref[0].shape, device=ref[0].device) for x in elems] - return torch.stack(elems, dim=-1).reshape(ref[0].shape + (len(rows), -1)) - - -def translate2d(tx, ty, **kwargs): - return matrix( - [1, 0, tx], - [0, 1, ty], - [0, 0, 1], - **kwargs) - - -def translate3d(tx, ty, tz, **kwargs): - return matrix( - [1, 0, 0, tx], - [0, 1, 0, ty], - [0, 0, 1, tz], - [0, 0, 0, 1], - **kwargs) - - -def scale2d(sx, sy, **kwargs): - return matrix( - [sx, 0, 0], - [0, sy, 0], - [0, 0, 1], - **kwargs) - - -def scale3d(sx, sy, sz, **kwargs): - return matrix( - [sx, 0, 0, 0], - [0, sy, 0, 0], - [0, 0, sz, 0], - [0, 0, 0, 1], - **kwargs) - - -def rotate2d(theta, **kwargs): - return matrix( - [torch.cos(theta), torch.sin(-theta), 0], - [torch.sin(theta), torch.cos(theta), 0], - [0, 0, 1], - **kwargs) - - -def rotate3d(v, theta, **kwargs): - vx = v[..., 0]; vy = v[..., 1]; vz = v[..., 2] - s = torch.sin(theta); c = torch.cos(theta); cc = 1 - c - return matrix( - [vx*vx*cc+c, vx*vy*cc-vz*s, vx*vz*cc+vy*s, 0], - [vy*vx*cc+vz*s, vy*vy*cc+c, vy*vz*cc-vx*s, 0], - [vz*vx*cc-vy*s, vz*vy*cc+vx*s, vz*vz*cc+c, 0], - [0, 0, 0, 1], - **kwargs) - - -def translate2d_inv(tx, ty, **kwargs): - return translate2d(-tx, -ty, **kwargs) - - -def scale2d_inv(sx, sy, **kwargs): - return scale2d(1 / sx, 1 / sy, **kwargs) - - -def rotate2d_inv(theta, **kwargs): - return rotate2d(-theta, **kwargs) - - -class StyleGANAugmentPipe(torch.nn.Module): - def __init__(self, - rotate90=0, xint=0, xint_max=0.125, - scale=0, rotate=0, aniso=0, xfrac=0, scale_std=0.2, rotate_max=1, aniso_std=0.2, xfrac_std=0.125, - brightness=0, contrast=0, lumaflip=0, hue=0, saturation=0, brightness_std=0.2, contrast_std=0.5, - hue_max=1, saturation_std=1, - imgfilter=0, imgfilter_bands=[1,1,1,1], imgfilter_std=1, - ): - super().__init__() - self.register_buffer('p', torch.ones([])) # Overall multiplier for augmentation probability. - - # Pixel blitting. - self.rotate90 = float(rotate90) # Probability multiplier for 90 degree rotations. - self.xint = float(xint) # Probability multiplier for integer translation. - self.xint_max = float(xint_max) # Range of integer translation, relative to image dimensions. - - # General geometric transformations. - self.scale = float(scale) # Probability multiplier for isotropic scaling. - self.rotate = float(rotate) # Probability multiplier for arbitrary rotation. - self.aniso = float(aniso) # Probability multiplier for anisotropic scaling. - self.xfrac = float(xfrac) # Probability multiplier for fractional translation. - self.scale_std = float(scale_std) # Log2 standard deviation of isotropic scaling. - self.rotate_max = float(rotate_max) # Range of arbitrary rotation, 1 = full circle. - self.aniso_std = float(aniso_std) # Log2 standard deviation of anisotropic scaling. - self.xfrac_std = float(xfrac_std) # Standard deviation of frational translation, relative to image dimensions. - - # Color transformations. - self.brightness = float(brightness) # Probability multiplier for brightness. - self.contrast = float(contrast) # Probability multiplier for contrast. - self.lumaflip = float(lumaflip) # Probability multiplier for luma flip. 
- self.hue = float(hue) # Probability multiplier for hue rotation. - self.saturation = float(saturation) # Probability multiplier for saturation. - self.brightness_std = float(brightness_std) # Standard deviation of brightness. - self.contrast_std = float(contrast_std) # Log2 standard deviation of contrast. - self.hue_max = float(hue_max) # Range of hue rotation, 1 = full circle. - self.saturation_std = float(saturation_std) # Log2 standard deviation of saturation. - - # Image-space filtering. - self.imgfilter = float(imgfilter) # Probability multiplier for image-space filtering. - self.imgfilter_bands = list(imgfilter_bands) # Probability multipliers for individual frequency bands. - self.imgfilter_std = float(imgfilter_std) # Log2 standard deviation of image-space filter amplification. - - # Setup orthogonal lowpass filter for geometric augmentations. - self.register_buffer('Hz_geom', upfirdn2d.setup_filter(wavelets['sym6'])) - - # Construct filter bank for image-space filtering. - Hz_lo = np.asarray(wavelets['sym2']) # H(z) - Hz_hi = Hz_lo * ((-1) ** np.arange(Hz_lo.size)) # H(-z) - Hz_lo2 = np.convolve(Hz_lo, Hz_lo[::-1]) / 2 # H(z) * H(z^-1) / 2 - Hz_hi2 = np.convolve(Hz_hi, Hz_hi[::-1]) / 2 # H(-z) * H(-z^-1) / 2 - Hz_fbank = np.eye(4, 1) # Bandpass(H(z), b_i) - for i in range(1, Hz_fbank.shape[0]): - Hz_fbank = np.dstack([Hz_fbank, np.zeros_like(Hz_fbank)]).reshape(Hz_fbank.shape[0], -1)[:, :-1] - Hz_fbank = scipy.signal.convolve(Hz_fbank, [Hz_lo2]) - Hz_fbank[i, (Hz_fbank.shape[1] - Hz_hi2.size) // 2 : (Hz_fbank.shape[1] + Hz_hi2.size) // 2] += Hz_hi2 - self.register_buffer('Hz_fbank', torch.as_tensor(Hz_fbank, dtype=torch.float32)) - - def forward(self, batch, debug_percentile=None): - images = batch["img"] - batch["vertices"] = batch["vertices"].float() - assert isinstance(images, torch.Tensor) and images.ndim == 4 - batch_size, num_channels, height, width = images.shape - device = images.device - self.Hz_fbank = self.Hz_fbank.to(device) - self.Hz_geom = self.Hz_geom.to(device) - if debug_percentile is not None: - debug_percentile = torch.as_tensor(debug_percentile, dtype=torch.float32, device=device) - - # ------------------------------------- - # Select parameters for pixel blitting. - # ------------------------------------- - - # Initialize inverse homogeneous 2D transform: G_inv @ pixel_out ==> pixel_in - I_3 = torch.eye(3, device=device) - G_inv = I_3 - - # Apply integer translation with probability (xint * strength). - if self.xint > 0: - t = (torch.rand([batch_size, 2], device=device) * 2 - 1) * self.xint_max - t = torch.where(torch.rand([batch_size, 1], device=device) < self.xint * self.p, t, torch.zeros_like(t)) - if debug_percentile is not None: - t = torch.full_like(t, (debug_percentile * 2 - 1) * self.xint_max) - G_inv = G_inv @ translate2d_inv(torch.round(t[:,0] * width), torch.round(t[:,1] * height)) - - # -------------------------------------------------------- - # Select parameters for general geometric transformations. - # -------------------------------------------------------- - - # Apply isotropic scaling with probability (scale * strength). - if self.scale > 0: - s = torch.exp2(torch.randn([batch_size], device=device) * self.scale_std) - s = torch.where(torch.rand([batch_size], device=device) < self.scale * self.p, s, torch.ones_like(s)) - if debug_percentile is not None: - s = torch.full_like(s, torch.exp2(torch.erfinv(debug_percentile * 2 - 1) * self.scale_std)) - G_inv = G_inv @ scale2d_inv(s, s) - - # Apply pre-rotation with probability p_rot. 
- p_rot = 1 - torch.sqrt((1 - self.rotate * self.p).clamp(0, 1)) # P(pre OR post) = p - if self.rotate > 0: - theta = (torch.rand([batch_size], device=device) * 2 - 1) * np.pi * self.rotate_max - theta = torch.where(torch.rand([batch_size], device=device) < p_rot, theta, torch.zeros_like(theta)) - if debug_percentile is not None: - theta = torch.full_like(theta, (debug_percentile * 2 - 1) * np.pi * self.rotate_max) - G_inv = G_inv @ rotate2d_inv(-theta) # Before anisotropic scaling. - - # Apply anisotropic scaling with probability (aniso * strength). - if self.aniso > 0: - s = torch.exp2(torch.randn([batch_size], device=device) * self.aniso_std) - s = torch.where(torch.rand([batch_size], device=device) < self.aniso * self.p, s, torch.ones_like(s)) - if debug_percentile is not None: - s = torch.full_like(s, torch.exp2(torch.erfinv(debug_percentile * 2 - 1) * self.aniso_std)) - G_inv = G_inv @ scale2d_inv(s, 1 / s) - - # Apply post-rotation with probability p_rot. - if self.rotate > 0: - theta = (torch.rand([batch_size], device=device) * 2 - 1) * np.pi * self.rotate_max - theta = torch.where(torch.rand([batch_size], device=device) < p_rot, theta, torch.zeros_like(theta)) - if debug_percentile is not None: - theta = torch.zeros_like(theta) - G_inv = G_inv @ rotate2d_inv(-theta) # After anisotropic scaling. - - # Apply fractional translation with probability (xfrac * strength). - if self.xfrac > 0: - t = torch.randn([batch_size, 2], device=device) * self.xfrac_std - t = torch.where(torch.rand([batch_size, 1], device=device) < self.xfrac * self.p, t, torch.zeros_like(t)) - if debug_percentile is not None: - t = torch.full_like(t, torch.erfinv(debug_percentile * 2 - 1) * self.xfrac_std) - G_inv = G_inv @ translate2d_inv(t[:,0] * width, t[:,1] * height) - - # ---------------------------------- - # Execute geometric transformations. - # ---------------------------------- - - # Execute if the transform is not identity. - if G_inv is not I_3: - # Calculate padding. - cx = (width - 1) / 2 - cy = (height - 1) / 2 - cp = matrix([-cx, -cy, 1], [cx, -cy, 1], [cx, cy, 1], [-cx, cy, 1], device=device) # [idx, xyz] - cp = G_inv @ cp.t() # [batch, xyz, idx] - Hz_pad = self.Hz_geom.shape[0] // 4 - margin = cp[:, :2, :].permute(1, 0, 2).flatten(1) # [xy, batch * idx] - margin = torch.cat([-margin, margin]).max(dim=1).values # [x0, y0, x1, y1] - margin = margin + misc.constant([Hz_pad * 2 - cx, Hz_pad * 2 - cy] * 2, device=device) - margin = margin.max(misc.constant([0, 0] * 2, device=device)) - margin = margin.min(misc.constant([width-1, height-1] * 2, device=device)) - mx0, my0, mx1, my1 = margin.ceil().to(torch.int32) - - # Pad image and adjust origin. - images = torch.nn.functional.pad(input=images, pad=[mx0,mx1,my0,my1], mode='reflect') - batch["mask"] = torch.nn.functional.pad(input=batch["mask"], pad=[mx0,mx1,my0,my1], mode='constant', value=1.0) - batch["E_mask"] = torch.nn.functional.pad(input=batch["E_mask"], pad=[mx0,mx1,my0,my1], mode='constant', value=0.0) - batch["vertices"] = torch.nn.functional.pad(input=batch["vertices"], pad=[mx0,mx1,my0,my1], mode='constant', value=0.0) - G_inv = translate2d((mx0 - mx1) / 2, (my0 - my1) / 2) @ G_inv - - # Upsample. 
- images = upfirdn2d.upsample2d(x=images, f=self.Hz_geom, up=2) - batch["mask"] = torch.nn.functional.interpolate(batch["mask"], scale_factor=2, mode="nearest") - batch["E_mask"] = torch.nn.functional.interpolate(batch["E_mask"], scale_factor=2, mode="nearest") - batch["vertices"] = torch.nn.functional.interpolate(batch["vertices"], scale_factor=2, mode="nearest") - G_inv = scale2d(2, 2, device=device) @ G_inv @ scale2d_inv(2, 2, device=device) - G_inv = translate2d(-0.5, -0.5, device=device) @ G_inv @ translate2d_inv(-0.5, -0.5, device=device) - - # Execute transformation. - shape = [batch_size, num_channels, (height + Hz_pad * 2) * 2, (width + Hz_pad * 2) * 2] - G_inv = scale2d(2 / images.shape[3], 2 / images.shape[2], device=device) @ G_inv @ scale2d_inv(2 / shape[3], 2 / shape[2], device=device) - grid = torch.nn.functional.affine_grid(theta=G_inv[:,:2,:], size=shape, align_corners=False) - images = grid_sample_gradfix.grid_sample(images, grid) - - batch["mask"] = torch.nn.functional.grid_sample( - input=batch["mask"], grid=grid, mode='nearest', padding_mode="border", align_corners=False) - batch["E_mask"] = torch.nn.functional.grid_sample( - input=batch["E_mask"], grid=grid, mode='nearest', padding_mode="border", align_corners=False) - batch["vertices"] = torch.nn.functional.grid_sample( - input=batch["vertices"], grid=grid, mode='nearest', padding_mode="border", align_corners=False) - - - # Downsample and crop. - images = upfirdn2d.downsample2d(x=images, f=self.Hz_geom, down=2, padding=-Hz_pad*2, flip_filter=True) - batch["mask"] = torch.nn.functional.interpolate(batch["mask"][:, :, Hz_pad*2:-Hz_pad*2, Hz_pad*2:-Hz_pad*2], scale_factor=.5, mode="nearest", recompute_scale_factor=False) - batch["E_mask"] = torch.nn.functional.interpolate(batch["E_mask"][:, :, Hz_pad*2:-Hz_pad*2, Hz_pad*2:-Hz_pad*2], scale_factor=.5, mode="nearest", recompute_scale_factor=False) - batch["vertices"] = torch.nn.functional.interpolate(batch["vertices"][:, :, Hz_pad*2:-Hz_pad*2, Hz_pad*2:-Hz_pad*2], scale_factor=.5, mode="nearest", recompute_scale_factor=False) - # -------------------------------------------- - # Select parameters for color transformations. - # -------------------------------------------- - - # Initialize homogeneous 3D transformation matrix: C @ color_in ==> color_out - I_4 = torch.eye(4, device=device) - C = I_4 - - # Apply brightness with probability (brightness * strength). - if self.brightness > 0: - b = torch.randn([batch_size], device=device) * self.brightness_std - b = torch.where(torch.rand([batch_size], device=device) < self.brightness * self.p, b, torch.zeros_like(b)) - if debug_percentile is not None: - b = torch.full_like(b, torch.erfinv(debug_percentile * 2 - 1) * self.brightness_std) - C = translate3d(b, b, b) @ C - - # Apply contrast with probability (contrast * strength). - if self.contrast > 0: - c = torch.exp2(torch.randn([batch_size], device=device) * self.contrast_std) - c = torch.where(torch.rand([batch_size], device=device) < self.contrast * self.p, c, torch.ones_like(c)) - if debug_percentile is not None: - c = torch.full_like(c, torch.exp2(torch.erfinv(debug_percentile * 2 - 1) * self.contrast_std)) - C = scale3d(c, c, c) @ C - - # Apply luma flip with probability (lumaflip * strength). - v = misc.constant(np.asarray([1, 1, 1, 0]) / np.sqrt(3), device=device) # Luma axis. - - # Apply hue rotation with probability (hue * strength). 
- if self.hue > 0 and num_channels > 1: - theta = (torch.rand([batch_size], device=device) * 2 - 1) * np.pi * self.hue_max - theta = torch.where(torch.rand([batch_size], device=device) < self.hue * self.p, theta, torch.zeros_like(theta)) - if debug_percentile is not None: - theta = torch.full_like(theta, (debug_percentile * 2 - 1) * np.pi * self.hue_max) - C = rotate3d(v, theta) @ C # Rotate around v. - - # Apply saturation with probability (saturation * strength). - if self.saturation > 0 and num_channels > 1: - s = torch.exp2(torch.randn([batch_size, 1, 1], device=device) * self.saturation_std) - s = torch.where(torch.rand([batch_size, 1, 1], device=device) < self.saturation * self.p, s, torch.ones_like(s)) - if debug_percentile is not None: - s = torch.full_like(s, torch.exp2(torch.erfinv(debug_percentile * 2 - 1) * self.saturation_std)) - C = (v.ger(v) + (I_4 - v.ger(v)) * s) @ C - - # ------------------------------ - # Execute color transformations. - # ------------------------------ - - # Execute if the transform is not identity. - if C is not I_4: - images = images.reshape([batch_size, num_channels, height * width]) - if num_channels == 3: - images = C[:, :3, :3] @ images + C[:, :3, 3:] - elif num_channels == 1: - C = C[:, :3, :].mean(dim=1, keepdims=True) - images = images * C[:, :, :3].sum(dim=2, keepdims=True) + C[:, :, 3:] - else: - raise ValueError('Image must be RGB (3 channels) or L (1 channel)') - images = images.reshape([batch_size, num_channels, height, width]) - - # ---------------------- - # Image-space filtering. - # ---------------------- - - if self.imgfilter > 0: - num_bands = self.Hz_fbank.shape[0] - assert len(self.imgfilter_bands) == num_bands - expected_power = misc.constant(np.array([10, 1, 1, 1]) / 13, device=device) # Expected power spectrum (1/f). - - # Apply amplification for each band with probability (imgfilter * strength * band_strength). - g = torch.ones([batch_size, num_bands], device=device) # Global gain vector (identity). - for i, band_strength in enumerate(self.imgfilter_bands): - t_i = torch.exp2(torch.randn([batch_size], device=device) * self.imgfilter_std) - t_i = torch.where(torch.rand([batch_size], device=device) < self.imgfilter * self.p * band_strength, t_i, torch.ones_like(t_i)) - if debug_percentile is not None: - t_i = torch.full_like(t_i, torch.exp2(torch.erfinv(debug_percentile * 2 - 1) * self.imgfilter_std)) if band_strength > 0 else torch.ones_like(t_i) - t = torch.ones([batch_size, num_bands], device=device) # Temporary gain vector. - t[:, i] = t_i # Replace i'th element. - t = t / (expected_power * t.square()).sum(dim=-1, keepdims=True).sqrt() # Normalize power. - g = g * t # Accumulate into global gain. - - # Construct combined amplification filter. - Hz_prime = g @ self.Hz_fbank # [batch, tap] - Hz_prime = Hz_prime.unsqueeze(1).repeat([1, num_channels, 1]) # [batch, channels, tap] - Hz_prime = Hz_prime.reshape([batch_size * num_channels, 1, -1]) # [batch * channels, 1, tap] - - # Apply filter. - p = self.Hz_fbank.shape[1] // 2 - images = images.reshape([1, batch_size * num_channels, height, width]) - images = torch.nn.functional.pad(input=images, pad=[p,p,p,p], mode='reflect') - images = conv2d_gradfix.conv2d(input=images, weight=Hz_prime.unsqueeze(2), groups=batch_size*num_channels) - images = conv2d_gradfix.conv2d(input=images, weight=Hz_prime.unsqueeze(3), groups=batch_size*num_channels) - images = images.reshape([batch_size, num_channels, height, width]) - - # ------------------------ - # Image-space corruptions. 
- # ------------------------ - batch["img"] = images - batch["vertices"] = batch["vertices"].long() - batch["border"] = 1 - batch["E_mask"] - batch["mask"] - return batch diff --git a/spaces/hands012/gpt-academic/request_llm/edge_gpt_free.py b/spaces/hands012/gpt-academic/request_llm/edge_gpt_free.py deleted file mode 100644 index ef6187379c470b0f325d50d7642cfc95b933f1ef..0000000000000000000000000000000000000000 --- a/spaces/hands012/gpt-academic/request_llm/edge_gpt_free.py +++ /dev/null @@ -1,1112 +0,0 @@ -""" -======================================================================== -第一部分:来自EdgeGPT.py -https://github.com/acheong08/EdgeGPT -======================================================================== -""" -""" -Main.py -""" - -import argparse -import asyncio -import json -import os -import random -import re -import ssl -import sys -import time -import uuid -from enum import Enum -from pathlib import Path -from typing import Generator -from typing import Literal -from typing import Optional -from typing import Union - -import aiohttp -import certifi -import httpx -from prompt_toolkit import PromptSession -from prompt_toolkit.auto_suggest import AutoSuggestFromHistory -from prompt_toolkit.completion import WordCompleter -from prompt_toolkit.history import InMemoryHistory -from prompt_toolkit.key_binding import KeyBindings -from rich.live import Live -from rich.markdown import Markdown - -DELIMITER = "\x1e" - - -# Generate random IP between range 13.104.0.0/14 -FORWARDED_IP = ( - f"13.{random.randint(104, 107)}.{random.randint(0, 255)}.{random.randint(0, 255)}" -) - -HEADERS = { - "accept": "application/json", - "accept-language": "en-US,en;q=0.9", - "content-type": "application/json", - "sec-ch-ua": '"Not_A Brand";v="99", "Microsoft Edge";v="110", "Chromium";v="110"', - "sec-ch-ua-arch": '"x86"', - "sec-ch-ua-bitness": '"64"', - "sec-ch-ua-full-version": '"109.0.1518.78"', - "sec-ch-ua-full-version-list": '"Chromium";v="110.0.5481.192", "Not A(Brand";v="24.0.0.0", "Microsoft Edge";v="110.0.1587.69"', - "sec-ch-ua-mobile": "?0", - "sec-ch-ua-model": "", - "sec-ch-ua-platform": '"Windows"', - "sec-ch-ua-platform-version": '"15.0.0"', - "sec-fetch-dest": "empty", - "sec-fetch-mode": "cors", - "sec-fetch-site": "same-origin", - "x-ms-client-request-id": str(uuid.uuid4()), - "x-ms-useragent": "azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32", - "Referer": "https://www.bing.com/search?q=Bing+AI&showconv=1&FORM=hpcodx", - "Referrer-Policy": "origin-when-cross-origin", - "x-forwarded-for": FORWARDED_IP, -} - -HEADERS_INIT_CONVER = { - "authority": "edgeservices.bing.com", - "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7", - "accept-language": "en-US,en;q=0.9", - "cache-control": "max-age=0", - "sec-ch-ua": '"Chromium";v="110", "Not A(Brand";v="24", "Microsoft Edge";v="110"', - "sec-ch-ua-arch": '"x86"', - "sec-ch-ua-bitness": '"64"', - "sec-ch-ua-full-version": '"110.0.1587.69"', - "sec-ch-ua-full-version-list": '"Chromium";v="110.0.5481.192", "Not A(Brand";v="24.0.0.0", "Microsoft Edge";v="110.0.1587.69"', - "sec-ch-ua-mobile": "?0", - "sec-ch-ua-model": '""', - "sec-ch-ua-platform": '"Windows"', - "sec-ch-ua-platform-version": '"15.0.0"', - "sec-fetch-dest": "document", - "sec-fetch-mode": "navigate", - "sec-fetch-site": "none", - "sec-fetch-user": "?1", - "upgrade-insecure-requests": "1", - "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 
(KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36 Edg/110.0.1587.69", - "x-edge-shopping-flag": "1", - "x-forwarded-for": FORWARDED_IP, -} - -ssl_context = ssl.create_default_context() -ssl_context.load_verify_locations(certifi.where()) - - -class NotAllowedToAccess(Exception): - pass - - -class ConversationStyle(Enum): - creative = [ - "nlu_direct_response_filter", - "deepleo", - "disable_emoji_spoken_text", - "responsible_ai_policy_235", - "enablemm", - "h3imaginative", - "travelansgnd", - "dv3sugg", - "clgalileo", - "gencontentv3", - "dv3sugg", - "responseos", - "e2ecachewrite", - "cachewriteext", - "nodlcpcwrite", - "travelansgnd", - "nojbfedge", - ] - balanced = [ - "nlu_direct_response_filter", - "deepleo", - "disable_emoji_spoken_text", - "responsible_ai_policy_235", - "enablemm", - "galileo", - "dv3sugg", - "responseos", - "e2ecachewrite", - "cachewriteext", - "nodlcpcwrite", - "travelansgnd", - "nojbfedge", - ] - precise = [ - "nlu_direct_response_filter", - "deepleo", - "disable_emoji_spoken_text", - "responsible_ai_policy_235", - "enablemm", - "galileo", - "dv3sugg", - "responseos", - "e2ecachewrite", - "cachewriteext", - "nodlcpcwrite", - "travelansgnd", - "h3precise", - "clgalileo", - "nojbfedge", - ] - - -CONVERSATION_STYLE_TYPE = Optional[ - Union[ConversationStyle, Literal["creative", "balanced", "precise"]] -] - - -def _append_identifier(msg: dict) -> str: - """ - Appends special character to end of message to identify end of message - """ - # Convert dict to json string - return json.dumps(msg, ensure_ascii=False) + DELIMITER - - -def _get_ran_hex(length: int = 32) -> str: - """ - Returns random hex string - """ - return "".join(random.choice("0123456789abcdef") for _ in range(length)) - - -class _ChatHubRequest: - """ - Request object for ChatHub - """ - - def __init__( - self, - conversation_signature: str, - client_id: str, - conversation_id: str, - invocation_id: int = 0, - ) -> None: - self.struct: dict = {} - - self.client_id: str = client_id - self.conversation_id: str = conversation_id - self.conversation_signature: str = conversation_signature - self.invocation_id: int = invocation_id - - def update( - self, - prompt: str, - conversation_style: CONVERSATION_STYLE_TYPE, - options = None, - webpage_context = None, - search_result = False, - ) -> None: - """ - Updates request object - """ - if options is None: - options = [ - "deepleo", - "enable_debug_commands", - "disable_emoji_spoken_text", - "enablemm", - ] - if conversation_style: - if not isinstance(conversation_style, ConversationStyle): - conversation_style = getattr(ConversationStyle, conversation_style) - options = conversation_style.value - self.struct = { - "arguments": [ - { - "source": "cib", - "optionsSets": options, - "allowedMessageTypes": [ - "Chat", - "Disengaged", - "AdsQuery", - "SemanticSerp", - "GenerateContentQuery", - "SearchQuery", - ], - "sliceIds": [ - "chk1cf", - "nopreloadsscf", - "winlongmsg2tf", - "perfimpcomb", - "sugdivdis", - "sydnoinputt", - "wpcssopt", - "wintone2tf", - "0404sydicnbs0", - "405suggbs0", - "scctl", - "330uaugs0", - "0329resp", - "udscahrfon", - "udstrblm5", - "404e2ewrt", - "408nodedups0", - "403tvlansgnd", - ], - "traceId": _get_ran_hex(32), - "isStartOfSession": self.invocation_id == 0, - "message": { - "author": "user", - "inputMethod": "Keyboard", - "text": prompt, - "messageType": "Chat", - }, - "conversationSignature": self.conversation_signature, - "participant": { - "id": self.client_id, - }, - "conversationId": self.conversation_id, - }, - ], - 
"invocationId": str(self.invocation_id), - "target": "chat", - "type": 4, - } - if search_result: - have_search_result = [ - "InternalSearchQuery", - "InternalSearchResult", - "InternalLoaderMessage", - "RenderCardRequest", - ] - self.struct["arguments"][0]["allowedMessageTypes"] += have_search_result - if webpage_context: - self.struct["arguments"][0]["previousMessages"] = [ - { - "author": "user", - "description": webpage_context, - "contextType": "WebPage", - "messageType": "Context", - "messageId": "discover-web--page-ping-mriduna-----", - }, - ] - self.invocation_id += 1 - - -class _Conversation: - """ - Conversation API - """ - - def __init__( - self, - proxy = None, - async_mode = False, - cookies = None, - ) -> None: - if async_mode: - return - self.struct: dict = { - "conversationId": None, - "clientId": None, - "conversationSignature": None, - "result": {"value": "Success", "message": None}, - } - self.proxy = proxy - proxy = ( - proxy - or os.environ.get("all_proxy") - or os.environ.get("ALL_PROXY") - or os.environ.get("https_proxy") - or os.environ.get("HTTPS_PROXY") - or None - ) - if proxy is not None and proxy.startswith("socks5h://"): - proxy = "socks5://" + proxy[len("socks5h://") :] - self.session = httpx.Client( - proxies=proxy, - timeout=30, - headers=HEADERS_INIT_CONVER, - ) - if cookies: - for cookie in cookies: - self.session.cookies.set(cookie["name"], cookie["value"]) - # Send GET request - response = self.session.get( - url=os.environ.get("BING_PROXY_URL") - or "https://edgeservices.bing.com/edgesvc/turing/conversation/create", - ) - if response.status_code != 200: - response = self.session.get( - "https://edge.churchless.tech/edgesvc/turing/conversation/create", - ) - if response.status_code != 200: - print(f"Status code: {response.status_code}") - print(response.text) - print(response.url) - raise Exception("Authentication failed") - try: - self.struct = response.json() - except (json.decoder.JSONDecodeError, NotAllowedToAccess) as exc: - raise Exception( - "Authentication failed. 
You have not been accepted into the beta.", - ) from exc - if self.struct["result"]["value"] == "UnauthorizedRequest": - raise NotAllowedToAccess(self.struct["result"]["message"]) - - @staticmethod - async def create( - proxy = None, - cookies = None, - ): - self = _Conversation(async_mode=True) - self.struct = { - "conversationId": None, - "clientId": None, - "conversationSignature": None, - "result": {"value": "Success", "message": None}, - } - self.proxy = proxy - proxy = ( - proxy - or os.environ.get("all_proxy") - or os.environ.get("ALL_PROXY") - or os.environ.get("https_proxy") - or os.environ.get("HTTPS_PROXY") - or None - ) - if proxy is not None and proxy.startswith("socks5h://"): - proxy = "socks5://" + proxy[len("socks5h://") :] - transport = httpx.AsyncHTTPTransport(retries=10) - # Convert cookie format to httpx format - formatted_cookies = None - if cookies: - formatted_cookies = httpx.Cookies() - for cookie in cookies: - formatted_cookies.set(cookie["name"], cookie["value"]) - async with httpx.AsyncClient( - proxies=proxy, - timeout=30, - headers=HEADERS_INIT_CONVER, - transport=transport, - cookies=formatted_cookies, - ) as client: - # Send GET request - response = await client.get( - url=os.environ.get("BING_PROXY_URL") - or "https://edgeservices.bing.com/edgesvc/turing/conversation/create", - ) - if response.status_code != 200: - response = await client.get( - "https://edge.churchless.tech/edgesvc/turing/conversation/create", - ) - if response.status_code != 200: - print(f"Status code: {response.status_code}") - print(response.text) - print(response.url) - raise Exception("Authentication failed") - try: - self.struct = response.json() - except (json.decoder.JSONDecodeError, NotAllowedToAccess) as exc: - raise Exception( - "Authentication failed. 
You have not been accepted into the beta.", - ) from exc - if self.struct["result"]["value"] == "UnauthorizedRequest": - raise NotAllowedToAccess(self.struct["result"]["message"]) - return self - - -class _ChatHub: - """ - Chat API - """ - - def __init__( - self, - conversation: _Conversation, - proxy = None, - cookies = None, - ) -> None: - self.session = None - self.wss = None - self.request: _ChatHubRequest - self.loop: bool - self.task: asyncio.Task - self.request = _ChatHubRequest( - conversation_signature=conversation.struct["conversationSignature"], - client_id=conversation.struct["clientId"], - conversation_id=conversation.struct["conversationId"], - ) - self.cookies = cookies - self.proxy: str = proxy - - async def ask_stream( - self, - prompt: str, - wss_link: str, - conversation_style: CONVERSATION_STYLE_TYPE = None, - raw: bool = False, - options: dict = None, - webpage_context = None, - search_result: bool = False, - ) -> Generator[str, None, None]: - """ - Ask a question to the bot - """ - timeout = aiohttp.ClientTimeout(total=30) - self.session = aiohttp.ClientSession(timeout=timeout) - - if self.wss and not self.wss.closed: - await self.wss.close() - # Check if websocket is closed - self.wss = await self.session.ws_connect( - wss_link, - headers=HEADERS, - ssl=ssl_context, - proxy=self.proxy, - autoping=False, - ) - await self._initial_handshake() - if self.request.invocation_id == 0: - # Construct a ChatHub request - self.request.update( - prompt=prompt, - conversation_style=conversation_style, - options=options, - webpage_context=webpage_context, - search_result=search_result, - ) - else: - async with httpx.AsyncClient() as client: - response = await client.post( - "https://sydney.bing.com/sydney/UpdateConversation/", - json={ - "messages": [ - { - "author": "user", - "description": webpage_context, - "contextType": "WebPage", - "messageType": "Context", - }, - ], - "conversationId": self.request.conversation_id, - "source": "cib", - "traceId": _get_ran_hex(32), - "participant": {"id": self.request.client_id}, - "conversationSignature": self.request.conversation_signature, - }, - ) - if response.status_code != 200: - print(f"Status code: {response.status_code}") - print(response.text) - print(response.url) - raise Exception("Update web page context failed") - # Construct a ChatHub request - self.request.update( - prompt=prompt, - conversation_style=conversation_style, - options=options, - ) - # Send request - await self.wss.send_str(_append_identifier(self.request.struct)) - final = False - draw = False - resp_txt = "" - result_text = "" - resp_txt_no_link = "" - while not final: - msg = await self.wss.receive() - objects = msg.data.split(DELIMITER) - for obj in objects: - if obj is None or not obj: - continue - response = json.loads(obj) - if response.get("type") != 2 and raw: - yield False, response - elif response.get("type") == 1 and response["arguments"][0].get( - "messages", - ): - if not draw: - if ( - response["arguments"][0]["messages"][0].get("messageType") - == "GenerateContentQuery" - ): - async with ImageGenAsync("", True) as image_generator: - images = await image_generator.get_images( - response["arguments"][0]["messages"][0]["text"], - ) - for i, image in enumerate(images): - resp_txt = resp_txt + f"\n![image{i}]({image})" - draw = True - if ( - response["arguments"][0]["messages"][0]["contentOrigin"] - != "Apology" - ) and not draw: - resp_txt = result_text + response["arguments"][0][ - "messages" - ][0]["adaptiveCards"][0]["body"][0].get("text", "") - 
resp_txt_no_link = result_text + response["arguments"][0][ - "messages" - ][0].get("text", "") - if response["arguments"][0]["messages"][0].get( - "messageType", - ): - resp_txt = ( - resp_txt - + response["arguments"][0]["messages"][0][ - "adaptiveCards" - ][0]["body"][0]["inlines"][0].get("text") - + "\n" - ) - result_text = ( - result_text - + response["arguments"][0]["messages"][0][ - "adaptiveCards" - ][0]["body"][0]["inlines"][0].get("text") - + "\n" - ) - yield False, resp_txt - - elif response.get("type") == 2: - if response["item"]["result"].get("error"): - await self.close() - raise Exception( - f"{response['item']['result']['value']}: {response['item']['result']['message']}", - ) - if draw: - cache = response["item"]["messages"][1]["adaptiveCards"][0][ - "body" - ][0]["text"] - response["item"]["messages"][1]["adaptiveCards"][0]["body"][0][ - "text" - ] = (cache + resp_txt) - if ( - response["item"]["messages"][-1]["contentOrigin"] == "Apology" - and resp_txt - ): - response["item"]["messages"][-1]["text"] = resp_txt_no_link - response["item"]["messages"][-1]["adaptiveCards"][0]["body"][0][ - "text" - ] = resp_txt - print( - "Preserved the message from being deleted", - file=sys.stderr, - ) - final = True - await self.close() - yield True, response - - async def _initial_handshake(self) -> None: - await self.wss.send_str(_append_identifier({"protocol": "json", "version": 1})) - await self.wss.receive() - - async def close(self) -> None: - """ - Close the connection - """ - if self.wss and not self.wss.closed: - await self.wss.close() - if self.session and not self.session.closed: - await self.session.close() - - -class Chatbot: - """ - Combines everything to make it seamless - """ - - def __init__( - self, - proxy = None, - cookies = None, - ) -> None: - self.proxy = proxy - self.chat_hub: _ChatHub = _ChatHub( - _Conversation(self.proxy, cookies=cookies), - proxy=self.proxy, - cookies=cookies, - ) - - @staticmethod - async def create( - proxy = None, - cookies = None, - ): - self = Chatbot.__new__(Chatbot) - self.proxy = proxy - self.chat_hub = _ChatHub( - await _Conversation.create(self.proxy, cookies=cookies), - proxy=self.proxy, - cookies=cookies, - ) - return self - - async def ask( - self, - prompt: str, - wss_link: str = "wss://sydney.bing.com/sydney/ChatHub", - conversation_style: CONVERSATION_STYLE_TYPE = None, - options: dict = None, - webpage_context = None, - search_result: bool = False, - ) -> dict: - """ - Ask a question to the bot - """ - async for final, response in self.chat_hub.ask_stream( - prompt=prompt, - conversation_style=conversation_style, - wss_link=wss_link, - options=options, - webpage_context=webpage_context, - search_result=search_result, - ): - if final: - return response - await self.chat_hub.wss.close() - return {} - - async def ask_stream( - self, - prompt: str, - wss_link: str = "wss://sydney.bing.com/sydney/ChatHub", - conversation_style: CONVERSATION_STYLE_TYPE = None, - raw: bool = False, - options: dict = None, - webpage_context = None, - search_result: bool = False, - ) -> Generator[str, None, None]: - """ - Ask a question to the bot - """ - async for response in self.chat_hub.ask_stream( - prompt=prompt, - conversation_style=conversation_style, - wss_link=wss_link, - raw=raw, - options=options, - webpage_context=webpage_context, - search_result=search_result, - ): - yield response - - async def close(self) -> None: - """ - Close the connection - """ - await self.chat_hub.close() - - async def reset(self) -> None: - """ - Reset the 
conversation - """ - await self.close() - self.chat_hub = _ChatHub( - await _Conversation.create(self.proxy), - proxy=self.proxy, - cookies=self.chat_hub.cookies, - ) - - -async def _get_input_async( - session: PromptSession = None, - completer: WordCompleter = None, -) -> str: - """ - Multiline input function. - """ - return await session.prompt_async( - completer=completer, - multiline=True, - auto_suggest=AutoSuggestFromHistory(), - ) - - -def _create_session() -> PromptSession: - kb = KeyBindings() - - @kb.add("enter") - def _(event): - buffer_text = event.current_buffer.text - if buffer_text.startswith("!"): - event.current_buffer.validate_and_handle() - else: - event.current_buffer.insert_text("\n") - - @kb.add("escape") - def _(event): - if event.current_buffer.complete_state: - # event.current_buffer.cancel_completion() - event.current_buffer.text = "" - - return PromptSession(key_bindings=kb, history=InMemoryHistory()) - - -def _create_completer(commands: list, pattern_str: str = "$"): - return WordCompleter(words=commands, pattern=re.compile(pattern_str)) - - -async def async_main(args: argparse.Namespace) -> None: - """ - Main function - """ - print("Initializing...") - print("Enter `alt+enter` or `escape+enter` to send a message") - # Read and parse cookies - cookies = None - if args.cookie_file: - cookies = json.loads(open(args.cookie_file, encoding="utf-8").read()) - bot = await Chatbot.create(proxy=args.proxy, cookies=cookies) - session = _create_session() - completer = _create_completer(["!help", "!exit", "!reset"]) - initial_prompt = args.prompt - - while True: - print("\nYou:") - if initial_prompt: - question = initial_prompt - print(question) - initial_prompt = None - else: - question = ( - input() - if args.enter_once - else await _get_input_async(session=session, completer=completer) - ) - print() - if question == "!exit": - break - if question == "!help": - print( - """ - !help - Show this help message - !exit - Exit the program - !reset - Reset the conversation - """, - ) - continue - if question == "!reset": - await bot.reset() - continue - print("Bot:") - if args.no_stream: - print( - ( - await bot.ask( - prompt=question, - conversation_style=args.style, - wss_link=args.wss_link, - ) - )["item"]["messages"][1]["adaptiveCards"][0]["body"][0]["text"], - ) - else: - wrote = 0 - if args.rich: - md = Markdown("") - with Live(md, auto_refresh=False) as live: - async for final, response in bot.ask_stream( - prompt=question, - conversation_style=args.style, - wss_link=args.wss_link, - ): - if not final: - if wrote > len(response): - print(md) - print(Markdown("***Bing revoked the response.***")) - wrote = len(response) - md = Markdown(response) - live.update(md, refresh=True) - else: - async for final, response in bot.ask_stream( - prompt=question, - conversation_style=args.style, - wss_link=args.wss_link, - ): - if not final: - if not wrote: - print(response, end="", flush=True) - else: - print(response[wrote:], end="", flush=True) - wrote = len(response) - print() - await bot.close() - - -def main() -> None: - print( - """ - EdgeGPT - A demo of reverse engineering the Bing GPT chatbot - Repo: github.com/acheong08/EdgeGPT - By: Antonio Cheong - - !help for help - - Type !exit to exit - """, - ) - parser = argparse.ArgumentParser() - parser.add_argument("--enter-once", action="store_true") - parser.add_argument("--no-stream", action="store_true") - parser.add_argument("--rich", action="store_true") - parser.add_argument( - "--proxy", - help="Proxy URL (e.g. 
socks5://127.0.0.1:1080)", - type=str, - ) - parser.add_argument( - "--wss-link", - help="WSS URL(e.g. wss://sydney.bing.com/sydney/ChatHub)", - type=str, - default="wss://sydney.bing.com/sydney/ChatHub", - ) - parser.add_argument( - "--style", - choices=["creative", "balanced", "precise"], - default="balanced", - ) - parser.add_argument( - "--prompt", - type=str, - default="", - required=False, - help="prompt to start with", - ) - parser.add_argument( - "--cookie-file", - type=str, - default="", - required=False, - help="path to cookie file", - ) - args = parser.parse_args() - asyncio.run(async_main(args)) - - -class Cookie: - """ - Convenience class for Bing Cookie files, data, and configuration. This Class - is updated dynamically by the Query class to allow cycling through >1 - cookie/credentials file e.g. when daily request limits (current 200 per - account per day) are exceeded. - """ - - current_file_index = 0 - dirpath = Path("./").resolve() - search_pattern = "bing_cookies_*.json" - ignore_files = set() - - @classmethod - def fetch_default(cls, path=None): - from selenium import webdriver - from selenium.webdriver.common.by import By - - driver = webdriver.Edge() - driver.get("https://bing.com/chat") - time.sleep(5) - xpath = '//button[@id="bnp_btn_accept"]' - driver.find_element(By.XPATH, xpath).click() - time.sleep(2) - xpath = '//a[@id="codexPrimaryButton"]' - driver.find_element(By.XPATH, xpath).click() - if path is None: - path = Path("./bing_cookies__default.json") - # Double underscore ensures this file is first when sorted - cookies = driver.get_cookies() - Path(path).write_text(json.dumps(cookies, indent=4), encoding="utf-8") - # Path again in case supplied path is: str - print(f"Cookies saved to: {path}") - driver.quit() - - @classmethod - def files(cls): - """Return a sorted list of all cookie files matching .search_pattern""" - all_files = set(cls.dirpath.glob(cls.search_pattern)) - return sorted(list(all_files - cls.ignore_files)) - - @classmethod - def import_data(cls): - """ - Read the active cookie file and populate the following attributes: - - .current_filepath - .current_data - .image_token - """ - try: - cls.current_filepath = cls.files()[cls.current_file_index] - except IndexError: - print( - "> Please set Cookie.current_filepath to a valid cookie file, then run Cookie.import_data()", - ) - return - print(f"> Importing cookies from: {cls.current_filepath.name}") - with open(cls.current_filepath, encoding="utf-8") as file: - cls.current_data = json.load(file) - cls.image_token = [x for x in cls.current_data if x.get("name") == "_U"] - cls.image_token = cls.image_token[0].get("value") - - @classmethod - def import_next(cls): - """ - Cycle through to the next cookies file. Import it. Mark the previous - file to be ignored for the remainder of the current session. - """ - cls.ignore_files.add(cls.current_filepath) - if Cookie.current_file_index >= len(cls.files()): - Cookie.current_file_index = 0 - Cookie.import_data() - - -class Query: - """ - A convenience class that wraps around EdgeGPT.Chatbot to encapsulate input, - config, and output all together. 
Relies on Cookie class for authentication - """ - - def __init__( - self, - prompt, - style="precise", - content_type="text", - cookie_file=0, - echo=True, - echo_prompt=False, - ): - """ - Arguments: - - prompt: Text to enter into Bing Chat - style: creative, balanced, or precise - content_type: "text" for Bing Chat; "image" for Dall-e - cookie_file: Path, filepath string, or index (int) to list of cookie paths - echo: Print something to confirm request made - echo_prompt: Print confirmation of the evaluated prompt - """ - self.index = [] - self.request_count = {} - self.image_dirpath = Path("./").resolve() - Cookie.import_data() - self.index += [self] - self.prompt = prompt - files = Cookie.files() - if isinstance(cookie_file, int): - index = cookie_file if cookie_file < len(files) else 0 - else: - if not isinstance(cookie_file, (str, Path)): - message = "'cookie_file' must be an int, str, or Path object" - raise TypeError(message) - cookie_file = Path(cookie_file) - if cookie_file in files(): # Supplied filepath IS in Cookie.dirpath - index = files.index(cookie_file) - else: # Supplied filepath is NOT in Cookie.dirpath - if cookie_file.is_file(): - Cookie.dirpath = cookie_file.parent.resolve() - if cookie_file.is_dir(): - Cookie.dirpath = cookie_file.resolve() - index = 0 - Cookie.current_file_index = index - if content_type == "text": - self.style = style - self.log_and_send_query(echo, echo_prompt) - if content_type == "image": - self.create_image() - - def log_and_send_query(self, echo, echo_prompt): - self.response = asyncio.run(self.send_to_bing(echo, echo_prompt)) - name = str(Cookie.current_filepath.name) - if not self.request_count.get(name): - self.request_count[name] = 1 - else: - self.request_count[name] += 1 - - def create_image(self): - image_generator = ImageGen(Cookie.image_token) - image_generator.save_images( - image_generator.get_images(self.prompt), - output_dir=self.image_dirpath, - ) - - async def send_to_bing(self, echo=True, echo_prompt=False): - """Creat, submit, then close a Chatbot instance. Return the response""" - retries = len(Cookie.files()) - while retries: - try: - bot = await Chatbot.create() - if echo_prompt: - print(f"> {self.prompt=}") - if echo: - print("> Waiting for response...") - if self.style.lower() not in "creative balanced precise".split(): - self.style = "precise" - response = await bot.ask( - prompt=self.prompt, - conversation_style=getattr(ConversationStyle, self.style), - # wss_link="wss://sydney.bing.com/sydney/ChatHub" - # What other values can this parameter take? 
It seems to be optional - ) - return response - except KeyError: - print( - f"> KeyError [{Cookie.current_filepath.name} may have exceeded the daily limit]", - ) - Cookie.import_next() - retries -= 1 - finally: - await bot.close() - - @property - def output(self): - """The response from a completed Chatbot request""" - return self.response["item"]["messages"][1]["text"] - - @property - def sources(self): - """The source names and details parsed from a completed Chatbot request""" - return self.response["item"]["messages"][1]["sourceAttributions"] - - @property - def sources_dict(self): - """The source names and details as a dictionary""" - sources_dict = {} - name = "providerDisplayName" - url = "seeMoreUrl" - for source in self.sources: - if name in source.keys() and url in source.keys(): - sources_dict[source[name]] = source[url] - else: - continue - return sources_dict - - @property - def code(self): - """Extract and join any snippets of Python code in the response""" - code_blocks = self.output.split("```")[1:-1:2] - code_blocks = ["\n".join(x.splitlines()[1:]) for x in code_blocks] - return "\n\n".join(code_blocks) - - @property - def languages(self): - """Extract all programming languages given in code blocks""" - code_blocks = self.output.split("```")[1:-1:2] - return {x.splitlines()[0] for x in code_blocks} - - @property - def suggestions(self): - """Follow-on questions suggested by the Chatbot""" - return [ - x["text"] - for x in self.response["item"]["messages"][1]["suggestedResponses"] - ] - - def __repr__(self): - return f"" - - def __str__(self): - return self.output - - -class ImageQuery(Query): - def __init__(self, prompt, **kwargs): - kwargs.update({"content_type": "image"}) - super().__init__(prompt, **kwargs) - - def __repr__(self): - return f"" - - -if __name__ == "__main__": - main() \ No newline at end of file diff --git a/spaces/hlydecker/ImageBind_zeroshot_demo/CONTRIBUTING.md b/spaces/hlydecker/ImageBind_zeroshot_demo/CONTRIBUTING.md deleted file mode 100644 index 63d0b751e8a00b606ddff92e2524faa3c90a63b0..0000000000000000000000000000000000000000 --- a/spaces/hlydecker/ImageBind_zeroshot_demo/CONTRIBUTING.md +++ /dev/null @@ -1,31 +0,0 @@ -# Contributing to ImageBind -We want to make contributing to this project as easy and transparent as -possible. - -## Pull Requests -We actively welcome your pull requests. - -1. Fork the repo and create your branch from `main`. -2. If you've added code that should be tested, add tests. -3. If you've changed APIs, update the documentation. -4. Ensure the test suite passes. -5. Make sure your code lints. -6. If you haven't already, complete the Contributor License Agreement ("CLA"). - -## Contributor License Agreement ("CLA") -In order to accept your pull request, we need you to submit a CLA. You only need -to do this once to work on any of Meta's open source projects. - -Complete your CLA here: - -## Issues -We use GitHub issues to track public bugs. Please ensure your description is -clear and has sufficient instructions to be able to reproduce the issue. - -Meta has a [bounty program](https://www.facebook.com/whitehat/) for the safe -disclosure of security bugs. In those cases, please go through the process -outlined on that page and do not file a public issue. - -## License -By contributing to Omnivore, you agree that your contributions will be licensed -under the [LICENSE](LICENSE) file in the root directory of this source tree. 
diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/network_training/nnUNet_variants/loss_function/nnUNetTrainerV2_graduallyTransitionFromCEToDice.py b/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/network_training/nnUNet_variants/loss_function/nnUNetTrainerV2_graduallyTransitionFromCEToDice.py deleted file mode 100644 index 77159b6bea28d166eb73d2431f1c34c5d0c50189..0000000000000000000000000000000000000000 --- a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/network_training/nnUNet_variants/loss_function/nnUNetTrainerV2_graduallyTransitionFromCEToDice.py +++ /dev/null @@ -1,58 +0,0 @@ -# Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - - -from nnunet.training.loss_functions.deep_supervision import MultipleOutputLoss2 -from nnunet.training.loss_functions.dice_loss import DC_and_CE_loss -from nnunet.training.network_training.nnUNetTrainerV2 import nnUNetTrainerV2 - - -class nnUNetTrainerV2_graduallyTransitionFromCEToDice(nnUNetTrainerV2): - def __init__(self, plans_file, fold, output_folder=None, dataset_directory=None, batch_dice=True, stage=None, - unpack_data=True, deterministic=True, fp16=False): - super().__init__(plans_file, fold, output_folder, dataset_directory, batch_dice, stage, unpack_data, - deterministic, fp16) - self.loss = DC_and_CE_loss({'batch_dice': self.batch_dice, 'smooth': 1e-5, 'do_bg': False}, {}, weight_ce=2, weight_dice=0) - - def update_loss(self): - # we train the first 500 epochs with CE, then transition to Dice between 500 and 750. 
The last 250 epochs will be Dice only - - if self.epoch <= 500: - weight_ce = 2 - weight_dice = 0 - elif 500 < self.epoch <= 750: - weight_ce = 2 - 2 / 250 * (self.epoch - 500) - weight_dice = 0 + 2 / 250 * (self.epoch - 500) - elif 750 < self.epoch <= self.max_num_epochs: - weight_ce = 0 - weight_dice = 2 - else: - raise RuntimeError("Invalid epoch: %d" % self.epoch) - - self.print_to_log_file("weight ce", weight_ce, "weight dice", weight_dice) - - self.loss = DC_and_CE_loss({'batch_dice': self.batch_dice, 'smooth': 1e-5, 'do_bg': False}, {}, weight_ce=weight_ce, - weight_dice=weight_dice) - - self.loss = MultipleOutputLoss2(self.loss, self.ds_loss_weights) - - def on_epoch_end(self): - ret = super().on_epoch_end() - self.update_loss() - return ret - - def load_checkpoint_ram(self, checkpoint, train=True): - ret = super().load_checkpoint_ram(checkpoint, train) - self.update_loss() - return ret diff --git a/spaces/huazhao/QQsign/README.md b/spaces/huazhao/QQsign/README.md deleted file mode 100644 index bd56881a2a7709591343e2f15af9a6a8133e115b..0000000000000000000000000000000000000000 --- a/spaces/huazhao/QQsign/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: QQsign -emoji: 🦀 -colorFrom: blue -colorTo: purple -sdk: docker -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/hussain-shk/IndiSent/legacy/run_joint_inference.sh b/spaces/hussain-shk/IndiSent/legacy/run_joint_inference.sh deleted file mode 100644 index bf4668c9ecb6b1a1ef9b9b7871c6ee22d7865c0b..0000000000000000000000000000000000000000 --- a/spaces/hussain-shk/IndiSent/legacy/run_joint_inference.sh +++ /dev/null @@ -1,74 +0,0 @@ -src_lang=${1:-en} -tgt_lang=${2:-indic} -bucket_path=${3:-gs://ai4b-anuvaad-nmt/models/transformer-4x/indictrans-${src_lang}-${tgt_lang}} - -mkdir -p ../baselines -expdir=../baselines/baselines-${src_lang}-${tgt_lang} - -if [[ -d $expdir ]] -then - echo "$expdir exists on your filesystem." -else - cd ../baselines - mkdir -p baselines-${src_lang}-${tgt_lang}/model - mkdir -p baselines-${src_lang}-${tgt_lang}/final_bin - cd baselines-${src_lang}-${tgt_lang}/model - gsutil -m cp $bucket_path/model/checkpoint_best.pt . - cd .. - gsutil -m cp $bucket_path/vocab . 
- gsutil -m cp $bucket_path/final_bin/dict.* final_bin - cd ../indicTrans -fi - - - - - -if [ $src_lang == 'hi' ] || [ $tgt_lang == 'hi' ]; then - TEST_SETS=( wmt-news wat2021-devtest wat2020-devtest anuvaad-legal tico19 sap-documentation-benchmark all) -elif [ $src_lang == 'ta' ] || [ $tgt_lang == 'ta' ]; then - TEST_SETS=( wmt-news wat2021-devtest wat2020-devtest anuvaad-legal tico19 all) -elif [ $src_lang == 'bn' ] || [ $tgt_lang == 'bn' ]; then - TEST_SETS=( wat2021-devtest wat2020-devtest anuvaad-legal tico19 all) -elif [ $src_lang == 'gu' ] || [ $tgt_lang == 'gu' ]; then - TEST_SETS=( wmt-news wat2021-devtest wat2020-devtest all) -elif [ $src_lang == 'as' ] || [ $tgt_lang == 'as' ]; then - TEST_SETS=( all ) -elif [ $src_lang == 'kn' ] || [ $tgt_lang == 'kn' ]; then - TEST_SETS=( wat2021-devtest anuvaad-legal all) -elif [ $src_lang == 'ml' ] || [ $tgt_lang == 'ml' ]; then - TEST_SETS=( wat2021-devtest wat2020-devtest anuvaad-legal all) -elif [ $src_lang == 'mr' ] || [ $tgt_lang == 'mr' ]; then - TEST_SETS=( wat2021-devtest wat2020-devtest all) -elif [ $src_lang == 'or' ] || [ $tgt_lang == 'or' ]; then - TEST_SETS=( all ) -elif [ $src_lang == 'pa' ] || [ $tgt_lang == 'pa' ]; then - TEST_SETS=( all ) -elif [ $src_lang == 'te' ] || [ $tgt_lang == 'te' ]; then - TEST_SETS=( wat2021-devtest wat2020-devtest anuvaad-legal all ) -fi - -if [ $src_lang == 'en' ]; then - indic_lang=$tgt_lang -else - indic_lang=$src_lang -fi - - -for tset in ${TEST_SETS[@]};do - echo $tset $src_lang $tgt_lang - if [ $tset == 'wat2021-devtest' ]; then - SRC_FILE=${expdir}/devtest/$tset/test.$src_lang - REF_FILE=${expdir}/devtest/$tset/test.$tgt_lang - else - SRC_FILE=${expdir}/devtest/$tset/en-${indic_lang}/test.$src_lang - REF_FILE=${expdir}/devtest/$tset/en-${indic_lang}/test.$tgt_lang - fi - RESULTS_DIR=${expdir}/results/$tset - - mkdir -p $RESULTS_DIR - - bash joint_translate.sh $SRC_FILE $RESULTS_DIR/${src_lang}-${tgt_lang} $src_lang $tgt_lang $expdir $REF_FILE - # for newline between different outputs - echo -done diff --git a/spaces/hzwluoye/gpt4/client/css/message.css b/spaces/hzwluoye/gpt4/client/css/message.css deleted file mode 100644 index 64e04147ee4d1e76dda4f39c4f756c9da63e3874..0000000000000000000000000000000000000000 --- a/spaces/hzwluoye/gpt4/client/css/message.css +++ /dev/null @@ -1,65 +0,0 @@ -.message { - width: 100%; - overflow-wrap: break-word; - display: flex; - gap: var(--section-gap); - padding: var(--section-gap); - padding-bottom: 0; -} - -.message:last-child { - animation: 0.6s show_message; -} - -@keyframes show_message { - from { - transform: translateY(10px); - opacity: 0; - } -} - -.message .avatar-container img { - max-width: 48px; - max-height: 48px; - box-shadow: 0.4px 0.5px 0.7px -2px rgba(0, 0, 0, 0.08), 1.1px 1.3px 2px -2px rgba(0, 0, 0, 0.041), - 2.7px 3px 4.8px -2px rgba(0, 0, 0, 0.029), 9px 10px 16px -2px rgba(0, 0, 0, 0.022); -} - -.message .content { - display: flex; - flex-direction: column; - width: 90%; - gap: 18px; -} - -.message .content p, -.message .content li, -.message .content code { - font-size: 1rem; - line-height: 1.3; -} - -@media screen and (max-height: 720px) { - .message { - padding: 12px; - gap: 0; - } - - .message .content { - margin-left: 8px; - width: 80%; - } - - .message .avatar-container img { - max-width: 32px; - max-height: 32px; - } - - .message .content, - .message .content p, - .message .content li, - .message .content code { - font-size: 0.875rem; - line-height: 1.3; - } -} diff --git a/spaces/iamstolas/STOLAS/src/components/turn-counter.tsx 
b/spaces/iamstolas/STOLAS/src/components/turn-counter.tsx deleted file mode 100644 index 08a9e488f044802a8600f4d195b106567c35aab4..0000000000000000000000000000000000000000 --- a/spaces/iamstolas/STOLAS/src/components/turn-counter.tsx +++ /dev/null @@ -1,23 +0,0 @@ -import React from 'react' -import { Throttling } from '@/lib/bots/bing/types' - -export interface TurnCounterProps { - throttling?: Throttling -} - -export function TurnCounter({ throttling }: TurnCounterProps) { - if (!throttling) { - return null - } - - return ( -
    -
    - {throttling.numUserMessagesInConversation} - - {throttling.maxNumUserMessagesInConversation} -
    -
    -
    - ) -} diff --git a/spaces/iamstolas/STOLAS/src/components/ui/textarea.tsx b/spaces/iamstolas/STOLAS/src/components/ui/textarea.tsx deleted file mode 100644 index e25af722c7a5dc1121a9ab58d6716952f9f76081..0000000000000000000000000000000000000000 --- a/spaces/iamstolas/STOLAS/src/components/ui/textarea.tsx +++ /dev/null @@ -1,24 +0,0 @@ -import * as React from 'react' - -import { cn } from '@/lib/utils' - -export interface TextareaProps - extends React.TextareaHTMLAttributes {} - -const Textarea = React.forwardRef( - ({ className, ...props }, ref) => { - return ( - -
    - - -
    - -
    - - setIsCallView(true)} /> -
    - - ) -} - -export default TextView \ No newline at end of file diff --git a/spaces/kat33/llama.cpp/instructions.html b/spaces/kat33/llama.cpp/instructions.html deleted file mode 100644 index 986062bf2f0ed328e00ecd37f09f7c6b286aed69..0000000000000000000000000000000000000000 --- a/spaces/kat33/llama.cpp/instructions.html +++ /dev/null @@ -1,300 +0,0 @@ - - - - - - - - - - - - - - - - - - - - - - - - - - - Llama.cpp - a Hugging Face Space by kat33 - - - - -
-🚀 Get started with your gradio Space!
-
-Your new space has been created, follow these steps to get started (or read our full documentation).
-
-Start by cloning this repo by using:
-
-    $ git clone https://huggingface.co/spaces/kat33/llama.cpp
-
-Create your gradio app.py file:
-
-    import gradio as gr
-
-    def greet(name):
-        return "Hello " + name + "!!"
-
-    iface = gr.Interface(fn=greet, inputs="text", outputs="text")
-    iface.launch()
-
-Then commit and push:
-
-    $ git add app.py
-    $ git commit -m "Add application file"
-    $ git push
-
-(Hint: Create the app.py file right in your browser alternatively)
-
-🤗 Your app should be running on this page after a few moments!
-
-Dependencies
-
-You can add a requirements.txt file at the root of the repository to specify Python dependencies.
-
-If needed, you can also add a packages.txt file at the root of the repository to specify Debian dependencies.
-
-The gradio package is pre-installed and its version is set in the sdk_version field in the README.md file.
-
-Documentation
-
-Check out the full documentation of gradio Spaces here.
-
    - - - - - - - - - - - - - diff --git a/spaces/kdrkdrkdr/YuukaTTS/modules.py b/spaces/kdrkdrkdr/YuukaTTS/modules.py deleted file mode 100644 index 9c7fd9cd6eb8b7e0ec0e08957e970744a374a924..0000000000000000000000000000000000000000 --- a/spaces/kdrkdrkdr/YuukaTTS/modules.py +++ /dev/null @@ -1,390 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -import commons -from commons import init_weights, get_padding -from transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." - - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential( - nn.ReLU(), - nn.Dropout(p_dropout)) - for _ in range(n_layers-1): - self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size ** i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size, - groups=channels, dilation=dilation, padding=padding - )) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - 
def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0): - super(WN, self).__init__() - assert(kernel_size % 2 == 1) - self.hidden_channels =hidden_channels - self.kernel_size = kernel_size, - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight') - - for i in range(n_layers): - dilation = dilation_rate ** i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size, - dilation=dilation, padding=padding) - in_layer = torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply( - x_in, - g_l, - n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:,:self.hidden_channels,:] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:,self.hidden_channels:,:] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if 
x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels,1)) - self.logs = nn.Parameter(torch.zeros(channels,1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1,2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - -class ConvFlow(nn.Module): - def __init__(self, 
in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.) - self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] - - unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_derivatives = h[..., 2 * self.num_bins:] - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/kdrkdrkdr/ZhongliTTS/monotonic_align/core.py b/spaces/kdrkdrkdr/ZhongliTTS/monotonic_align/core.py deleted file mode 100644 index 1f940605fe4fd0738fa0006149fcba14ef88223a..0000000000000000000000000000000000000000 --- a/spaces/kdrkdrkdr/ZhongliTTS/monotonic_align/core.py +++ /dev/null @@ -1,36 +0,0 @@ -import numba - - -@numba.jit(numba.void(numba.int32[:, :, ::1], numba.float32[:, :, ::1], numba.int32[::1], numba.int32[::1]), - nopython=True, nogil=True) -def maximum_path_jit(paths, values, t_ys, t_xs): - b = paths.shape[0] - max_neg_val = -1e9 - for i in range(int(b)): - path = paths[i] - value = values[i] - t_y = t_ys[i] - t_x = t_xs[i] - - v_prev = v_cur = 0.0 - index = t_x - 1 - - for y in range(t_y): - for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): - if x == y: - v_cur = max_neg_val - else: - v_cur = value[y - 1, x] - if x == 0: - if y == 0: - v_prev = 0. 
- else: - v_prev = max_neg_val - else: - v_prev = value[y - 1, x - 1] - value[y, x] += max(v_prev, v_cur) - - for y in range(t_y - 1, -1, -1): - path[y, index] = 1 - if index != 0 and (index == y or value[y - 1, index] < value[y - 1, index - 1]): - index = index - 1 diff --git a/spaces/keras-io/Monocular-Depth-Estimation/app.py b/spaces/keras-io/Monocular-Depth-Estimation/app.py deleted file mode 100644 index 853b9260b12866bcfd247a724870a27f9beb9271..0000000000000000000000000000000000000000 --- a/spaces/keras-io/Monocular-Depth-Estimation/app.py +++ /dev/null @@ -1,34 +0,0 @@ -from layers import BilinearUpSampling2D -from tensorflow.keras.models import load_model -from utils import load_images, predict -import matplotlib.pyplot as plt -import numpy as np -import gradio as gr -from huggingface_hub import from_pretrained_keras - -custom_objects = {'BilinearUpSampling2D': BilinearUpSampling2D, 'depth_loss_function': None} -print('Loading model...') -model = from_pretrained_keras("keras-io/monocular-depth-estimation", custom_objects=custom_objects, compile=False) -print('Successfully loaded model...') -examples = ['examples/00015_colors.png', 'examples/00084_colors.png', 'examples/00033_colors.png'] - - -def infer(image): - inputs = load_images([image]) - outputs = predict(model, inputs) - plasma = plt.get_cmap('plasma') - rescaled = outputs[0][:, :, 0] - rescaled = rescaled - np.min(rescaled) - rescaled = rescaled / np.max(rescaled) - image_out = plasma(rescaled)[:, :, :3] - return image_out - - -iface = gr.Interface( - fn=infer, - title="Monocular Depth Estimation", - description = "Keras Implementation of Unet architecture with Densenet201 backbone for estimating the depth of image 📏", - inputs=[gr.inputs.Image(label="image", type="numpy", shape=(640, 480))], - outputs="image", - article = "Author: Vu Minh Chien. 
The ideal based on the keras example from Victor Basu", - examples=examples, cache_examples=True).launch(debug=True) diff --git a/spaces/keras-io/token_learner/app.py b/spaces/keras-io/token_learner/app.py deleted file mode 100644 index 7d571b0937f9b2fb6bf9e3b70d6f0ad6f0a5beb6..0000000000000000000000000000000000000000 --- a/spaces/keras-io/token_learner/app.py +++ /dev/null @@ -1,69 +0,0 @@ -import numpy as np -import tensorflow as tf -import gradio as gr -from huggingface_hub import from_pretrained_keras -import cv2 -# import matplotlib.pyplot as plt - - -model = from_pretrained_keras("keras-io/learning_to_tokenize_in_ViT") - -# functions for inference -IMG_SIZE = 32 - -class_names = [ - "Airplane", - "Automobile", - "Bird", - "Cat", - "Deer", - "Dog", - "Frog", - "Horse", - "Ship", - "Truck", -] - -# resize the image and it to a float between 0,1 -def preprocess_image(image, label): - image = tf.image.resize(image, (IMG_SIZE, IMG_SIZE)) - return image, label - - -def read_image(image): - image = tf.convert_to_tensor(image) - image.set_shape([None, None, 3]) - print('$$$$$$$$$$$$$$$$$$$$$ in read image $$$$$$$$$$$$$$$$$$$$$$') - print(image.shape) -# plt.imshow(image) -# plt.show() - # image = tf.image.resize(images=image, size=[IMG_SIZE, IMG_SIZE]) - # image = image / 127.5 - 1 - image, _ = preprocess_image(image, 1) # 1 here is a temporary label - return image - -def infer(input_image): - print('#$$$$$$$$$$$$$$$$$$$$$$$$$ IN INFER $$$$$$$$$$$$$$$$$$$$$$$') - image_tensor = read_image(input_image) - print(image_tensor.shape) - predictions = model.predict(np.expand_dims((image_tensor), axis=0)) - predictions = np.squeeze(predictions).astype(float) - - return dict(zip(class_names, predictions)) - - -# get the inputs -input = gr.inputs.Image(shape=(IMG_SIZE, IMG_SIZE)) -# the app outputs two segmented images -output = [gr.outputs.Label()] -# it's good practice to pass examples, description and a title to guide users -examples = [["./content/examples/Frog.jpg"], ["./content/examples/Truck.jpg"], ["./content/examples/car.jpg"]] -title = "Image Classification using a Mini ViT model with Token Learner" -description = "Upload an image or select from examples to classify it. This is a mini ViT model with Token Learner module trained on CIFAR-10. The allowed classes are - Airplane, Automobile, Bird, Cat, Deer, Dog, Frog, Horse, Ship, Truck

    Space author: Harshavardhan
    Keras example authors: Aritra Roy Gosthipaty , Sayak Paul
    link to the original Keras example
        Note: the test accuracy of this model is only ~55%, so you will see a lot of errors in prediction
    

    " - -gr_interface = gr.Interface(infer, input, output, examples=examples, allow_flagging=False, analytics_enabled=False, title=title, description=description).launch(enable_queue=True, debug=False) -gr_interface.launch() - - - - diff --git a/spaces/kevinwang676/Voice-Cloning-SadTalker/README.md b/spaces/kevinwang676/Voice-Cloning-SadTalker/README.md deleted file mode 100644 index 88a96d0c588bd4ebe1fcf8227fb782b1e0058a1c..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/Voice-Cloning-SadTalker/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Voice Cloning -emoji: 😻 -colorFrom: blue -colorTo: yellow -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false -license: mit -duplicated_from: kevinwang676/Voice-Cloning-for-Bilibili ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/king007/GPT-Prompt-Generate-2/app.py b/spaces/king007/GPT-Prompt-Generate-2/app.py deleted file mode 100644 index 02e9f42b787bf95571ddb757cc277fb268d66746..0000000000000000000000000000000000000000 --- a/spaces/king007/GPT-Prompt-Generate-2/app.py +++ /dev/null @@ -1,40 +0,0 @@ -from transformers import AutoTokenizer, AutoModelForSeq2SeqLM -import gradio as gr - -tokenizer = AutoTokenizer.from_pretrained("merve/chatgpt-prompt-generator-v12") -model = AutoModelForSeq2SeqLM.from_pretrained("merve/chatgpt-prompt-generator-v12", from_tf=True) -# -tokenizer2 = AutoTokenizer.from_pretrained("Kaludi/chatgpt-gpt4-prompts-bart-large-cnn-samsum") -model2 = AutoModelForSeq2SeqLM.from_pretrained("Kaludi/chatgpt-gpt4-prompts-bart-large-cnn-samsum", from_tf=True) - -def generate(prompt, max_new_tokens): - batch = tokenizer(prompt, return_tensors="pt") - generated_ids = model.generate(batch["input_ids"], max_new_tokens=int(max_new_tokens)) - output = tokenizer.batch_decode(generated_ids, skip_special_tokens=True) - return output[0] - -def generate2(prompt, max_new_tokens): - batch = tokenizer2(prompt, return_tensors="pt") - generated_ids = model2.generate(batch["input_ids"], max_new_tokens=int(max_new_tokens)) - output = tokenizer2.batch_decode(generated_ids, skip_special_tokens=True) - return output[0] - -def generate2_test(prompt): - batch = tokenizer2(prompt, return_tensors="pt") - generated_ids = model2.generate(batch["input_ids"], max_new_tokens=150) - output = tokenizer2.batch_decode(generated_ids, skip_special_tokens=True) - return output[0] - -def generate_prompt(aitype, prompt, max_new_tokens): - if aitype=='1': - return generate(prompt, max_new_tokens) - elif aitype=='2': - return generate2(prompt, max_new_tokens) -# -input_aitype = gr.Textbox(label = "Input a persona, e.g. photographer", value = "2") -input_prompt = gr.Textbox(label = "Input a persona, e.g. 
photographer", value = "photographer") -input_maxtokens = gr.Textbox(label = "max tokens", value = "150") -output_component = gr.Textbox(label = "Prompt") -examples = [["photographer"], ["developer"]] -description = "" -gr.Interface(generate_prompt, inputs = [input_aitype,input_prompt,input_maxtokens], outputs=output_component, examples=examples, title = "👨🏻‍🎤 ChatGPT Prompt Generator v12 👨🏻‍🎤", description=description).launch() diff --git a/spaces/kxqt/Expedit-SAM/segment_anything/modeling/prompt_encoder.py b/spaces/kxqt/Expedit-SAM/segment_anything/modeling/prompt_encoder.py deleted file mode 100644 index c3143f4f8e02ddd7ca8587b40ff5d47c3a6b7ef3..0000000000000000000000000000000000000000 --- a/spaces/kxqt/Expedit-SAM/segment_anything/modeling/prompt_encoder.py +++ /dev/null @@ -1,214 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import numpy as np -import torch -from torch import nn - -from typing import Any, Optional, Tuple, Type - -from .common import LayerNorm2d - - -class PromptEncoder(nn.Module): - def __init__( - self, - embed_dim: int, - image_embedding_size: Tuple[int, int], - input_image_size: Tuple[int, int], - mask_in_chans: int, - activation: Type[nn.Module] = nn.GELU, - ) -> None: - """ - Encodes prompts for input to SAM's mask decoder. - - Arguments: - embed_dim (int): The prompts' embedding dimension - image_embedding_size (tuple(int, int)): The spatial size of the - image embedding, as (H, W). - input_image_size (int): The padded size of the image as input - to the image encoder, as (H, W). - mask_in_chans (int): The number of hidden channels used for - encoding input masks. - activation (nn.Module): The activation to use when encoding - input masks. - """ - super().__init__() - self.embed_dim = embed_dim - self.input_image_size = input_image_size - self.image_embedding_size = image_embedding_size - self.pe_layer = PositionEmbeddingRandom(embed_dim // 2) - - self.num_point_embeddings: int = 4 # pos/neg point + 2 box corners - point_embeddings = [nn.Embedding(1, embed_dim) for i in range(self.num_point_embeddings)] - self.point_embeddings = nn.ModuleList(point_embeddings) - self.not_a_point_embed = nn.Embedding(1, embed_dim) - - self.mask_input_size = (4 * image_embedding_size[0], 4 * image_embedding_size[1]) - self.mask_downscaling = nn.Sequential( - nn.Conv2d(1, mask_in_chans // 4, kernel_size=2, stride=2), - LayerNorm2d(mask_in_chans // 4), - activation(), - nn.Conv2d(mask_in_chans // 4, mask_in_chans, kernel_size=2, stride=2), - LayerNorm2d(mask_in_chans), - activation(), - nn.Conv2d(mask_in_chans, embed_dim, kernel_size=1), - ) - self.no_mask_embed = nn.Embedding(1, embed_dim) - - def get_dense_pe(self) -> torch.Tensor: - """ - Returns the positional encoding used to encode point prompts, - applied to a dense set of points the shape of the image encoding. 
- - Returns: - torch.Tensor: Positional encoding with shape - 1x(embed_dim)x(embedding_h)x(embedding_w) - """ - return self.pe_layer(self.image_embedding_size).unsqueeze(0) - - def _embed_points( - self, - points: torch.Tensor, - labels: torch.Tensor, - pad: bool, - ) -> torch.Tensor: - """Embeds point prompts.""" - points = points + 0.5 # Shift to center of pixel - if pad: - padding_point = torch.zeros((points.shape[0], 1, 2), device=points.device) - padding_label = -torch.ones((labels.shape[0], 1), device=labels.device) - points = torch.cat([points, padding_point], dim=1) - labels = torch.cat([labels, padding_label], dim=1) - point_embedding = self.pe_layer.forward_with_coords(points, self.input_image_size) - point_embedding[labels == -1] = 0.0 - point_embedding[labels == -1] += self.not_a_point_embed.weight - point_embedding[labels == 0] += self.point_embeddings[0].weight - point_embedding[labels == 1] += self.point_embeddings[1].weight - return point_embedding - - def _embed_boxes(self, boxes: torch.Tensor) -> torch.Tensor: - """Embeds box prompts.""" - boxes = boxes + 0.5 # Shift to center of pixel - coords = boxes.reshape(-1, 2, 2) - corner_embedding = self.pe_layer.forward_with_coords(coords, self.input_image_size) - corner_embedding[:, 0, :] += self.point_embeddings[2].weight - corner_embedding[:, 1, :] += self.point_embeddings[3].weight - return corner_embedding - - def _embed_masks(self, masks: torch.Tensor) -> torch.Tensor: - """Embeds mask inputs.""" - mask_embedding = self.mask_downscaling(masks) - return mask_embedding - - def _get_batch_size( - self, - points: Optional[Tuple[torch.Tensor, torch.Tensor]], - boxes: Optional[torch.Tensor], - masks: Optional[torch.Tensor], - ) -> int: - """ - Gets the batch size of the output given the batch size of the input prompts. - """ - if points is not None: - return points[0].shape[0] - elif boxes is not None: - return boxes.shape[0] - elif masks is not None: - return masks.shape[0] - else: - return 1 - - def _get_device(self) -> torch.device: - return self.point_embeddings[0].weight.device - - def forward( - self, - points: Optional[Tuple[torch.Tensor, torch.Tensor]], - boxes: Optional[torch.Tensor], - masks: Optional[torch.Tensor], - ) -> Tuple[torch.Tensor, torch.Tensor]: - """ - Embeds different types of prompts, returning both sparse and dense - embeddings. - - Arguments: - points (tuple(torch.Tensor, torch.Tensor) or none): point coordinates - and labels to embed. - boxes (torch.Tensor or none): boxes to embed - masks (torch.Tensor or none): masks to embed - - Returns: - torch.Tensor: sparse embeddings for the points and boxes, with shape - BxNx(embed_dim), where N is determined by the number of input points - and boxes. 
- torch.Tensor: dense embeddings for the masks, in the shape - Bx(embed_dim)x(embed_H)x(embed_W) - """ - bs = self._get_batch_size(points, boxes, masks) - sparse_embeddings = torch.empty((bs, 0, self.embed_dim), device=self._get_device()) - if points is not None: - coords, labels = points - point_embeddings = self._embed_points(coords, labels, pad=(boxes is None)) - sparse_embeddings = torch.cat([sparse_embeddings, point_embeddings], dim=1) - if boxes is not None: - box_embeddings = self._embed_boxes(boxes) - sparse_embeddings = torch.cat([sparse_embeddings, box_embeddings], dim=1) - - if masks is not None: - dense_embeddings = self._embed_masks(masks) - else: - dense_embeddings = self.no_mask_embed.weight.reshape(1, -1, 1, 1).expand( - bs, -1, self.image_embedding_size[0], self.image_embedding_size[1] - ) - - return sparse_embeddings, dense_embeddings - - -class PositionEmbeddingRandom(nn.Module): - """ - Positional encoding using random spatial frequencies. - """ - - def __init__(self, num_pos_feats: int = 64, scale: Optional[float] = None) -> None: - super().__init__() - if scale is None or scale <= 0.0: - scale = 1.0 - self.register_buffer( - "positional_encoding_gaussian_matrix", - scale * torch.randn((2, num_pos_feats)), - ) - - def _pe_encoding(self, coords: torch.Tensor) -> torch.Tensor: - """Positionally encode points that are normalized to [0,1].""" - # assuming coords are in [0, 1]^2 square and have d_1 x ... x d_n x 2 shape - coords = 2 * coords - 1 - coords = coords @ self.positional_encoding_gaussian_matrix - coords = 2 * np.pi * coords - # outputs d_1 x ... x d_n x C shape - return torch.cat([torch.sin(coords), torch.cos(coords)], dim=-1) - - def forward(self, size: Tuple[int, int]) -> torch.Tensor: - """Generate positional encoding for a grid of the specified size.""" - h, w = size - device: Any = self.positional_encoding_gaussian_matrix.device - grid = torch.ones((h, w), device=device, dtype=torch.float32) - y_embed = grid.cumsum(dim=0) - 0.5 - x_embed = grid.cumsum(dim=1) - 0.5 - y_embed = y_embed / h - x_embed = x_embed / w - - pe = self._pe_encoding(torch.stack([x_embed, y_embed], dim=-1)) - return pe.permute(2, 0, 1) # C x H x W - - def forward_with_coords( - self, coords_input: torch.Tensor, image_size: Tuple[int, int] - ) -> torch.Tensor: - """Positionally encode points that are not normalized to [0,1].""" - coords = coords_input.clone() - coords[:, :, 0] = coords[:, :, 0] / image_size[1] - coords[:, :, 1] = coords[:, :, 1] / image_size[0] - return self._pe_encoding(coords.to(torch.float)) # B x N x C diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/aiohttp/client_proto.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/aiohttp/client_proto.py deleted file mode 100644 index 3041157d61d78fe285fe2f688a4a8d5b75c5412d..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/aiohttp/client_proto.py +++ /dev/null @@ -1,251 +0,0 @@ -import asyncio -from contextlib import suppress -from typing import Any, Optional, Tuple - -from .base_protocol import BaseProtocol -from .client_exceptions import ( - ClientOSError, - ClientPayloadError, - ServerDisconnectedError, - ServerTimeoutError, -) -from .helpers import BaseTimerContext -from .http import HttpResponseParser, RawResponseMessage -from .streams import EMPTY_PAYLOAD, DataQueue, StreamReader - - -class ResponseHandler(BaseProtocol, DataQueue[Tuple[RawResponseMessage, StreamReader]]): - """Helper class to 
adapt between Protocol and StreamReader.""" - - def __init__(self, loop: asyncio.AbstractEventLoop) -> None: - BaseProtocol.__init__(self, loop=loop) - DataQueue.__init__(self, loop) - - self._should_close = False - - self._payload: Optional[StreamReader] = None - self._skip_payload = False - self._payload_parser = None - - self._timer = None - - self._tail = b"" - self._upgraded = False - self._parser: Optional[HttpResponseParser] = None - - self._read_timeout: Optional[float] = None - self._read_timeout_handle: Optional[asyncio.TimerHandle] = None - - @property - def upgraded(self) -> bool: - return self._upgraded - - @property - def should_close(self) -> bool: - if self._payload is not None and not self._payload.is_eof() or self._upgraded: - return True - - return ( - self._should_close - or self._upgraded - or self.exception() is not None - or self._payload_parser is not None - or len(self) > 0 - or bool(self._tail) - ) - - def force_close(self) -> None: - self._should_close = True - - def close(self) -> None: - transport = self.transport - if transport is not None: - transport.close() - self.transport = None - self._payload = None - self._drop_timeout() - - def is_connected(self) -> bool: - return self.transport is not None and not self.transport.is_closing() - - def connection_lost(self, exc: Optional[BaseException]) -> None: - self._drop_timeout() - - if self._payload_parser is not None: - with suppress(Exception): - self._payload_parser.feed_eof() - - uncompleted = None - if self._parser is not None: - try: - uncompleted = self._parser.feed_eof() - except Exception: - if self._payload is not None: - self._payload.set_exception( - ClientPayloadError("Response payload is not completed") - ) - - if not self.is_eof(): - if isinstance(exc, OSError): - exc = ClientOSError(*exc.args) - if exc is None: - exc = ServerDisconnectedError(uncompleted) - # assigns self._should_close to True as side effect, - # we do it anyway below - self.set_exception(exc) - - self._should_close = True - self._parser = None - self._payload = None - self._payload_parser = None - self._reading_paused = False - - super().connection_lost(exc) - - def eof_received(self) -> None: - # should call parser.feed_eof() most likely - self._drop_timeout() - - def pause_reading(self) -> None: - super().pause_reading() - self._drop_timeout() - - def resume_reading(self) -> None: - super().resume_reading() - self._reschedule_timeout() - - def set_exception(self, exc: BaseException) -> None: - self._should_close = True - self._drop_timeout() - super().set_exception(exc) - - def set_parser(self, parser: Any, payload: Any) -> None: - # TODO: actual types are: - # parser: WebSocketReader - # payload: FlowControlDataQueue - # but they are not generi enough - # Need an ABC for both types - self._payload = payload - self._payload_parser = parser - - self._drop_timeout() - - if self._tail: - data, self._tail = self._tail, b"" - self.data_received(data) - - def set_response_params( - self, - *, - timer: Optional[BaseTimerContext] = None, - skip_payload: bool = False, - read_until_eof: bool = False, - auto_decompress: bool = True, - read_timeout: Optional[float] = None, - read_bufsize: int = 2**16, - ) -> None: - self._skip_payload = skip_payload - - self._read_timeout = read_timeout - self._reschedule_timeout() - - self._parser = HttpResponseParser( - self, - self._loop, - read_bufsize, - timer=timer, - payload_exception=ClientPayloadError, - response_with_body=not skip_payload, - read_until_eof=read_until_eof, - 
auto_decompress=auto_decompress, - ) - - if self._tail: - data, self._tail = self._tail, b"" - self.data_received(data) - - def _drop_timeout(self) -> None: - if self._read_timeout_handle is not None: - self._read_timeout_handle.cancel() - self._read_timeout_handle = None - - def _reschedule_timeout(self) -> None: - timeout = self._read_timeout - if self._read_timeout_handle is not None: - self._read_timeout_handle.cancel() - - if timeout: - self._read_timeout_handle = self._loop.call_later( - timeout, self._on_read_timeout - ) - else: - self._read_timeout_handle = None - - def _on_read_timeout(self) -> None: - exc = ServerTimeoutError("Timeout on reading data from socket") - self.set_exception(exc) - if self._payload is not None: - self._payload.set_exception(exc) - - def data_received(self, data: bytes) -> None: - self._reschedule_timeout() - - if not data: - return - - # custom payload parser - if self._payload_parser is not None: - eof, tail = self._payload_parser.feed_data(data) - if eof: - self._payload = None - self._payload_parser = None - - if tail: - self.data_received(tail) - return - else: - if self._upgraded or self._parser is None: - # i.e. websocket connection, websocket parser is not set yet - self._tail += data - else: - # parse http messages - try: - messages, upgraded, tail = self._parser.feed_data(data) - except BaseException as exc: - if self.transport is not None: - # connection.release() could be called BEFORE - # data_received(), the transport is already - # closed in this case - self.transport.close() - # should_close is True after the call - self.set_exception(exc) - return - - self._upgraded = upgraded - - payload: Optional[StreamReader] = None - for message, payload in messages: - if message.should_close: - self._should_close = True - - self._payload = payload - - if self._skip_payload or message.code in (204, 304): - self.feed_data((message, EMPTY_PAYLOAD), 0) - else: - self.feed_data((message, payload), 0) - if payload is not None: - # new message(s) was processed - # register timeout handler unsubscribing - # either on end-of-stream or immediately for - # EMPTY_PAYLOAD - if payload is not EMPTY_PAYLOAD: - payload.on_eof(self._drop_timeout) - else: - self._drop_timeout() - - if tail: - if upgraded: - self.data_received(tail) - else: - self._tail = tail diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/pens/recordingPen.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/pens/recordingPen.py deleted file mode 100644 index 6c3b6613211d76f0306876dceb6d3945920417f5..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/pens/recordingPen.py +++ /dev/null @@ -1,179 +0,0 @@ -"""Pen recording operations that can be accessed or replayed.""" -from fontTools.pens.basePen import AbstractPen, DecomposingPen -from fontTools.pens.pointPen import AbstractPointPen - - -__all__ = [ - "replayRecording", - "RecordingPen", - "DecomposingRecordingPen", - "RecordingPointPen", -] - - -def replayRecording(recording, pen): - """Replay a recording, as produced by RecordingPen or DecomposingRecordingPen, - to a pen. - - Note that recording does not have to be produced by those pens. - It can be any iterable of tuples of method name and tuple-of-arguments. - Likewise, pen can be any objects receiving those method calls. 
- """ - for operator, operands in recording: - getattr(pen, operator)(*operands) - - -class RecordingPen(AbstractPen): - """Pen recording operations that can be accessed or replayed. - - The recording can be accessed as pen.value; or replayed using - pen.replay(otherPen). - - :Example: - - from fontTools.ttLib import TTFont - from fontTools.pens.recordingPen import RecordingPen - - glyph_name = 'dollar' - font_path = 'MyFont.otf' - - font = TTFont(font_path) - glyphset = font.getGlyphSet() - glyph = glyphset[glyph_name] - - pen = RecordingPen() - glyph.draw(pen) - print(pen.value) - """ - - def __init__(self): - self.value = [] - - def moveTo(self, p0): - self.value.append(("moveTo", (p0,))) - - def lineTo(self, p1): - self.value.append(("lineTo", (p1,))) - - def qCurveTo(self, *points): - self.value.append(("qCurveTo", points)) - - def curveTo(self, *points): - self.value.append(("curveTo", points)) - - def closePath(self): - self.value.append(("closePath", ())) - - def endPath(self): - self.value.append(("endPath", ())) - - def addComponent(self, glyphName, transformation): - self.value.append(("addComponent", (glyphName, transformation))) - - def addVarComponent(self, glyphName, transformation, location): - self.value.append(("addVarComponent", (glyphName, transformation, location))) - - def replay(self, pen): - replayRecording(self.value, pen) - - -class DecomposingRecordingPen(DecomposingPen, RecordingPen): - """Same as RecordingPen, except that it doesn't keep components - as references, but draws them decomposed as regular contours. - - The constructor takes a single 'glyphSet' positional argument, - a dictionary of glyph objects (i.e. with a 'draw' method) keyed - by thir name:: - - >>> class SimpleGlyph(object): - ... def draw(self, pen): - ... pen.moveTo((0, 0)) - ... pen.curveTo((1, 1), (2, 2), (3, 3)) - ... pen.closePath() - >>> class CompositeGlyph(object): - ... def draw(self, pen): - ... pen.addComponent('a', (1, 0, 0, 1, -1, 1)) - >>> glyphSet = {'a': SimpleGlyph(), 'b': CompositeGlyph()} - >>> for name, glyph in sorted(glyphSet.items()): - ... pen = DecomposingRecordingPen(glyphSet) - ... glyph.draw(pen) - ... print("{}: {}".format(name, pen.value)) - a: [('moveTo', ((0, 0),)), ('curveTo', ((1, 1), (2, 2), (3, 3))), ('closePath', ())] - b: [('moveTo', ((-1, 1),)), ('curveTo', ((0, 2), (1, 3), (2, 4))), ('closePath', ())] - """ - - # raises KeyError if base glyph is not found in glyphSet - skipMissingComponents = False - - -class RecordingPointPen(AbstractPointPen): - """PointPen recording operations that can be accessed or replayed. - - The recording can be accessed as pen.value; or replayed using - pointPen.replay(otherPointPen). 
- - :Example: - - from defcon import Font - from fontTools.pens.recordingPen import RecordingPointPen - - glyph_name = 'a' - font_path = 'MyFont.ufo' - - font = Font(font_path) - glyph = font[glyph_name] - - pen = RecordingPointPen() - glyph.drawPoints(pen) - print(pen.value) - - new_glyph = font.newGlyph('b') - pen.replay(new_glyph.getPointPen()) - """ - - def __init__(self): - self.value = [] - - def beginPath(self, identifier=None, **kwargs): - if identifier is not None: - kwargs["identifier"] = identifier - self.value.append(("beginPath", (), kwargs)) - - def endPath(self): - self.value.append(("endPath", (), {})) - - def addPoint( - self, pt, segmentType=None, smooth=False, name=None, identifier=None, **kwargs - ): - if identifier is not None: - kwargs["identifier"] = identifier - self.value.append(("addPoint", (pt, segmentType, smooth, name), kwargs)) - - def addComponent(self, baseGlyphName, transformation, identifier=None, **kwargs): - if identifier is not None: - kwargs["identifier"] = identifier - self.value.append(("addComponent", (baseGlyphName, transformation), kwargs)) - - def addVarComponent( - self, baseGlyphName, transformation, location, identifier=None, **kwargs - ): - if identifier is not None: - kwargs["identifier"] = identifier - self.value.append( - ("addVarComponent", (baseGlyphName, transformation, location), kwargs) - ) - - def replay(self, pointPen): - for operator, args, kwargs in self.value: - getattr(pointPen, operator)(*args, **kwargs) - - -if __name__ == "__main__": - pen = RecordingPen() - pen.moveTo((0, 0)) - pen.lineTo((0, 100)) - pen.curveTo((50, 75), (60, 50), (50, 25)) - pen.closePath() - from pprint import pprint - - pprint(pen.value) diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/TabItem.svelte_svelte_type_style_lang-1613842a.js b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/TabItem.svelte_svelte_type_style_lang-1613842a.js deleted file mode 100644 index fe559a5455a273339038b0e192520894383a1a88..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/TabItem.svelte_svelte_type_style_lang-1613842a.js +++ /dev/null @@ -1,2 +0,0 @@ -import{S as G,i as H,s as J,V as K,G as j,H as I,C as g,M as q,g as k,E as v,z as N,X as Q,Y as V,Z as X,p as Y,t as Z,q as p,Q as w,r as F,v as L,f as C,ao as O,w as S,aq as E,I as A,J as P,K as B}from"./index-8c3da1d9.js";function M(n,e,l){const s=n.slice();return s[14]=e[l],s[16]=l,s}function R(n){let e,l=n[14].name+"",s,f,d,_;function a(){return n[12](n[14],n[16])}return{c(){e=j("button"),s=A(l),f=I(),g(e,"class","svelte-1g805jl")},m(u,m){k(u,e,m),v(e,s),v(e,f),d||(_=P(e,"click",a),d=!0)},p(u,m){n=u,m&8&&l!==(l=n[14].name+"")&&B(s,l)},d(u){u&&p(e),d=!1,_()}}}function U(n){let e,l=n[14].name+"",s,f;return{c(){e=j("button"),s=A(l),f=I(),g(e,"class","selected svelte-1g805jl")},m(d,_){k(d,e,_),v(e,s),v(e,f)},p(d,_){_&8&&l!==(l=d[14].name+"")&&B(s,l)},d(d){d&&p(e)}}}function z(n,e){let l,s;function f(a,u){return a[14].id===a[4]?U:R}let d=f(e),_=d(e);return{key:n,first:null,c(){l=C(),_.c(),s=C(),this.first=l},m(a,u){k(a,l,u),_.m(a,u),k(a,s,u)},p(a,u){e=a,d===(d=f(e))&&_?_.p(e,u):(_.d(1),_=d(e),_&&(_.c(),_.m(s.parentNode,s)))},d(a){a&&p(l),_.d(a),a&&p(s)}}}function W(n){let e,l,s=[],f=new Map,d,_,a,u=n[3];const m=t=>t[14].id;for(let t=0;tl(4,f=i));const o=S(0);w(n,o,i=>l(13,s=i));const 
r=F();L(x,{register_tab:i=>(c.push({name:i.name,id:i.id}),t.update(h=>h??i.id),l(3,c),c.length-1),unregister_tab:i=>{const h=c.findIndex(y=>y.id===i.id);c.splice(h,1),t.update(y=>y===i.id?c[h]?.id||c[c.length-1]?.id:y)},selected_tab:t,selected_tab_index:o});function T(i){l(9,b=i),E(t,f=i,f),E(o,s=c.findIndex(h=>h.id===i),s),r("change")}const D=(i,h)=>{T(i.id),r("select",{value:i.name,index:h})};return n.$$set=i=>{"visible"in i&&l(0,a=i.visible),"elem_id"in i&&l(1,u=i.elem_id),"elem_classes"in i&&l(2,m=i.elem_classes),"selected"in i&&l(9,b=i.selected),"$$scope"in i&&l(10,_=i.$$scope)},n.$$.update=()=>{n.$$.dirty&512&&b!==null&&T(b)},[a,u,m,c,f,t,o,r,T,b,_,d,D]}class te extends G{constructor(e){super(),H(this,e,$,W,J,{visible:0,elem_id:1,elem_classes:2,selected:9})}}export{te as T,x as a}; -//# sourceMappingURL=TabItem.svelte_svelte_type_style_lang-1613842a.js.map diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/importlib_resources/_legacy.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/importlib_resources/_legacy.py deleted file mode 100644 index b1ea8105dad6e27eefd5a34f64dfee974a5c4f71..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/importlib_resources/_legacy.py +++ /dev/null @@ -1,120 +0,0 @@ -import functools -import os -import pathlib -import types -import warnings - -from typing import Union, Iterable, ContextManager, BinaryIO, TextIO, Any - -from . import _common - -Package = Union[types.ModuleType, str] -Resource = str - - -def deprecated(func): - @functools.wraps(func) - def wrapper(*args, **kwargs): - warnings.warn( - f"{func.__name__} is deprecated. Use files() instead. " - "Refer to https://importlib-resources.readthedocs.io" - "/en/latest/using.html#migrating-from-legacy for migration advice.", - DeprecationWarning, - stacklevel=2, - ) - return func(*args, **kwargs) - - return wrapper - - -def normalize_path(path: Any) -> str: - """Normalize a path by ensuring it is a string. - - If the resulting string contains path separators, an exception is raised. - """ - str_path = str(path) - parent, file_name = os.path.split(str_path) - if parent: - raise ValueError(f'{path!r} must be only a file name') - return file_name - - -@deprecated -def open_binary(package: Package, resource: Resource) -> BinaryIO: - """Return a file-like object opened for binary reading of the resource.""" - return (_common.files(package) / normalize_path(resource)).open('rb') - - -@deprecated -def read_binary(package: Package, resource: Resource) -> bytes: - """Return the binary contents of the resource.""" - return (_common.files(package) / normalize_path(resource)).read_bytes() - - -@deprecated -def open_text( - package: Package, - resource: Resource, - encoding: str = 'utf-8', - errors: str = 'strict', -) -> TextIO: - """Return a file-like object opened for text reading of the resource.""" - return (_common.files(package) / normalize_path(resource)).open( - 'r', encoding=encoding, errors=errors - ) - - -@deprecated -def read_text( - package: Package, - resource: Resource, - encoding: str = 'utf-8', - errors: str = 'strict', -) -> str: - """Return the decoded string of the resource. - - The decoding-related arguments have the same semantics as those of - bytes.decode(). - """ - with open_text(package, resource, encoding, errors) as fp: - return fp.read() - - -@deprecated -def contents(package: Package) -> Iterable[str]: - """Return an iterable of entries in `package`. 
- - Note that not all entries are resources. Specifically, directories are - not considered resources. Use `is_resource()` on each entry returned here - to check if it is a resource or not. - """ - return [path.name for path in _common.files(package).iterdir()] - - -@deprecated -def is_resource(package: Package, name: str) -> bool: - """True if `name` is a resource inside `package`. - - Directories are *not* resources. - """ - resource = normalize_path(name) - return any( - traversable.name == resource and traversable.is_file() - for traversable in _common.files(package).iterdir() - ) - - -@deprecated -def path( - package: Package, - resource: Resource, -) -> ContextManager[pathlib.Path]: - """A context manager providing a file path object to the resource. - - If the resource does not already exist on its own on the file system, - a temporary file will be created. If the file was created, the file - will be deleted upon exiting the context manager (no exception is - raised if the file was deleted prior to the context manager - exiting). - """ - return _common.as_file(_common.files(package) / normalize_path(resource)) diff --git a/spaces/leilevy/bingo/src/lib/isomorphic/node.ts b/spaces/leilevy/bingo/src/lib/isomorphic/node.ts deleted file mode 100644 index da213ad6a86181979f098309c374da02835db5a0..0000000000000000000000000000000000000000 --- a/spaces/leilevy/bingo/src/lib/isomorphic/node.ts +++ /dev/null @@ -1,26 +0,0 @@ -import Debug from 'debug' - -const { fetch, setGlobalDispatcher, ProxyAgent } = require('undici') -const { HttpsProxyAgent } = require('https-proxy-agent') -const ws = require('ws') - -const debug = Debug('bingo') - -const httpProxy = process.env.http_proxy || process.env.HTTP_PROXY || process.env.https_proxy || process.env.HTTPS_PROXY; -let WebSocket = ws.WebSocket - -if (httpProxy) { - setGlobalDispatcher(new ProxyAgent(httpProxy)) - const agent = new HttpsProxyAgent(httpProxy) - // @ts-ignore - WebSocket = class extends ws.WebSocket { - constructor(address: string | URL, options: typeof ws.WebSocket) { - super(address, { - ...options, - agent, - }) - } - } -} - -export default { fetch, WebSocket, debug } diff --git a/spaces/lemonshochu/JPEG_Artifacts_Removal/network_fbcnn.py b/spaces/lemonshochu/JPEG_Artifacts_Removal/network_fbcnn.py deleted file mode 100644 index 4f62b472b2cd49c83dc5b2f6f115897c02562b83..0000000000000000000000000000000000000000 --- a/spaces/lemonshochu/JPEG_Artifacts_Removal/network_fbcnn.py +++ /dev/null @@ -1,337 +0,0 @@ -from collections import OrderedDict -import torch -import torch.nn as nn -import numpy as np -import torch.nn.functional as F -import torchvision.models as models - -''' -# -------------------------------------------- -# Advanced nn.Sequential -# https://github.com/xinntao/BasicSR -# -------------------------------------------- -''' - - -def sequential(*args): - """Advanced nn.Sequential. - - Args: - nn.Sequential, nn.Module - - Returns: - nn.Sequential - """ - if len(args) == 1: - if isinstance(args[0], OrderedDict): - raise NotImplementedError('sequential does not support OrderedDict input.') - return args[0] # No sequential is needed. 
- modules = [] - for module in args: - if isinstance(module, nn.Sequential): - for submodule in module.children(): - modules.append(submodule) - elif isinstance(module, nn.Module): - modules.append(module) - return nn.Sequential(*modules) - -# -------------------------------------------- -# return nn.Sequantial of (Conv + BN + ReLU) -# -------------------------------------------- -def conv(in_channels=64, out_channels=64, kernel_size=3, stride=1, padding=1, bias=True, mode='CBR', negative_slope=0.2): - L = [] - for t in mode: - if t == 'C': - L.append(nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=kernel_size, stride=stride, padding=padding, bias=bias)) - elif t == 'T': - L.append(nn.ConvTranspose2d(in_channels=in_channels, out_channels=out_channels, kernel_size=kernel_size, stride=stride, padding=padding, bias=bias)) - elif t == 'B': - L.append(nn.BatchNorm2d(out_channels, momentum=0.9, eps=1e-04, affine=True)) - elif t == 'I': - L.append(nn.InstanceNorm2d(out_channels, affine=True)) - elif t == 'R': - L.append(nn.ReLU(inplace=True)) - elif t == 'r': - L.append(nn.ReLU(inplace=False)) - elif t == 'L': - L.append(nn.LeakyReLU(negative_slope=negative_slope, inplace=True)) - elif t == 'l': - L.append(nn.LeakyReLU(negative_slope=negative_slope, inplace=False)) - elif t == '2': - L.append(nn.PixelShuffle(upscale_factor=2)) - elif t == '3': - L.append(nn.PixelShuffle(upscale_factor=3)) - elif t == '4': - L.append(nn.PixelShuffle(upscale_factor=4)) - elif t == 'U': - L.append(nn.Upsample(scale_factor=2, mode='nearest')) - elif t == 'u': - L.append(nn.Upsample(scale_factor=3, mode='nearest')) - elif t == 'v': - L.append(nn.Upsample(scale_factor=4, mode='nearest')) - elif t == 'M': - L.append(nn.MaxPool2d(kernel_size=kernel_size, stride=stride, padding=0)) - elif t == 'A': - L.append(nn.AvgPool2d(kernel_size=kernel_size, stride=stride, padding=0)) - else: - raise NotImplementedError('Undefined type: '.format(t)) - return sequential(*L) - -# -------------------------------------------- -# Res Block: x + conv(relu(conv(x))) -# -------------------------------------------- -class ResBlock(nn.Module): - def __init__(self, in_channels=64, out_channels=64, kernel_size=3, stride=1, padding=1, bias=True, mode='CRC', negative_slope=0.2): - super(ResBlock, self).__init__() - - assert in_channels == out_channels, 'Only support in_channels==out_channels.' - if mode[0] in ['R', 'L']: - mode = mode[0].lower() + mode[1:] - - self.res = conv(in_channels, out_channels, kernel_size, stride, padding, bias, mode, negative_slope) - - def forward(self, x): - res = self.res(x) - return x + res - -# -------------------------------------------- -# conv + subp (+ relu) -# -------------------------------------------- -def upsample_pixelshuffle(in_channels=64, out_channels=3, kernel_size=3, stride=1, padding=1, bias=True, mode='2R', negative_slope=0.2): - assert len(mode)<4 and mode[0] in ['2', '3', '4'], 'mode examples: 2, 2R, 2BR, 3, ..., 4BR.' 
- up1 = conv(in_channels, out_channels * (int(mode[0]) ** 2), kernel_size, stride, padding, bias, mode='C'+mode, negative_slope=negative_slope) - return up1 - - -# -------------------------------------------- -# nearest_upsample + conv (+ R) -# -------------------------------------------- -def upsample_upconv(in_channels=64, out_channels=3, kernel_size=3, stride=1, padding=1, bias=True, mode='2R', negative_slope=0.2): - assert len(mode)<4 and mode[0] in ['2', '3', '4'], 'mode examples: 2, 2R, 2BR, 3, ..., 4BR' - if mode[0] == '2': - uc = 'UC' - elif mode[0] == '3': - uc = 'uC' - elif mode[0] == '4': - uc = 'vC' - mode = mode.replace(mode[0], uc) - up1 = conv(in_channels, out_channels, kernel_size, stride, padding, bias, mode=mode, negative_slope=negative_slope) - return up1 - - -# -------------------------------------------- -# convTranspose (+ relu) -# -------------------------------------------- -def upsample_convtranspose(in_channels=64, out_channels=3, kernel_size=2, stride=2, padding=0, bias=True, mode='2R', negative_slope=0.2): - assert len(mode)<4 and mode[0] in ['2', '3', '4'], 'mode examples: 2, 2R, 2BR, 3, ..., 4BR.' - kernel_size = int(mode[0]) - stride = int(mode[0]) - mode = mode.replace(mode[0], 'T') - up1 = conv(in_channels, out_channels, kernel_size, stride, padding, bias, mode, negative_slope) - return up1 - - -''' -# -------------------------------------------- -# Downsampler -# Kai Zhang, https://github.com/cszn/KAIR -# -------------------------------------------- -# downsample_strideconv -# downsample_maxpool -# downsample_avgpool -# -------------------------------------------- -''' - - -# -------------------------------------------- -# strideconv (+ relu) -# -------------------------------------------- -def downsample_strideconv(in_channels=64, out_channels=64, kernel_size=2, stride=2, padding=0, bias=True, mode='2R', negative_slope=0.2): - assert len(mode)<4 and mode[0] in ['2', '3', '4'], 'mode examples: 2, 2R, 2BR, 3, ..., 4BR.' - kernel_size = int(mode[0]) - stride = int(mode[0]) - mode = mode.replace(mode[0], 'C') - down1 = conv(in_channels, out_channels, kernel_size, stride, padding, bias, mode, negative_slope) - return down1 - - -# -------------------------------------------- -# maxpooling + conv (+ relu) -# -------------------------------------------- -def downsample_maxpool(in_channels=64, out_channels=64, kernel_size=3, stride=1, padding=0, bias=True, mode='2R', negative_slope=0.2): - assert len(mode)<4 and mode[0] in ['2', '3'], 'mode examples: 2, 2R, 2BR, 3, ..., 3BR.' - kernel_size_pool = int(mode[0]) - stride_pool = int(mode[0]) - mode = mode.replace(mode[0], 'MC') - pool = conv(kernel_size=kernel_size_pool, stride=stride_pool, mode=mode[0], negative_slope=negative_slope) - pool_tail = conv(in_channels, out_channels, kernel_size, stride, padding, bias, mode=mode[1:], negative_slope=negative_slope) - return sequential(pool, pool_tail) - - -# -------------------------------------------- -# averagepooling + conv (+ relu) -# -------------------------------------------- -def downsample_avgpool(in_channels=64, out_channels=64, kernel_size=3, stride=1, padding=1, bias=True, mode='2R', negative_slope=0.2): - assert len(mode)<4 and mode[0] in ['2', '3'], 'mode examples: 2, 2R, 2BR, 3, ..., 3BR.' 
- kernel_size_pool = int(mode[0]) - stride_pool = int(mode[0]) - mode = mode.replace(mode[0], 'AC') - pool = conv(kernel_size=kernel_size_pool, stride=stride_pool, mode=mode[0], negative_slope=negative_slope) - pool_tail = conv(in_channels, out_channels, kernel_size, stride, padding, bias, mode=mode[1:], negative_slope=negative_slope) - return sequential(pool, pool_tail) - - - -class QFAttention(nn.Module): - def __init__(self, in_channels=64, out_channels=64, kernel_size=3, stride=1, padding=1, bias=True, mode='CRC', negative_slope=0.2): - super(QFAttention, self).__init__() - - assert in_channels == out_channels, 'Only support in_channels==out_channels.' - if mode[0] in ['R', 'L']: - mode = mode[0].lower() + mode[1:] - - self.res = conv(in_channels, out_channels, kernel_size, stride, padding, bias, mode, negative_slope) - - def forward(self, x, gamma, beta): - gamma = gamma.unsqueeze(-1).unsqueeze(-1) - beta = beta.unsqueeze(-1).unsqueeze(-1) - res = (gamma)*self.res(x) + beta - return x + res - - -class FBCNN(nn.Module): - def __init__(self, in_nc=3, out_nc=3, nc=[64, 128, 256, 512], nb=4, act_mode='R', downsample_mode='strideconv', - upsample_mode='convtranspose'): - super(FBCNN, self).__init__() - - self.m_head = conv(in_nc, nc[0], bias=True, mode='C') - self.nb = nb - self.nc = nc - # downsample - if downsample_mode == 'avgpool': - downsample_block = downsample_avgpool - elif downsample_mode == 'maxpool': - downsample_block = downsample_maxpool - elif downsample_mode == 'strideconv': - downsample_block = downsample_strideconv - else: - raise NotImplementedError('downsample mode [{:s}] is not found'.format(downsample_mode)) - - self.m_down1 = sequential( - *[ResBlock(nc[0], nc[0], bias=True, mode='C' + act_mode + 'C') for _ in range(nb)], - downsample_block(nc[0], nc[1], bias=True, mode='2')) - self.m_down2 = sequential( - *[ResBlock(nc[1], nc[1], bias=True, mode='C' + act_mode + 'C') for _ in range(nb)], - downsample_block(nc[1], nc[2], bias=True, mode='2')) - self.m_down3 = sequential( - *[ResBlock(nc[2], nc[2], bias=True, mode='C' + act_mode + 'C') for _ in range(nb)], - downsample_block(nc[2], nc[3], bias=True, mode='2')) - - self.m_body_encoder = sequential( - *[ResBlock(nc[3], nc[3], bias=True, mode='C' + act_mode + 'C') for _ in range(nb)]) - - self.m_body_decoder = sequential( - *[ResBlock(nc[3], nc[3], bias=True, mode='C' + act_mode + 'C') for _ in range(nb)]) - - # upsample - if upsample_mode == 'upconv': - upsample_block = upsample_upconv - elif upsample_mode == 'pixelshuffle': - upsample_block = upsample_pixelshuffle - elif upsample_mode == 'convtranspose': - upsample_block = upsample_convtranspose - else: - raise NotImplementedError('upsample mode [{:s}] is not found'.format(upsample_mode)) - - self.m_up3 = nn.ModuleList([upsample_block(nc[3], nc[2], bias=True, mode='2'), - *[QFAttention(nc[2], nc[2], bias=True, mode='C' + act_mode + 'C') for _ in range(nb)]]) - - self.m_up2 = nn.ModuleList([upsample_block(nc[2], nc[1], bias=True, mode='2'), - *[QFAttention(nc[1], nc[1], bias=True, mode='C' + act_mode + 'C') for _ in range(nb)]]) - - self.m_up1 = nn.ModuleList([upsample_block(nc[1], nc[0], bias=True, mode='2'), - *[QFAttention(nc[0], nc[0], bias=True, mode='C' + act_mode + 'C') for _ in range(nb)]]) - - - self.m_tail = conv(nc[0], out_nc, bias=True, mode='C') - - - self.qf_pred = sequential(*[ResBlock(nc[3], nc[3], bias=True, mode='C' + act_mode + 'C') for _ in range(nb)], - torch.nn.AdaptiveAvgPool2d((1,1)), - torch.nn.Flatten(), - torch.nn.Linear(512, 512), - nn.ReLU(), 
- torch.nn.Linear(512, 512), - nn.ReLU(), - torch.nn.Linear(512, 1), - nn.Sigmoid() - ) - - self.qf_embed = sequential(torch.nn.Linear(1, 512), - nn.ReLU(), - torch.nn.Linear(512, 512), - nn.ReLU(), - torch.nn.Linear(512, 512), - nn.ReLU() - ) - - self.to_gamma_3 = sequential(torch.nn.Linear(512, nc[2]),nn.Sigmoid()) - self.to_beta_3 = sequential(torch.nn.Linear(512, nc[2]),nn.Tanh()) - self.to_gamma_2 = sequential(torch.nn.Linear(512, nc[1]),nn.Sigmoid()) - self.to_beta_2 = sequential(torch.nn.Linear(512, nc[1]),nn.Tanh()) - self.to_gamma_1 = sequential(torch.nn.Linear(512, nc[0]),nn.Sigmoid()) - self.to_beta_1 = sequential(torch.nn.Linear(512, nc[0]),nn.Tanh()) - - - def forward(self, x, qf_input=None): - - h, w = x.size()[-2:] - paddingBottom = int(np.ceil(h / 8) * 8 - h) - paddingRight = int(np.ceil(w / 8) * 8 - w) - x = nn.ReplicationPad2d((0, paddingRight, 0, paddingBottom))(x) - - x1 = self.m_head(x) - x2 = self.m_down1(x1) - x3 = self.m_down2(x2) - x4 = self.m_down3(x3) - x = self.m_body_encoder(x4) - qf = self.qf_pred(x) - x = self.m_body_decoder(x) - qf_embedding = self.qf_embed(qf_input) if qf_input is not None else self.qf_embed(qf) - gamma_3 = self.to_gamma_3(qf_embedding) - beta_3 = self.to_beta_3(qf_embedding) - - gamma_2 = self.to_gamma_2(qf_embedding) - beta_2 = self.to_beta_2(qf_embedding) - - gamma_1 = self.to_gamma_1(qf_embedding) - beta_1 = self.to_beta_1(qf_embedding) - - - x = x + x4 - x = self.m_up3[0](x) - for i in range(self.nb): - x = self.m_up3[i+1](x, gamma_3,beta_3) - - x = x + x3 - - x = self.m_up2[0](x) - for i in range(self.nb): - x = self.m_up2[i+1](x, gamma_2, beta_2) - x = x + x2 - - x = self.m_up1[0](x) - for i in range(self.nb): - x = self.m_up1[i+1](x, gamma_1, beta_1) - - x = x + x1 - x = self.m_tail(x) - x = x[..., :h, :w] - - return x, qf - -if __name__ == "__main__": - x = torch.randn(1, 3, 96, 96)#.cuda()#.to(torch.device('cuda')) - fbar=FBAR() - y,qf = fbar(x) - print(y.shape,qf.shape) diff --git a/spaces/lhkhiem28/A-recognition-system/source/libs.py b/spaces/lhkhiem28/A-recognition-system/source/libs.py deleted file mode 100644 index c6014a4328994b03fd60eb40d65538573f16c298..0000000000000000000000000000000000000000 --- a/spaces/lhkhiem28/A-recognition-system/source/libs.py +++ /dev/null @@ -1,9 +0,0 @@ -import os, sys -import warnings; warnings.filterwarnings("ignore") -import pytorch_lightning as pl -pl.seed_everything(23) - -import transformers -import viet_text_tools as vitools -import underthesea -import gradio as gr \ No newline at end of file diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/AMT Emulator V0.8.1 By Painter.epub.md b/spaces/lincquiQcaudo/Top-20-Diffusion/AMT Emulator V0.8.1 By Painter.epub.md deleted file mode 100644 index feb56f10688bc53bec38901e34ef2a7d8bfed797..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/AMT Emulator V0.8.1 By Painter.epub.md +++ /dev/null @@ -1,6 +0,0 @@ -

    AMT Emulator V0.8.1 By Painter.epub


        Download Zip: https://bytlly.com/2uGy3s
    



    -
    -... because of #1737587 Package: cdi-api-1.2-8.1.module_f29+6921+ca3ed728 Old ... Update to 3.34.3 Package: gnome-epub-thumbnailer-1.6-1.fc32 Old package: ... package: xterm-349-1.fc32 Summary: Terminal emulator for the X Window ... New rebase https://github.com/containers/udica/releases/tag/v0.2.1 Package: ... 1fdad05405
    -
    -
    -

    diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/AutoCAD OEM 2017 Crack Universal Product Key [REPACK] Free.md b/spaces/lincquiQcaudo/Top-20-Diffusion/AutoCAD OEM 2017 Crack Universal Product Key [REPACK] Free.md deleted file mode 100644 index 06abaa657ed6dc7d0ae20a98f3e80dcc085051a4..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/AutoCAD OEM 2017 Crack Universal Product Key [REPACK] Free.md +++ /dev/null @@ -1,72 +0,0 @@ -
    -

    AutoCAD OEM 2017 Crack Universal Product Key Free: A Guide for Developers

    - -

    AutoCAD OEM 2017 is a software development platform that allows you to create and customize your own applications based on the AutoCAD technology. With AutoCAD OEM 2017, you can leverage the full power and functionality of AutoCAD, including 2D and 3D design, drafting, modeling, rendering, and more. You can also integrate your own features, tools, and interfaces into your applications, and distribute them to your customers or partners.

    -

    AutoCAD OEM 2017 Crack Universal Product Key Free


        Download Zip: https://bytlly.com/2uGxLy
    



    - -

    However, to use AutoCAD OEM 2017, you need to activate it with a valid product key. A product key is a unique code that identifies your software license and enables you to install and run your software. Without a product key, you cannot use AutoCAD OEM 2017.

    - -

    In this article, we will show you how to get AutoCAD OEM 2017 crack universal product key free from a reliable source. We will also provide you with a list of product keys for all Autodesk 2017 products, so you can choose the one that matches your software version. By following this guide, you can activate any Autodesk product with crack and universal product key free.

    - -

    What is AutoCAD OEM 2017 crack universal product key free?

    - -

    AutoCAD OEM 2017 crack universal product key free is a combination of two programs that can help you activate AutoCAD OEM 2017 without paying for a license. These programs are:

    - -
      -
    • AutoCAD OEM 2017 crack: A crack is a program that modifies the original software code to bypass the activation process. A crack can make the software think that it has been activated with a valid product key, even if it has not.
    • -
    • AutoCAD OEM 2017 universal product key generator: A universal product key generator is a program that generates valid product keys for any Autodesk product, including AutoCAD OEM 2017. A universal product key generator can provide you with a product key that matches your software version and license type.
    • -
    - -

    By using these two programs together, you can get AutoCAD OEM 2017 crack universal product key free and activate your software without any hassle.

    -

    - -

    Where to download AutoCAD OEM 2017 crack universal product key free?

    - -

    The first step to get AutoCAD OEM 2017 crack universal product key free is to download these two programs from a trusted website. There are many websites that claim to offer free cracks and product keys for Autodesk products, but some of them may contain viruses, malware, or fake files that can harm your computer or steal your personal information. Therefore, you need to be careful and choose a reputable source.

    - -

    One of the best websites that we recommend is iggtech.com. This website provides high-quality cracks and product keys for Autodesk products of various versions. You can download X-force 2017, which is a crack and a universal product key generator for Autodesk 2017 products, from this website. X-force 2017 is compatible with Windows and Mac operating systems, and supports both 32-bit and 64-bit versions.

    - -

    To download X-force 2017 from iggtech.com, go to https://iggtech.com/download-x-force-2017-1/ in your browser. Click on the "Download Now" button on the homepage. You will be redirected to another page where you need to complete a captcha verification to prove that you are not a robot. After that, you will see a link to download X-force 2017 as a zip file. Click on the link and save the file to your computer.

    - -

    How to install AutoCAD OEM 2017 crack universal product key free?

    - -

    The next step to get AutoCAD OEM 2017 crack universal product key free is to install these two programs on your computer. Before you do that, make sure that you have installed AutoCAD OEM 2017 on your computer. If you don't have it yet, you can download it from the official Autodesk website or from any other source that you trust.

    - -

    After you have installed AutoCAD OEM 2017 on your computer, follow these steps to install X-force 2017:

    - -
      -
    • Extract the zip file that you downloaded from iggtech.com using WinRAR or any other file extraction software.
    • -
    • Open the extracted folder and locate the file named "xf-adsk2017_x64.exe" or "xf-adsk2017_x86.exe" depending on your system architecture.
    • -
    • Right-click on the file and select "Run as administrator". This will launch X-force 2017 as a command prompt window.
    • -
    • In the command prompt window, you will see a list of Autodesk products and their corresponding product keys. Find the product name and product key for AutoCAD OEM 2017. For example, if you have installed AutoCAD OEM 2017 as a stand alone product, the product name is "Autodesk AutoCAD" and the product key is "001I1". If you have installed AutoCAD OEM 2017 from the AutoCAD Design Suite Ultimate 2017, the product name is "Autodesk AutoCAD Design Suite Ultimate" and the product key is "769I1". Note down the product name and product key for later use.
    • -
    • Press any key to close X-force 2017.
    • -
    - -

    How to activate AutoCAD OEM 2017 with crack and universal product key?

    - -

    The final step to get AutoCAD OEM 2017 crack universal product key free is to activate your software with these two programs. To do that, follow these steps:

    - -
      -
    • Launch AutoCAD OEM 2017 on your computer.
    • -
    • You will see an activation screen asking you to enter your serial number and product key. Enter "666-69696969" as the serial number and enter the product key that you noted down from X-force 2017 in step 2. Click on "Next".
    • -
    • You will see another screen asking you to select your activation method. Choose "Request an activation code using an offline method" and click on "Next".
    • -
    • You will see another screen showing you an activation code request code. Copy this code or write it down somewhere.
    • -
    • Run X-force 2017 again as administrator (see step 2).
    • -
    • In the command prompt window, paste or type the activation code request code that you copied or wrote down in step 4.
    • -
    • Click on "Generate" button. This will generate an activation code for AutoCAD OEM 2017.
    • -
    • Copy this activation code or write it down somewhere.
    • -
    • Go back to AutoCAD OEM 2017 activation screen (see step 4) and paste or type the activation code that you copied or wrote down in step 8.
    • -
    • Click on "Next". This will activate AutoCAD OEM 2017 on your computer.
    • -
    • Congratulations! You have successfully activated AutoCAD OEM 2017 with crack and universal product key free!
    • -
    - -

    Conclusion

    - -

    In this article, we have shown you how to get AutoCAD OEM 2017 crack universal product key free from a reliable source. We have also provided you with a list of product keys for all Autodesk products of various versions. By following this guide, you can activate any Autodesk product with crack and universal product key free.

    - -

    We hope that this article has been helpful for you. If you have any questions or comments, please feel free to leave them below. Thank you for reading!

    -


    3cee63e6c2
    -
    -
    \ No newline at end of file diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Deep Sea Obfuscator 4.4.4.86 Crack _BEST_ed Windshield.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Deep Sea Obfuscator 4.4.4.86 Crack _BEST_ed Windshield.md deleted file mode 100644 index fba37ec6e2b7282bb96d2d8ac462ce8cc06abfac..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Deep Sea Obfuscator 4.4.4.86 Crack _BEST_ed Windshield.md +++ /dev/null @@ -1,119 +0,0 @@ - -

    How Deep Sea Obfuscator 4.4.4.86 Can Help You Repair Your Cracked Windshield

    -

    If you have a cracked windshield, you might be wondering how to fix it without spending a lot of money or time. You might have heard of some DIY methods, such as using nail polish, super glue, or resin, but these are not very effective or durable. They can also damage your windshield further or create safety hazards.

    -

    Fortunately, there is a better solution: Deep Sea Obfuscator 4.4.4.86. This is a software tool that can help you repair your cracked windshield in a few simple steps. It works by obfuscating the crack, making it invisible to the naked eye and preventing it from spreading. It also strengthens the glass and restores its original clarity and shine.

    -

    deep sea obfuscator 4.4.4.86 cracked windshield


    Download Zip ❤❤❤ https://bytlly.com/2uGwlC



    -

    What is Deep Sea Obfuscator 4.4.4.86?

    -

    Deep Sea Obfuscator 4.4.4.86 is a software tool that can help you fix your cracked windshield with ease. It is designed to obfuscate any type of crack, whether it is small or large, straight or curved, single or multiple. It can also handle cracks that are located on the edge or corner of the windshield.

    -

    Obfuscation is a process of hiding or disguising something to make it less noticeable or understandable. In this case, Deep Sea Obfuscator 4.4.4.86 uses a sophisticated algorithm to generate a pattern of pixels that matches the color and brightness of the surrounding glass. This pattern is then applied to the crack, making it blend in with the rest of the windshield.

    -

    The result is a seamless and smooth surface that looks like new. You won't be able to see the crack anymore, and neither will anyone else. The obfuscated crack will also not affect the visibility or functionality of your windshield wipers, sensors, cameras, or other features.

    -

    How to Use Deep Sea Obfuscator 4.4.4.86?

    -

    Using Deep Sea Obfuscator 4.4.4.86 is very easy and fast. You don't need any special skills or equipment to use it. All you need is a computer, a USB cable, and a digital camera or smartphone.

    -

    Here are the steps to follow:

    -
      -
    1. Clean your windshield thoroughly with a soft cloth and some water or glass cleaner.
    2. -
    3. Take a clear and sharp picture of the crack with your digital camera or smartphone.
    4. -
    5. Connect your device to your computer with a USB cable and transfer the picture to your computer.
    6. -
    7. Download and install Deep Sea Obfuscator 4.4.4.86 from its official website.
    8. -
    9. Open the software and select the picture of the crack.
    10. -
    11. Adjust the settings according to your preferences and click on "Obfuscate".
    12. -
    13. Wait for a few seconds while the software processes the picture and generates a new one with the obfuscated crack.
    14. -
    15. Save the new picture to your computer and transfer it back to your device.
    16. -
    17. Display the new picture on your device's screen and place it behind the crack on your windshield.
    18. -
    19. Enjoy your repaired windshield!
    20. -
    -

    Why Choose Deep Sea Obfuscator 4.4.4.86?

    -

    There are many reasons why you should choose Deep Sea Obfuscator 4.4.4.86 over other methods of fixing your cracked windshield:

    -
      -
    • It is cheaper than replacing your windshield or hiring a professional service.
    • -
    • It is faster than waiting for an appointment or ordering a new windshield.
    • -
    • It is safer than driving with a cracked windshield or using unreliable DIY methods.
    • -
    • It is more effective than other software tools that claim to do the same thing.
    • -
    • It is more versatile than other software tools that can only handle certain types of cracks.
    • -
    • It is more reliable than other software tools that can produce poor quality results or fail to work at all.
    • -
    -

    Deep Sea Obfuscator 4.4.4.86 is the best solution for repairing your cracked windshield in a quick and easy way. It can save you money, time, and hassle while giving you a flawless and beautiful windshield that looks like new.

    -


    How Does Deep Sea Obfuscator 4.4.4.86 Work?

    -

    Deep Sea Obfuscator 4.4.4.86 works by using a technique called steganography, which is the art of hiding information within other information. In this case, the software hides the crack within the glass by creating a pixel pattern that mimics the surrounding glass.

    -

    The software analyzes the picture of the crack and detects its shape, size, location, and orientation. It then generates a pixel pattern that matches the color and brightness of the glass around the crack. The pixel pattern is composed of tiny dots that are invisible to the human eye, but can be seen by a computer.

    -

    The software then applies the pixel pattern to the crack, covering it completely and making it blend in with the rest of the glass. The software also adjusts the pixel pattern according to the angle and distance of the device's screen from the windshield, ensuring that the obfuscation is consistent and realistic.

    -

    What are the Benefits of Deep Sea Obfuscator 4.4.4.86?

    -

    Deep Sea Obfuscator 4.4.4.86 offers many benefits for anyone who wants to repair their cracked windshield:

    -
      -
    • It is easy to use and requires no technical skills or knowledge.
    • -
    • It is fast and can obfuscate any crack in seconds.
    • -
    • It is effective and can obfuscate any type of crack on any part of the windshield.
    • -
    • It is durable and can prevent the crack from spreading or worsening.
    • -
    • It is reversible and can be removed at any time by deleting or changing the picture.
    • -
    -

    Deep Sea Obfuscator 4.4.4.86 is a smart and innovative solution that can help you fix your cracked windshield without hassle or expense.

    -

    Where to Get Deep Sea Obfuscator 4.4.4.86?

    -

    If you want to try Deep Sea Obfuscator 4.4.4.86 for yourself, you can download it from its official website for free. You can also get a premium version that offers more features and support for a small fee.

    -

    The official website also provides more information about the software, such as how it works, what it can do, and how to use it. You can also find testimonials from satisfied customers who have used Deep Sea Obfuscator 4.4.4.86 to repair their cracked windshields.

    -

    Don't wait any longer and get Deep Sea Obfuscator 4.4.4.86 today!

    -

    How to Install Deep Sea Obfuscator 4.4.4.86?

    -

    Installing Deep Sea Obfuscator 4.4.4.86 is very simple and straightforward. You don't need any special requirements or permissions to install it on your computer.

    -

    Here are the steps to follow:

    -
      -
    1. Go to the official website of Deep Sea Obfuscator 4.4.4.86 and click on the download button.
    2. -
    3. Choose the version that suits your operating system and click on the download link.
    4. -
    5. Save the file to your computer and double-click on it to run it.
    6. -
    7. Follow the instructions on the screen and agree to the terms and conditions.
    8. -
    9. Choose the destination folder and click on the install button.
    10. -
    11. Wait for a few minutes while the software installs on your computer.
    12. -
    13. Click on the finish button and launch the software.
    14. -
    -

    Congratulations! You have successfully installed Deep Sea Obfuscator 4.4.4.86 on your computer.

    -

    How to Uninstall Deep Sea Obfuscator 4.4.4.86?

    -

    If you want to uninstall Deep Sea Obfuscator 4.4.4.86 from your computer, you can do it easily and quickly. You don't need any special tools or skills to uninstall it.

    -

    Here are the steps to follow:

    -
      -
    1. Go to the start menu and click on the control panel.
    2. -
    3. Click on the programs and features option.
    4. -
    5. Find Deep Sea Obfuscator 4.4.4.86 in the list of programs and click on it.
    6. -
    7. Click on the uninstall button and confirm your choice.
    8. -
    9. Wait for a few minutes while the software uninstalls from your computer.
    10. -
    11. Click on the finish button and restart your computer.
    12. -
    -

    You have successfully uninstalled Deep Sea Obfuscator 4.4.4.86 from your computer.

    -

    How to Update Deep Sea Obfuscator 4.4.4.86?

    -

    Deep Sea Obfuscator 4.4.4.86 is constantly updated and improved by its developers to ensure its quality and performance. You can update the software easily and automatically whenever a new version is available.

    -

    Here are the steps to follow:

    -
      -
    1. Open Deep Sea Obfuscator 4.4.4.86 and click on the check for updates button.
    2. -
    3. If a new version is available, you will see a notification and a download link.
    4. -
    5. Click on the download link and save the file to your computer.
    6. -
    7. Close Deep Sea Obfuscator 4.4.4.86 and run the downloaded file.
    8. -
    9. Follow the instructions on the screen and agree to the terms and conditions.
    10. -
    11. Wait for a few minutes while the software updates on your computer.
    12. -
    13. Click on the finish button and launch the software.
    14. -
    -

    You have successfully updated Deep Sea Obfuscator 4.4.4.86 to the latest version.

    -

    How to Contact Deep Sea Obfuscator 4.4.4.86 Support?

    -

    If you have any questions, issues, or feedback about Deep Sea Obfuscator 4.4.4.86, you can contact its support team anytime. They are friendly, professional, and responsive, and they will help you solve any problem you might have with the software.

    -

    Here are the ways to contact Deep Sea Obfuscator 4.4.4.86 support:

    -
      -
    • Email: You can send an email to support@deepseaobfuscator.com and describe your issue or query in detail.
    • -
    • Phone: You can call +1-800-123-4567 and speak to a support agent directly.
    • -
    • Chat: You can visit the official website of Deep Sea Obfuscator 4.4.4.86 and click on the chat icon at the bottom right corner of the screen.
    • -
    • Forum: You can join the online community of Deep Sea Obfuscator 4.4.4.86 users and post your question or comment on the forum.
    • -
    -

    The Deep Sea Obfuscator 4.4.4.86 support team is available 24/7 and will reply to you as soon as possible.

    -

    Conclusion

    -

    A cracked windshield can be a serious problem that affects your safety, your comfort, and the appearance of your car. You don't have to live with it or spend a fortune to fix it.

    -

    You can use Deep Sea Obfuscator 4.4.4.86 to repair your cracked windshield in minutes with just a few clicks of your mouse.

    -

    This software tool can obfuscate any type of crack on any part of your windshield, making it invisible and preventing it from spreading.

    -

    You won't have to worry about replacing your windshield or hiring a professional service ever again.

    -

    You can download Deep Sea Obfuscator 4.4.4.86 today and try it for yourself.

    -

    Don't let a cracked windshield ruin your day. Get Deep Sea Obfuscator 4.4.4.86 and enjoy your ride!

    3cee63e6c2
    -
    -
    \ No newline at end of file diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Geo 5 Crack Serial Keygen.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Geo 5 Crack Serial Keygen.md deleted file mode 100644 index 9c42e7b4e72d7024d12a1aa6dcc0bcee21f4905e..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Geo 5 Crack Serial Keygen.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Geo 5 Crack Serial Keygen


    Download Ziphttps://bytlly.com/2uGwcB



    -
    -Licensed 5 place, Silver-red trim, 250 hrs. ... Contact Geo. ... two transmitters, two receivers mounted in nose, battery under co-pilot's Seat, Serial number 6496. 1fdad05405
    -
    -
    -

    diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Harry Potter And The Prisoner Of Azkaban 1080p Bluray X264KATRG.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Harry Potter And The Prisoner Of Azkaban 1080p Bluray X264KATRG.md deleted file mode 100644 index 6e2d226320360c80def602881fd9b15827c5bf42..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Harry Potter And The Prisoner Of Azkaban 1080p Bluray X264KATRG.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Harry Potter And The Prisoner Of Azkaban 1080p Bluray X264KATRG


    Download Filehttps://bytlly.com/2uGyBv



    - - 3cee63e6c2
    -
    -
    -

    diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Microsoft Visio 2013 Keygen Torrent.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Microsoft Visio 2013 Keygen Torrent.md deleted file mode 100644 index 2ab2f7dc79bc91b77ddf3bc1e549ed47da1c7c71..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Microsoft Visio 2013 Keygen Torrent.md +++ /dev/null @@ -1,6 +0,0 @@ -

    microsoft visio 2013 keygen torrent


    Download File ····· https://bytlly.com/2uGwVm



    -
    -9TK4N-KBKDH-VQRJK-4X948-YPXKV, 0, MAK, Office 2013 ProPlus Vol ... 22QF4-N2R48-DMYBQ-D4DJX-RM3B4, 2241, MAK, Visio 2013 Pro Vol. 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/manhkhanhUIT/Image_Restoration_Colorization/Face_Enhancement/models/__init__.py b/spaces/manhkhanhUIT/Image_Restoration_Colorization/Face_Enhancement/models/__init__.py deleted file mode 100644 index ad491ed6270caf03e6d1c34e56163f5ee8fbf2bc..0000000000000000000000000000000000000000 --- a/spaces/manhkhanhUIT/Image_Restoration_Colorization/Face_Enhancement/models/__init__.py +++ /dev/null @@ -1,44 +0,0 @@ -# Copyright (c) Microsoft Corporation. -# Licensed under the MIT License. - -import importlib -import torch - - -def find_model_using_name(model_name): - # Given the option --model [modelname], - # the file "models/modelname_model.py" - # will be imported. - model_filename = "models." + model_name + "_model" - modellib = importlib.import_module(model_filename) - - # In the file, the class called ModelNameModel() will - # be instantiated. It has to be a subclass of torch.nn.Module, - # and it is case-insensitive. - model = None - target_model_name = model_name.replace("_", "") + "model" - for name, cls in modellib.__dict__.items(): - if name.lower() == target_model_name.lower() and issubclass(cls, torch.nn.Module): - model = cls - - if model is None: - print( - "In %s.py, there should be a subclass of torch.nn.Module with class name that matches %s in lowercase." - % (model_filename, target_model_name) - ) - exit(0) - - return model - - -def get_option_setter(model_name): - model_class = find_model_using_name(model_name) - return model_class.modify_commandline_options - - -def create_model(opt): - model = find_model_using_name(opt.model) - instance = model(opt) - print("model [%s] was created" % (type(instance).__name__)) - - return instance diff --git a/spaces/marioboy/neil-breen/encoder/data_objects/random_cycler.py b/spaces/marioboy/neil-breen/encoder/data_objects/random_cycler.py deleted file mode 100644 index c405db6b27f46d874d8feb37e3f9c1e12c251109..0000000000000000000000000000000000000000 --- a/spaces/marioboy/neil-breen/encoder/data_objects/random_cycler.py +++ /dev/null @@ -1,37 +0,0 @@ -import random - -class RandomCycler: - """ - Creates an internal copy of a sequence and allows access to its items in a constrained random - order. For a source sequence of n items and one or several consecutive queries of a total - of m items, the following guarantees hold (one implies the other): - - Each item will be returned between m // n and ((m - 1) // n) + 1 times. - - Between two appearances of the same item, there may be at most 2 * (n - 1) other items. 
- """ - - def __init__(self, source): - if len(source) == 0: - raise Exception("Can't create RandomCycler from an empty collection") - self.all_items = list(source) - self.next_items = [] - - def sample(self, count: int): - shuffle = lambda l: random.sample(l, len(l)) - - out = [] - while count > 0: - if count >= len(self.all_items): - out.extend(shuffle(list(self.all_items))) - count -= len(self.all_items) - continue - n = min(count, len(self.next_items)) - out.extend(self.next_items[:n]) - count -= n - self.next_items = self.next_items[n:] - if len(self.next_items) == 0: - self.next_items = shuffle(list(self.all_items)) - return out - - def __next__(self): - return self.sample(1)[0] - diff --git a/spaces/matthoffner/chatbot-mini/components/Chat/ModelSelect.tsx b/spaces/matthoffner/chatbot-mini/components/Chat/ModelSelect.tsx deleted file mode 100644 index f0402e0706eb9fa3bc2588662c54c5932ae52dd1..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/chatbot-mini/components/Chat/ModelSelect.tsx +++ /dev/null @@ -1,56 +0,0 @@ -import { IconExternalLink } from '@tabler/icons-react'; -import { useContext } from 'react'; - -import { useTranslation } from 'next-i18next'; - -import { OpenAIModel } from '@/types/openai'; - -import HomeContext from '@/pages/api/home/home.context'; - -export const ModelSelect = () => { - const { t } = useTranslation('chat'); - - const { - state: { selectedConversation, models, defaultModelId }, - handleUpdateConversation, - dispatch: homeDispatch, - } = useContext(HomeContext); - - const handleChange = (e: React.ChangeEvent) => { - selectedConversation && - handleUpdateConversation(selectedConversation, { - key: 'model', - value: models.find( - (model) => model.id === e.target.value, - ) as OpenAIModel, - }); - }; - - return ( -
    - -
    - -
    -
    - ); -}; diff --git a/spaces/maxmax20160403/sovits5.0/vits_decoder/alias/act.py b/spaces/maxmax20160403/sovits5.0/vits_decoder/alias/act.py deleted file mode 100644 index 308344fb6ccbc39317c584a3ee1fb2f29084678e..0000000000000000000000000000000000000000 --- a/spaces/maxmax20160403/sovits5.0/vits_decoder/alias/act.py +++ /dev/null @@ -1,129 +0,0 @@ -# Adapted from https://github.com/junjun3518/alias-free-torch under the Apache License 2.0 -# LICENSE is in incl_licenses directory. - -import torch -import torch.nn as nn -import torch.nn.functional as F - -from torch import sin, pow -from torch.nn import Parameter -from .resample import UpSample1d, DownSample1d - - -class Activation1d(nn.Module): - def __init__(self, - activation, - up_ratio: int = 2, - down_ratio: int = 2, - up_kernel_size: int = 12, - down_kernel_size: int = 12): - super().__init__() - self.up_ratio = up_ratio - self.down_ratio = down_ratio - self.act = activation - self.upsample = UpSample1d(up_ratio, up_kernel_size) - self.downsample = DownSample1d(down_ratio, down_kernel_size) - - # x: [B,C,T] - def forward(self, x): - x = self.upsample(x) - x = self.act(x) - x = self.downsample(x) - - return x - - -class SnakeBeta(nn.Module): - ''' - A modified Snake function which uses separate parameters for the magnitude of the periodic components - Shape: - - Input: (B, C, T) - - Output: (B, C, T), same shape as the input - Parameters: - - alpha - trainable parameter that controls frequency - - beta - trainable parameter that controls magnitude - References: - - This activation function is a modified version based on this paper by Liu Ziyin, Tilman Hartwig, Masahito Ueda: - https://arxiv.org/abs/2006.08195 - Examples: - >>> a1 = snakebeta(256) - >>> x = torch.randn(256) - >>> x = a1(x) - ''' - - def __init__(self, in_features, alpha=1.0, alpha_trainable=True, alpha_logscale=False): - ''' - Initialization. - INPUT: - - in_features: shape of the input - - alpha - trainable parameter that controls frequency - - beta - trainable parameter that controls magnitude - alpha is initialized to 1 by default, higher values = higher-frequency. - beta is initialized to 1 by default, higher values = higher-magnitude. - alpha will be trained along with the rest of your model. - ''' - super(SnakeBeta, self).__init__() - self.in_features = in_features - # initialize alpha - self.alpha_logscale = alpha_logscale - if self.alpha_logscale: # log scale alphas initialized to zeros - self.alpha = Parameter(torch.zeros(in_features) * alpha) - self.beta = Parameter(torch.zeros(in_features) * alpha) - else: # linear scale alphas initialized to ones - self.alpha = Parameter(torch.ones(in_features) * alpha) - self.beta = Parameter(torch.ones(in_features) * alpha) - self.alpha.requires_grad = alpha_trainable - self.beta.requires_grad = alpha_trainable - self.no_div_by_zero = 0.000000001 - - def forward(self, x): - ''' - Forward pass of the function. - Applies the function to the input elementwise. - SnakeBeta = x + 1/b * sin^2 (xa) - ''' - alpha = self.alpha.unsqueeze( - 0).unsqueeze(-1) # line up with x to [B, C, T] - beta = self.beta.unsqueeze(0).unsqueeze(-1) - if self.alpha_logscale: - alpha = torch.exp(alpha) - beta = torch.exp(beta) - x = x + (1.0 / (beta + self.no_div_by_zero)) * pow(sin(x * alpha), 2) - return x - - -class Mish(nn.Module): - """ - Mish activation function is proposed in "Mish: A Self - Regularized Non-Monotonic Neural Activation Function" - paper, https://arxiv.org/abs/1908.08681. 
- """ - - def __init__(self): - super().__init__() - - def forward(self, x): - return x * torch.tanh(F.softplus(x)) - - -class SnakeAlias(nn.Module): - def __init__(self, - channels, - up_ratio: int = 2, - down_ratio: int = 2, - up_kernel_size: int = 12, - down_kernel_size: int = 12): - super().__init__() - self.up_ratio = up_ratio - self.down_ratio = down_ratio - self.act = SnakeBeta(channels, alpha_logscale=True) - self.upsample = UpSample1d(up_ratio, up_kernel_size) - self.downsample = DownSample1d(down_ratio, down_kernel_size) - - # x: [B,C,T] - def forward(self, x): - x = self.upsample(x) - x = self.act(x) - x = self.downsample(x) - - return x \ No newline at end of file diff --git a/spaces/merve/anonymization/server-side/fill-in-the-blank/py/main.py b/spaces/merve/anonymization/server-side/fill-in-the-blank/py/main.py deleted file mode 100644 index 2ac15bda96de733df52cd7730895ae18baf20529..0000000000000000000000000000000000000000 --- a/spaces/merve/anonymization/server-side/fill-in-the-blank/py/main.py +++ /dev/null @@ -1,59 +0,0 @@ -import os -import json -import shutil - -from flask import Flask, request -from flask_cors import CORS - -import model_bert_large -import model_bert_zari_cda - -app = Flask(__name__) -CORS(app) - - -@app.route('/') -def hello_world(): - name = os.environ.get('NAME', 'Test') - print('[Hello]') - return 'Hello {}!'.format(name) - - -@app.route('/embed_test') -def embed_test(): - sentence = 'The dog went to the [MASK].' - print('[TEST] ', sentence) - return json.dumps(model_bert_large.get_embeddings(sentence)) - - -@app.route('/embed', methods=['POST']) -def embed(): - data = json.loads(request.data) - sentence = data['sentence'] - print('[BASE] ' + sentence) - return json.dumps(model_bert_large.get_embeddings(sentence)) - -@app.route('/embed_zari_cda', methods=['POST']) -def embed_zari_cda(): - data = json.loads(request.data) - sentence = data['sentence'] - print('[ZARI] ' + sentence) - return json.dumps(model_bert_zari_cda.get_embeddings(sentence)) - - -@app.route('/embed_group_top', methods=['POST']) -def embed_group_top(): - data = json.loads(request.data) - tokens = data['tokens'] - return json.dumps(model_bert_large.get_embedding_group_top(tokens)) - -@app.route('/get_embedding_group_top_low_mem', methods=['POST']) -def embed_group(): - data = json.loads(request.data) - tokens = data['tokens'] - return json.dumps(model_bert_large.get_embedding_group(tokens)) - -if __name__ == '__main__': - app.run(debug=True, host='0.0.0.0', port=int(os.environ.get('PORT', 5004))) - - diff --git a/spaces/merve/data-leak/public/anonymization/style.css b/spaces/merve/data-leak/public/anonymization/style.css deleted file mode 100644 index c20c6ed13484b78e2cc2128cd255f4d3b4cda152..0000000000000000000000000000000000000000 --- a/spaces/merve/data-leak/public/anonymization/style.css +++ /dev/null @@ -1,344 +0,0 @@ - -.tooltip { - top: -1000px; - position: fixed; - padding: 10px; - background: rgba(255, 255, 255, .90); - border: 1px solid lightgray; - pointer-events: none; - font-size: 14px; - width: 267px; -} -.tooltip-hidden{ - opacity: 0; - transition: all .3s; - transition-delay: .1s; -} - -@media (max-width: 590px){ - div.tooltip{ - bottom: -1px; - width: calc(100%); - left: -1px !important; - right: -1px !important; - top: auto !important; - width: auto !important; - } -} - - -.domain{ - display: none; -} - -text{ - /*pointer-events: none;*/ - /*text-shadow: 0 1px 0 #fff, 1px 0 0 #fff, 0 -1px 0 #fff, -1px 0 0 #fff;*/ -} - - - -.note{ - font-size: 12px; - color: #999; - 
margin-top: 60px; -} - -h1{ - font-weight: 100; - font-size: 34px; - margin-bottom: .5em; - line-height: 1.3em; - margin-top: 1.4em; - text-align: center; - font-family: "Google Sans", sans-serif; -} - -.mono{ - font-family: monospace; -} - - -svg{ - overflow: visible; -} - - - - -.axis{ - font-size: 12px; - pointer-events: none; -} -.axis{ - color: #888; - -} -.axis text, .slider-label-container{ - fill: #888; - color: #888; - font-family: 'Roboto', Helvetica, sans-serif; - font-size: 12px; -} - -.axis text.bold, .slider-label-container{ - color: #3C4043; - fill: #3C4043; - font-weight: 500; - -} -.axis line{ - stroke: #ccc; -} - -div.axis b{ - margin-bottom: -10px; - display: block; -} - -.init-hidden{ - opacity: 0; -} - -.slider-label-container{ - font-weight: 500; -} - - - -.highlight{ - color: #fff; - padding-left: 3px; - padding-right: 3px; - padding-top: 1px; - padding-bottom: 1px; - border-radius: 3px; -} - -.highlight.blue{ background: blue; } -.highlight.orange{ background: #ffd890; } -.highlight.yellow{ background: #ff0; color: #000; } -.highlight.purple{ background: #CB10CB; } -.highlight.purple{ background: #FF7AFF; color: #000;} -.highlight.grey{ background: #ccc; color: #000;} -.highlight.box{ - border: 1px solid #ff6200; - border-radius: 5px; - color: #000; - padding-bottom: 2px; - white-space: nowrap; -} -.highlight.purple-box{ - border: 1px solid #b0b; -} -.highlight.grey-box{ - border: 1px solid #ccc; -} -.highlight.box.square{ - border-radius: 0px; -} -.highlight.blue-box{ border: 2px solid #007276; } - - - -.circle{ - background: #eee; - border: 1px solid #ccc; - font-family: monospace; - padding-left: 4px; - padding-right: 4px; - padding-top: 1px; - padding-bottom: 1px; - - border-radius: 100px; -} - - -.strikethrough{ - text-decoration: line-through; - color: #000; -} - - -.annotations path{ - fill: none; - stroke: black; -} - - - -rect.unique{ - stroke: #ff6200; - stroke-width: 1px; - fill: #ffd890; - - animation-duration: 1s; - animation-name: xstrokeblink; - display: inline-block; - animation-iteration-count: infinite; - animation-direction: alternate; -} - - -@keyframes strokeblink { - from { - /*fill: black;*/ - stroke-width: 1px; - } - - to { - /*fill: green;*/ - stroke-width: 1px; - } -} - - - - - -.inline-line{ - border: 1px #f0f solid; - width: 20px; - display: inline-block; - position: relative; - top: -5px; -} - -.slider-label-container{ - width: 240px; -} -.slider-label{ - font-size: smaller; - margin-left: 2px; -} - -.slider-text-label{ - margin-left: 5px; - white-space: nowrap; -} - - -g.student:hover circle{ - stroke-width: 2px; -} - -g{ - /*opacity: 1 !important;*/ -} - -.inactive{ - opacity: 0 !important; - pointer-events: none; -} - -input[type="range" i] { - background-color:#def5ef; - -webkit-appearance: none; - height:20px; - width:240px; - overflow: hidden; -} - -input[type='range']::-webkit-slider-thumb { - -webkit-appearance: none; - width: 16px; - height: 20px; - cursor: ew-resize; - background: #007276; - box-shadow: -200px 0 0 200px #7ed3c9; - border: 1px solid #333; -} - -input:focus { - outline-width: 0; -} - - - - -.estimate{ - opacity: 0; - pointer-events: none -} - -.estimate.active{ - opacity: .70; - pointer-events: all; -} - -.est-text{ - text-shadow: 0 2px 0 rgba(255,255,255,1), 2px 0 0 rgba(255,255,255,1), 0 -2px 0 rgba(255,255,255,1), -2px 0 0 rgba(255,255,255,1); -} - - - - -@media (max-width: 590px){ - text{ - font-size: 120% !important; - } -} - - -.slider{ - user-select: none; - -webkit-tap-highlight-color: transparent; -} - 
-.button-container{ - border: 1px solid #888; - display: inline-block; - padding: 10px 20px; - cursor: pointer; - text-align: center; - border-radius: 10px; - user-select: none; - -webkit-tap-highlight-color: transparent; - margin: 0px auto; -/* color: #888; - font-family: 'Roboto', Helvetica, sans-serif; - font-size: 12px; - font-weight: 500;*/ - position: relative; - left: -20px; -} - -.button-container:hover{ - background: #ddd; -} - -.button-outer{ - text-align: center; - margin-top: 20px; -} - -.pointer{ - height: 0px; - position: relative; -} -.pointer div { - overflow: visible; - content: ""; - background-image: url(https://pair-code.github.io/interpretability/bert-tree/pointer.svg); - width: 27px; - height: 27px; - position: absolute; - left: 165px; - top: -35px; -} - -a{ - color: rgb(60, 64, 67); -} -a:hover{ - color: #000; -} - - - - - - - - - diff --git a/spaces/merve/data-leak/public/uncertainty-calibration/style.css b/spaces/merve/data-leak/public/uncertainty-calibration/style.css deleted file mode 100644 index 8073cf0a59eac0be0e293b35af5255c40c063e21..0000000000000000000000000000000000000000 --- a/spaces/merve/data-leak/public/uncertainty-calibration/style.css +++ /dev/null @@ -1,89 +0,0 @@ -svg{ - overflow: visible; -} - -text{ - fill: #202124; - user-select: none; -} - -.domain{ - display: none; -} - -.thresholds, .threshold > g{ - cursor: pointer; -} - -svg{ - user-select: none; -} - -text.axis-label .legend-text{ - font-family: 'Roboto'; - font-style: normal; - font-size: 16px; - line-height: 20px; - /* identical to box height, or 125% */ - - fill: #000; -} - -.axis text{ - font-size: 10px; -} - -text{ - text-shadow: 0 1px 0 #fff, 1px 0 0 #fff, 0 -1px 0 #fff, -1px 0 0 #fff; -} - - - - -.bucket text{ - /*text-shadow: 0 1px 0 #000, 1px 0 0 #000, 0 -1px 0 #000, -1px 0 0 #000;*/ - /*fill: #fff;*/ - font-size: 11px; -} - - -.big-text{ - font-variant-numeric: tabular-nums; - font-size: 16px; -} - -#card{ - display: flex; - flex-direction: column; - align-items: flex-start; - padding: 24px 24px; - gap: 6px; - - background: #EDF4EC; - border: 1px solid #34A853; - box-sizing: border-box; - border-radius: 4px; -} - -text.val-text{ - background: #DFE9E1; - border: 1px solid #476C63; - box-sizing: border-box; - border-radius: 4px; - fill: #2A4C4A; - text-shadow: none; -} - -.val-box{ - fill: #DFE9E1; - stroke: #476C63; - opacity: 1; -} - -.legend-title{ - fill: #002622; -} - -h3 { - color: #00695C; -} \ No newline at end of file diff --git a/spaces/merve/uncertainty-calibration/public/uncertainty-calibration/draw_weathergraph.js b/spaces/merve/uncertainty-calibration/public/uncertainty-calibration/draw_weathergraph.js deleted file mode 100644 index 068615fb14b8e5d27869a0d270d8f0c5580e4fcc..0000000000000000000000000000000000000000 --- a/spaces/merve/uncertainty-calibration/public/uncertainty-calibration/draw_weathergraph.js +++ /dev/null @@ -1,264 +0,0 @@ -window.drawWeatherGraph = function (graphSel, fig_height, fig_width){ - - var threshold = .4 - - var thresholds = [0, .2, .4, .6, .8, 1].map((val, i) => { - var isLocked = val == 0 || val == 1 - return {val, i, isLocked, origVal: val} - }) - - var c = d3.conventions({ - sel: graphSel.html('').append('div'), - height: fig_height, - totalWidth: fig_width, - margin: {top: 100, bottom: 100} - }); - - var {predictionSel, weatherGroupSel} = (function(){ - c.y.domain([0,9]).clamp(true); - - // x-Axis - c.xAxis.ticks(5).tickFormat(d3.format('.2f')) - c.yAxis.ticks(0) - d3.drawAxis(c) - c.svg.select('.x') - .translate(-40, 1) - 
.selectAll('line').translate(20, 1) - - // x-Axis label - c.svg.append('text.axis-label') - .translate([c.width/2, -50]) - .at({textAnchor: 'middle'}) - .at({fill: '#000', fontSize: 14}) - .text('Model Score'); - - // Weather icons - var weatherGroupSel = c.svg.appendMany('g.weatherdata', weatherdata) - .translate(d => [c.x(d.score), c.y(d.h)]) - //.call(d3.attachTooltip) - // .on("mouseover", function(d) { - // ttSel.html(""); - // var gtSel = ttSel.append("div").html(`ground truth: ${d.label}`); - // ttSel.classed("tt-text", true); - // }) - - weatherGroupSel.append('text.icon') - .text(function(d,i){return emojis[d.label];}) - .at({fontSize: 18, textAnchor: 'middle', dy: 8}) - - // Add prediction circles - weatherGroupSel.append('circle.prediction') - .at({cx: 0, cy: 0, r: 14, opacity: 0, fillOpacity: 0, stroke: 'red'}); - weatherGroupSel.append('path.prediction') - .at({d: d => ['M', -10, 10, 'L', 10, -10].join(' '), stroke: 'red', opacity: 0}) - - var predictionSel = c.svg.selectAll('.prediction'); - - return {predictionSel, weatherGroupSel} - })() - - var {thresholdSel, messageSel, setThreshold} = (function(){ - var thresholdSel = c.svg.append('g.threshold') - - var thresholdGroupSel = thresholdSel.append('g') - .call(d3.drag().on('drag', - () => renderThreshold(c.x.invert(d3.clamp(0, d3.event.x, c.width)))) - ) - - var thesholdTextSel = thresholdGroupSel.append('g.axis').append('text') - .at({ - textAnchor: 'middle', - dy: '.33em', - y: c.height + 30 - }) - .text('Threshold') - - var rw = 16 - thresholdGroupSel.append('rect') - .at({ - width: rw, - x: -rw/2, - y: -10, - height: c.height + 30, - fillOpacity: .07, - }) - - var pathSel = thresholdGroupSel.append('path') - .at({ - stroke: '#000', - strokeDasharray: '2 2', - fill: 'none', - d: `M 0 -10 V ` + (c.height + 20), - }) - - - var accuracyValBox = thresholdSel.append('rect.val-box') - .at({width: 55, height: 20, x: c.width/2 + 32.5, y: c.height + 65, rx: 3, ry: 3}) - - var accuracySel = thresholdSel.append('text.big-text') - .at({x: c.width/2 - 10, y: c.height + 80, textAnchor: 'middle'}) - - var accuracyValSel = thresholdSel.append('text.val-text') - .at({x: c.width/2 + 60, y: c.height + 80, textAnchor: 'middle'}) - - - var messageSel = thresholdSel.append('text.tmessage') - .at({x: c.width/2, y: c.height + 120, textAnchor: 'middle'}) - - function renderThreshold(t){ - if (isNaN(t)) return // TODO debug this - - thresholdGroupSel.translate(c.x(t), 0) - - predictionSel.at({opacity: d => isClassifiedCorrectly(d, t) ? 0 : 1}) - - var acc = d3.mean( - weatherdata, - d => isClassifiedCorrectly(d, t) - ) - accuracySel.text('Accuracy: '); - accuracyValSel.text(d3.format('.1%')(acc)) - messageSel.text('Try dragging the threshold to find the highest accuracy.') - thesholdTextSel.text('Threshold: ' + d3.format('.2f')(t)) - - threshold = t - - function isClassifiedCorrectly(d,t) { - return d.score >= t ? d.label == 1 : d.label == 0; - }; - } - - renderThreshold(threshold) - - var timer = null - function setThreshold(newThreshold, duration){ - var interpolateFn = d3.interpolate(threshold, newThreshold) - - if (timer) timer.stop() - timer = d3.timer(ms => { - var t = Math.min(ms/duration, 1) - if (t == 1) timer.stop() - - renderThreshold(interpolateFn(t)) - }) - } - - return {thresholdSel, messageSel, setThreshold} - })() - - function drawTrueLegend(c){ - var truthAxis = c.svg.append('g').translate([fig_width + 40, 1]) - truthAxis.append('text.legend-title').text('Truth') // TODO: Maybe more of a label? "what actually happened?" 
or just remove this legend - .at({textAnchor: 'middle', fontWeight: 500, x: 20}) - - truthAxis.append('g').translate([20, 40]) - .append('text.legend-text').text('Sunny').parent() - .at({fontSize: 15}) - .append('text').text(emojis[0]) - .at({fontSize: 25, x: -30, y: 5}) - - truthAxis.append('g').translate([20, 80]) - .append('text.legend-text').text('Rainy').parent() - .at({fontSize: 15}) - .append('text').text(emojis[1]) - .at({fontSize: 25, x: -30, y: 5}) - } - drawTrueLegend(c); - - - var {thresholdsGroupSel, renderThresholds, setThresholds} = (function(){ - var valsCache = [] - var drag = d3.drag() - .on('drag', function(){ - var val = d3.clamp(0, c.x.invert(d3.mouse(c.svg.node())[0]), 1) - - // Force thresholds to stay sorted - valsCache[valsCache.activeIndex] = val - _.sortBy(valsCache).forEach((val, i) => thresholds[i].val = val) - - renderThresholds() - }) - .on('start', d => { - valsCache = thresholds.map(d => d.val) - valsCache.activeIndex = d.i - }) - - var thresholdsGroupSel = c.svg.append('g') - - thresholdsGroupSel.append('text.axis-label') - .text('Calibrated Model Score') - .translate([c.width/2, c.height + 50]) - .at({textAnchor: 'middle'}) - .at({fill: '#000', fontSize: 14}) - - thresholdsSel = thresholdsGroupSel.appendMany('g.thresholds', thresholds) - .call(drag) - .st({pointerEvents: d => d.isLocked ? 'none' : ''}) - - thresholdsSel.append('g.axis').append('text') - .at({ - textAnchor: 'middle', - dy: '.33em', - y: c.height + 20 - }) - .text(d => d3.format('.2f')(d.origVal)) - - var rw = 16 - thresholdsSel.append('rect') - .at({ - width: rw, - x: -rw/2, - height: c.height + 10, - fillOpacity: d => d.isLocked ? 0 : .07, - }) - - var pathSel = thresholdsSel.append('path') - .at({ - stroke: '#000', - strokeDasharray: '2 2', - fill: 'none', - }) - - function renderThresholds(){ - if (thresholds.some(d => isNaN(d.val))) return - - thresholdsSel - .translate(d => c.x(d.val) + .5, 0) - - pathSel.at({ - d: d => [ - 'M', 0, c.height + 10, - 'L', 0, 0, - 'L', c.x(d.origVal - d.val), -12, - ].join(' ') - }) - - if (window.calibrationCurve) calibrationCurve.renderBuckets() - } - - renderThresholds() - - var timer = null - function setThresholds(newThresholds, duration){ - var interpolateFns = thresholds - .map((d, i) => d3.interpolate(d.val, newThresholds[i])) - - if (timer) timer.stop() - timer = d3.timer(ms => { - var t = Math.min(ms/duration, 1) - if (t == 1) timer.stop() - - thresholds.forEach((d, i) => d.val = interpolateFns[i](t)) - - renderThresholds() - }) - } - - return {thresholdsGroupSel, renderThresholds, setThresholds} - })() - - return {c, thresholdSel, messageSel, setThreshold, predictionSel, thresholds, thresholdsGroupSel, renderThresholds, setThresholds, weatherGroupSel}; - -} - -if (window.init) window.init() \ No newline at end of file diff --git a/spaces/mfkeles/Track-Anything/tracker/model/aggregate.py b/spaces/mfkeles/Track-Anything/tracker/model/aggregate.py deleted file mode 100644 index 7622391fb3ac9aa8b515df88cf3ea5297b367538..0000000000000000000000000000000000000000 --- a/spaces/mfkeles/Track-Anything/tracker/model/aggregate.py +++ /dev/null @@ -1,17 +0,0 @@ -import torch -import torch.nn.functional as F - - -# Soft aggregation from STM -def aggregate(prob, dim, return_logits=False): - new_prob = torch.cat([ - torch.prod(1-prob, dim=dim, keepdim=True), - prob - ], dim).clamp(1e-7, 1-1e-7) - logits = torch.log((new_prob /(1-new_prob))) - prob = F.softmax(logits, dim=dim) - - if return_logits: - return logits, prob - else: - return prob \ No newline at 
end of file diff --git a/spaces/mikeee/radiobee-aligner/docs/build/html/searchindex.js b/spaces/mikeee/radiobee-aligner/docs/build/html/searchindex.js deleted file mode 100644 index 3355b6eac667f1b4a80e183616033b6d0abb7066..0000000000000000000000000000000000000000 --- a/spaces/mikeee/radiobee-aligner/docs/build/html/searchindex.js +++ /dev/null @@ -1 +0,0 @@ -Search.setIndex({docnames:["examples","index","intro","modules","radiobee","userguide","userguide-zh"],envversion:{"sphinx.domains.c":2,"sphinx.domains.changeset":1,"sphinx.domains.citation":1,"sphinx.domains.cpp":4,"sphinx.domains.index":1,"sphinx.domains.javascript":2,"sphinx.domains.math":2,"sphinx.domains.python":3,"sphinx.domains.rst":2,"sphinx.domains.std":2,sphinx:56},filenames:["examples.rst","index.rst","intro.rst","modules.rst","radiobee.rst","userguide.rst","userguide-zh.rst"],objects:{},objnames:{},objtypes:{},terms:{"1":[5,6],"10":2,"12":[5,6],"2":[5,6],"200":[5,6],"2000":[5,6],"3":[0,2],"316287378":[5,6],"4":[5,6],"500":6,"8":[5,6],"\u4e00\u822c\u65e0\u9700\u7406\u4f1a\u8fd9\u4e9b\u53c2\u6570":6,"\u4e2d\u82f1\u975e\u7a7a\u884c\u9650\u5236\u5728":6,"\u4e3a\u4e2d\u82f1\u6587\u6df7\u5408\u6587\u672c\u53ca\u8bd5\u7740\u5206\u79bb\u4e2d\u82f1\u6587":6,"\u4e3a\u7a7a\u767d\u65f6":6,"\u4e86\u89e3\u8fd9\u4e9b\u5bf9\u9f50\u5de5\u5177":6,"\u4ee5\u5185":6,"\u4ee5\u540e\u53ef\u80fd\u4f1a\u652f\u6301":6,"\u4f18\u8d28\u5bf9":6,"\u4f7f\u7528\u8bf4\u660e":1,"\u5176\u4ed6\u8bed\u8a00\u5bf9\u7684\u5bf9\u9f50":6,"\u5219\u4f1a\u89c6":6,"\u5219\u9650\u5236\u5728":6,"\u53e6\u4e00\u65b9\u9762":6,"\u53ef\u4ee5\u53f3\u51fb\u62f7\u51fa\u56fe\u7684\u94fe\u63a5\u7528\u6d4f\u89c8\u5668\u72ec\u7acb\u8bbf\u95ee\u62f7\u51fa\u6765\u7684\u94fe\u63a5\u6216\u53f3\u51fb\u5b58\u76d8\u518d\u7528\u770b\u56fe\u7a0b\u5e8f\u6253\u5f00\u5b58\u76d8\u7684\u56fe\u6587\u4ef6":6,"\u548c":6,"\u5acc\u56fe\u592a\u5c0f\u7684\u8bdd":6,"\u5b58\u4e0b\u6709\u5173\u53c2\u6570\u67e5\u770b\u6216\u901a\u77e5\u5f00\u53d1\u8005":6,"\u5bf9\u7ea6\u97005\u5206\u949f":6,"\u5feb\u5bf9\u6a21\u5f0f\u76ee\u524d\u4ec5\u652f\u6301\u4e2d\u82f1":6,"\u662f":6,"\u6700\u5c0f":6,"\u7136\u540e\u8fdb\u884c\u5bf9\u9f50":6,"\u7684\u5b6a\u751f\u5144\u5f1f":6,"\u7684\u5efa\u8bae\u503c":6,"\u76ee\u524d\u4ec5\u652f\u6301\u7eaf\u6587\u672c\u6587\u4ef6\u4e0a\u8f7d":6,"\u7b2c\u4e8c\u6b21\u4e0a\u8f7d\u6587\u4ef6\u524d\u8bf7\u70b9\u51fb":6,"\u7b49":6,"\u7b49\u683c\u5f0f":6,"\u82f1\u4e2d":6,"\u82f1\u4e2d\u5bf9\u9f50":6,"\u8bbe\u5927\u4e9b\u5219\u4f1a\u5f97\u5230\u5c11\u4e00\u4e9b\u5bf9\u9f50\u5bf9\u56e0\u4e3a\u53ef\u80fd\u9519\u5931\u4e86\u4e00\u4e9b":6,"\u8bbe\u5927\u4e9b\u6216":6,"\u8bbe\u5c0f\u4e9b\u53ef\u4ee5\u5f97\u5230\u66f4\u591a\u7684\u5bf9\u9f50\u5bf9\u4f46\u4e5f\u4f1a\u6709\u66f4\u591a":6,"\u8bbe\u5c0f\u4e9b\u6216":6,"\u8bef\u62a5\u5bf9":6,"\u8bf7\u52a0\u5165qq\u7fa4":6,"\u8fd0\u884c\u51fa\u9519\u65f6\u53ef\u4ee5\u70b9\u51fb":6,"\u9519\u8bef\u5224\u65ad\u4e3a\u5bf9\u9f50\u7684\u5bf9":6,"do":5,"new":5,As:0,For:0,If:[2,5],On:5,The:[2,5],To:5,about:5,ad:2,address:5,aim:2,align:[0,2,5,6],align_s:[1,3],align_text:[1,3],also:5,although:2,amend_avec:[1,3],an:2,app:[1,3],applic:2,approxim:2,ar:[2,5],attempt:5,been:[0,2],befor:5,better:5,blank:5,browser:5,built:0,bumblebe:[5,6],can:5,candid:5,cannot:0,cat:2,chines:5,clear:[5,6],click:[0,5],cmat2tset:[1,3],co:0,contact:2,content:3,copi:5,csv:[5,6],current:2,de:2,develop:[2,5],dl_type:[5,6],docterm_scor:[1,3],docx:[5,6],download:0,dual:2,dualtext:2,e:2,ebook:2,educ:2,en2zh:[1,3],en2zh_token:[1,3],en:[2,5],english:5,epsilon:[5,6],esp:[5,6],etc:[2,5],exampl:[1,2,5],e
xperiment:2,fals:5,fast:2,file2text:[1,3],file:[5,6],files2df:[1,3],find:2,first:5,fix:0,flag:[5,6],format:5,full:2,further:2,g:2,gen_aset:[1,3],gen_eps_minsampl:[1,3],gen_model:[1,3],gen_pset:[1,3],gen_row_align:[1,3],go:5,good:5,gradio:[0,2],group:5,ha:[0,2],hand:5,have:[0,5],help:2,henc:0,hf:0,how:1,html:[5,6],http:0,huggingfac:0,identifi:5,idf_typ:[5,6],imag:5,implement:2,index:1,inform:5,insert_spac:[1,3],instal:1,interfac:2,interpolate_pset:[1,3],introduc:2,introduct:1,ja:2,join:5,just:0,know:5,languag:2,languang:5,larger:5,later:5,laugnag:2,learn:2,left:5,limit:[1,5],line:[0,5],lists2cmat:[1,3],loadtext:[1,3],look:5,machin:2,mai:[0,5],mani:2,md:[5,6],mdx_e2c:[1,3],method:0,mikee:0,min_sampl:[5,6],minimum:5,miss:5,mix:5,mode:2,modul:[1,3],more:5,motiv:1,need:5,non:5,norm:[5,6],normal:5,now:0,number:5,off:0,one:0,onli:2,onlin:0,open:5,other:[2,5],output:5,packag:[0,1,3],page:1,pair:[2,5],paragraph:2,particular:2,pdf:[5,6],permit:2,pip:0,pleas:5,plot_cmat:[1,3],plot_df:[1,3],posit:5,power:2,problem:0,proced:[],proceed:5,process_upload:[1,3],properli:2,provid:2,publish:0,pure:5,pypi:0,python:2,qq:5,radiobe:[0,2,5,6],requir:2,result:5,right:5,row:0,ru:2,run:0,save:5,search:1,seem:0,seg_text:[1,3],select:5,sentenc:2,separ:5,should:5,shuffle_s:[1,3],sibl:5,slow:2,smaller:5,smatrix:[1,3],someth:5,space:0,srt:[5,6],submit:[0,5],submodul:[1,3],subsequ:5,suggest:[0,5],support:[2,5],tab:5,tabl:0,taken:0,tend:5,term:2,testrun:0,text:[2,5],tf_type:[5,6],them:5,time:2,tmx:2,touch:5,track:2,translat:2,treat:5,trim_df:[1,3],troubl:0,two:2,txt:[5,6],unless:5,until:0,upload:5,us:[0,1],usag:1,valu:5,version:0,welcom:2,what:5,when:[2,5],willing:2,wrong:5,yet:0,you:[2,5],zh:[2,5],zip:0},titles:["Examples","Welcome to radiobee\u2019s documentation!","Introduction","radiobee","radiobee package","How to use","\u4f7f\u7528\u8bf4\u660e"],titleterms:{"\u4f7f\u7528\u8bf4\u660e":6,align_s:4,align_text:4,amend_avec:4,app:4,cmat2tset:4,content:[1,4],docterm_scor:4,document:1,en2zh:4,en2zh_token:4,exampl:0,file2text:4,files2df:4,gen_aset:4,gen_eps_minsampl:4,gen_model:4,gen_pset:4,gen_row_align:4,how:5,indic:1,insert_spac:4,instal:0,interpolate_pset:4,introduct:2,limit:2,lists2cmat:4,loadtext:4,mdx_e2c:4,modul:4,motiv:2,packag:4,plot_cmat:4,plot_df:4,process_upload:4,radiobe:[1,3,4],s:1,seg_text:4,shuffle_s:4,smatrix:4,submodul:4,tabl:1,trim_df:4,us:5,usag:0,welcom:1}}) \ No newline at end of file diff --git a/spaces/mipbkhn/PaddyDoctorPublic/gradio_article.md b/spaces/mipbkhn/PaddyDoctorPublic/gradio_article.md deleted file mode 100644 index b67cfb549fad24b7e41602cbd66943f98f09905d..0000000000000000000000000000000000000000 --- a/spaces/mipbkhn/PaddyDoctorPublic/gradio_article.md +++ /dev/null @@ -1,14 +0,0 @@ -[![alt text](https://raw.githubusercontent.com/paddydoc/paddydoc.github.io/main/assets/img/paddy-disease-farmer.jpg)](link_url) - -## Description -This project aims to revolutionize paddy cultivation practices prevalent in Asian countries, where vulnerabilities to diseases and pests result in substantial yield losses of up to 70%. Traditionally, expert supervision has been pivotal in managing these issues, but due to limited availability and high costs, a more innovative approach is warranted. - -At the core of the project lies the automation of the crucial process of disease identification in paddy crops using state-of-the-art computer vision techniques. This cutting-edge solution draws inspiration from successful applications in diverse domains, where computer vision has showcased remarkable potential. 
- -By leveraging a comprehensive training dataset comprising 10,407 labeled images, encompassing ten distinct classes ranging from various disease categories to normal leaves, the goal is to build a model that can seamlessly replicate the expertise of crop protection professionals. - -Notably, the project doesn't stop at imagery alone; it delves deeper by incorporating additional metadata associated with each image, such as the paddy variety and age. The final challenge is to successfully classify users' input images into nine disease categories or, alternatively, identify a healthy, normal leaf. - -## Examples - -The example images provided below are not from the training set. Users can also upload images from their local device or via the phone's camera. diff --git a/spaces/monra/freegpt-webui/client/css/dropdown.css b/spaces/monra/freegpt-webui/client/css/dropdown.css deleted file mode 100644 index 302e911e84d171c55384732f759a79ce195abca5..0000000000000000000000000000000000000000 --- a/spaces/monra/freegpt-webui/client/css/dropdown.css +++ /dev/null @@ -1,10 +0,0 @@ -.dropdown { - border: 1px solid var(--conversations); -} - -@media screen and (max-width: 990px) { - .dropdown { - padding: 4px 8px; - font-size: 0.75rem; - } -} diff --git a/spaces/mshukor/UnIVAL/fairseq/fairseq/benchmark/dummy_dataset.py b/spaces/mshukor/UnIVAL/fairseq/fairseq/benchmark/dummy_dataset.py deleted file mode 100644 index 2f051754af55966e26850e94c121e0ff439bfd28..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/fairseq/benchmark/dummy_dataset.py +++ /dev/null @@ -1,36 +0,0 @@ -import numpy as np -from fairseq.data import FairseqDataset - - -class DummyDataset(FairseqDataset): - def __init__(self, batch, num_items, item_size): - super().__init__() - self.batch = batch - self.num_items = num_items - self.item_size = item_size - - def __getitem__(self, index): - return index - - def __len__(self): - return self.num_items - - def collater(self, samples): - return self.batch - - @property - def sizes(self): - return np.array([self.item_size] * self.num_items) - - def num_tokens(self, index): - return self.item_size - - def size(self, index): - return self.item_size - - def ordered_indices(self): - return np.arange(self.num_items) - - @property - def supports_prefetch(self): - return False diff --git a/spaces/muellerzr/accelerate-presentation/Accelerate_files/libs/revealjs/plugin/zoom/zoom.esm.js b/spaces/muellerzr/accelerate-presentation/Accelerate_files/libs/revealjs/plugin/zoom/zoom.esm.js deleted file mode 100644 index c0e8d7b67a77b0cea6243f12af884ec35c5a5b2b..0000000000000000000000000000000000000000 --- a/spaces/muellerzr/accelerate-presentation/Accelerate_files/libs/revealjs/plugin/zoom/zoom.esm.js +++ /dev/null @@ -1,4 +0,0 @@ -/*! 
- * reveal.js Zoom plugin - */ -var e={id:"zoom",init:function(e){e.getRevealElement().addEventListener("mousedown",(function(n){var o=/Linux/.test(window.navigator.platform)?"ctrl":"alt",i=(e.getConfig().zoomKey?e.getConfig().zoomKey:o)+"Key",d=e.getConfig().zoomLevel?e.getConfig().zoomLevel:2;n[i]&&!e.isOverview()&&(n.preventDefault(),t.to({x:n.clientX,y:n.clientY,scale:d,pan:!1}))}))},destroy:function(){t.reset()}},t=function(){var e=1,n=0,o=0,i=-1,d=-1,l="transform"in document.body.style;function s(t,n){var o=r();if(t.width=t.width||1,t.height=t.height||1,t.x-=(window.innerWidth-t.width*n)/2,t.y-=(window.innerHeight-t.height*n)/2,l)if(1===n)document.body.style.transform="";else{var i=o.x+"px "+o.y+"px",d="translate("+-t.x+"px,"+-t.y+"px) scale("+n+")";document.body.style.transformOrigin=i,document.body.style.transform=d}else 1===n?(document.body.style.position="",document.body.style.left="",document.body.style.top="",document.body.style.width="",document.body.style.height="",document.body.style.zoom=""):(document.body.style.position="relative",document.body.style.left=-(o.x+t.x)/n+"px",document.body.style.top=-(o.y+t.y)/n+"px",document.body.style.width=100*n+"%",document.body.style.height=100*n+"%",document.body.style.zoom=n);e=n,document.documentElement.classList&&(1!==e?document.documentElement.classList.add("zoomed"):document.documentElement.classList.remove("zoomed"))}function c(){var t=.12*window.innerWidth,i=.12*window.innerHeight,d=r();owindow.innerHeight-i&&window.scroll(d.x,d.y+(1-(window.innerHeight-o)/i)*(14/e)),nwindow.innerWidth-t&&window.scroll(d.x+(1-(window.innerWidth-n)/t)*(14/e),d.y)}function r(){return{x:void 0!==window.scrollX?window.scrollX:window.pageXOffset,y:void 0!==window.scrollY?window.scrollY:window.pageYOffset}}return l&&(document.body.style.transition="transform 0.8s ease"),document.addEventListener("keyup",(function(n){1!==e&&27===n.keyCode&&t.out()})),document.addEventListener("mousemove",(function(t){1!==e&&(n=t.clientX,o=t.clientY)})),{to:function(n){if(1!==e)t.out();else{if(n.x=n.x||0,n.y=n.y||0,n.element){var o=n.element.getBoundingClientRect();n.x=o.left-20,n.y=o.top-20,n.width=o.width+40,n.height=o.height+40}void 0!==n.width&&void 0!==n.height&&(n.scale=Math.max(Math.min(window.innerWidth/n.width,window.innerHeight/n.height),1)),n.scale>1&&(n.x*=n.scale,n.y*=n.scale,s(n,n.scale),!1!==n.pan&&(i=setTimeout((function(){d=setInterval(c,1e3/60)}),800)))}},out:function(){clearTimeout(i),clearInterval(d),s({x:0,y:0},1),e=1},magnify:function(e){this.to(e)},reset:function(){this.out()},zoomLevel:function(){return e}}}();export default function(){return e} diff --git a/spaces/nasttam/Image-and-3D-Model-Creator/PIFu/lib/train_util.py b/spaces/nasttam/Image-and-3D-Model-Creator/PIFu/lib/train_util.py deleted file mode 100644 index 7d48cc7beba640703e744112aa2ec458a195a16b..0000000000000000000000000000000000000000 --- a/spaces/nasttam/Image-and-3D-Model-Creator/PIFu/lib/train_util.py +++ /dev/null @@ -1,204 +0,0 @@ -import torch -import numpy as np -from .mesh_util import * -from .sample_util import * -from .geometry import * -import cv2 -from PIL import Image -from tqdm import tqdm - -def reshape_multiview_tensors(image_tensor, calib_tensor): - # Careful here! 
Because we put single view and multiview together, - # the returned tensor.shape is 5-dim: [B, num_views, C, W, H] - # So we need to convert it back to 4-dim [B*num_views, C, W, H] - # Don't worry classifier will handle multi-view cases - image_tensor = image_tensor.view( - image_tensor.shape[0] * image_tensor.shape[1], - image_tensor.shape[2], - image_tensor.shape[3], - image_tensor.shape[4] - ) - calib_tensor = calib_tensor.view( - calib_tensor.shape[0] * calib_tensor.shape[1], - calib_tensor.shape[2], - calib_tensor.shape[3] - ) - - return image_tensor, calib_tensor - - -def reshape_sample_tensor(sample_tensor, num_views): - if num_views == 1: - return sample_tensor - # Need to repeat sample_tensor along the batch dim num_views times - sample_tensor = sample_tensor.unsqueeze(dim=1) - sample_tensor = sample_tensor.repeat(1, num_views, 1, 1) - sample_tensor = sample_tensor.view( - sample_tensor.shape[0] * sample_tensor.shape[1], - sample_tensor.shape[2], - sample_tensor.shape[3] - ) - return sample_tensor - - -def gen_mesh(opt, net, cuda, data, save_path, use_octree=True): - image_tensor = data['img'].to(device=cuda) - calib_tensor = data['calib'].to(device=cuda) - - net.filter(image_tensor) - - b_min = data['b_min'] - b_max = data['b_max'] - try: - save_img_path = save_path[:-4] + '.png' - save_img_list = [] - for v in range(image_tensor.shape[0]): - save_img = (np.transpose(image_tensor[v].detach().cpu().numpy(), (1, 2, 0)) * 0.5 + 0.5)[:, :, ::-1] * 255.0 - save_img_list.append(save_img) - save_img = np.concatenate(save_img_list, axis=1) - Image.fromarray(np.uint8(save_img[:,:,::-1])).save(save_img_path) - - verts, faces, _, _ = reconstruction( - net, cuda, calib_tensor, opt.resolution, b_min, b_max, use_octree=use_octree) - verts_tensor = torch.from_numpy(verts.T).unsqueeze(0).to(device=cuda).float() - xyz_tensor = net.projection(verts_tensor, calib_tensor[:1]) - uv = xyz_tensor[:, :2, :] - color = index(image_tensor[:1], uv).detach().cpu().numpy()[0].T - color = color * 0.5 + 0.5 - save_obj_mesh_with_color(save_path, verts, faces, color) - except Exception as e: - print(e) - print('Can not create marching cubes at this time.') - -def gen_mesh_color(opt, netG, netC, cuda, data, save_path, use_octree=True): - image_tensor = data['img'].to(device=cuda) - calib_tensor = data['calib'].to(device=cuda) - - netG.filter(image_tensor) - netC.filter(image_tensor) - netC.attach(netG.get_im_feat()) - - b_min = data['b_min'] - b_max = data['b_max'] - try: - save_img_path = save_path[:-4] + '.png' - save_img_list = [] - for v in range(image_tensor.shape[0]): - save_img = (np.transpose(image_tensor[v].detach().cpu().numpy(), (1, 2, 0)) * 0.5 + 0.5)[:, :, ::-1] * 255.0 - save_img_list.append(save_img) - save_img = np.concatenate(save_img_list, axis=1) - Image.fromarray(np.uint8(save_img[:,:,::-1])).save(save_img_path) - - verts, faces, _, _ = reconstruction( - netG, cuda, calib_tensor, opt.resolution, b_min, b_max, use_octree=use_octree) - - # Now Getting colors - verts_tensor = torch.from_numpy(verts.T).unsqueeze(0).to(device=cuda).float() - verts_tensor = reshape_sample_tensor(verts_tensor, opt.num_views) - color = np.zeros(verts.shape) - interval = 10000 - for i in range(len(color) // interval): - left = i * interval - right = i * interval + interval - if i == len(color) // interval - 1: - right = -1 - netC.query(verts_tensor[:, :, left:right], calib_tensor) - rgb = netC.get_preds()[0].detach().cpu().numpy() * 0.5 + 0.5 - color[left:right] = rgb.T - - save_obj_mesh_with_color(save_path, verts, 
faces, color) - except Exception as e: - print(e) - print('Can not create marching cubes at this time.') - -def adjust_learning_rate(optimizer, epoch, lr, schedule, gamma): - """Sets the learning rate to the initial LR decayed by schedule""" - if epoch in schedule: - lr *= gamma - for param_group in optimizer.param_groups: - param_group['lr'] = lr - return lr - - -def compute_acc(pred, gt, thresh=0.5): - ''' - return: - IOU, precision, and recall - ''' - with torch.no_grad(): - vol_pred = pred > thresh - vol_gt = gt > thresh - - union = vol_pred | vol_gt - inter = vol_pred & vol_gt - - true_pos = inter.sum().float() - - union = union.sum().float() - if union == 0: - union = 1 - vol_pred = vol_pred.sum().float() - if vol_pred == 0: - vol_pred = 1 - vol_gt = vol_gt.sum().float() - if vol_gt == 0: - vol_gt = 1 - return true_pos / union, true_pos / vol_pred, true_pos / vol_gt - - -def calc_error(opt, net, cuda, dataset, num_tests): - if num_tests > len(dataset): - num_tests = len(dataset) - with torch.no_grad(): - erorr_arr, IOU_arr, prec_arr, recall_arr = [], [], [], [] - for idx in tqdm(range(num_tests)): - data = dataset[idx * len(dataset) // num_tests] - # retrieve the data - image_tensor = data['img'].to(device=cuda) - calib_tensor = data['calib'].to(device=cuda) - sample_tensor = data['samples'].to(device=cuda).unsqueeze(0) - if opt.num_views > 1: - sample_tensor = reshape_sample_tensor(sample_tensor, opt.num_views) - label_tensor = data['labels'].to(device=cuda).unsqueeze(0) - - res, error = net.forward(image_tensor, sample_tensor, calib_tensor, labels=label_tensor) - - IOU, prec, recall = compute_acc(res, label_tensor) - - # print( - # '{0}/{1} | Error: {2:06f} IOU: {3:06f} prec: {4:06f} recall: {5:06f}' - # .format(idx, num_tests, error.item(), IOU.item(), prec.item(), recall.item())) - erorr_arr.append(error.item()) - IOU_arr.append(IOU.item()) - prec_arr.append(prec.item()) - recall_arr.append(recall.item()) - - return np.average(erorr_arr), np.average(IOU_arr), np.average(prec_arr), np.average(recall_arr) - -def calc_error_color(opt, netG, netC, cuda, dataset, num_tests): - if num_tests > len(dataset): - num_tests = len(dataset) - with torch.no_grad(): - error_color_arr = [] - - for idx in tqdm(range(num_tests)): - data = dataset[idx * len(dataset) // num_tests] - # retrieve the data - image_tensor = data['img'].to(device=cuda) - calib_tensor = data['calib'].to(device=cuda) - color_sample_tensor = data['color_samples'].to(device=cuda).unsqueeze(0) - - if opt.num_views > 1: - color_sample_tensor = reshape_sample_tensor(color_sample_tensor, opt.num_views) - - rgb_tensor = data['rgbs'].to(device=cuda).unsqueeze(0) - - netG.filter(image_tensor) - _, errorC = netC.forward(image_tensor, netG.get_im_feat(), color_sample_tensor, calib_tensor, labels=rgb_tensor) - - # print('{0}/{1} | Error inout: {2:06f} | Error color: {3:06f}' - # .format(idx, num_tests, errorG.item(), errorC.item())) - error_color_arr.append(errorC.item()) - - return np.average(error_color_arr) - diff --git a/spaces/nathanTQ/ChatDev/camel/utils.py b/spaces/nathanTQ/ChatDev/camel/utils.py deleted file mode 100644 index baad22f9559bbcba938e9f6d78e4533a5340a169..0000000000000000000000000000000000000000 --- a/spaces/nathanTQ/ChatDev/camel/utils.py +++ /dev/null @@ -1,220 +0,0 @@ -# =========== Copyright 2023 @ CAMEL-AI.org. All Rights Reserved. =========== -# Licensed under the Apache License, Version 2.0 (the “License”); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an “AS IS” BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# =========== Copyright 2023 @ CAMEL-AI.org. All Rights Reserved. =========== -import os -import re -import zipfile -from functools import wraps -from typing import Any, Callable, List, Optional, Set, TypeVar - -import requests -import tiktoken - -from camel.messages import OpenAIMessage -from camel.typing import ModelType, TaskType - -F = TypeVar('F', bound=Callable[..., Any]) - -import time - - -def count_tokens_openai_chat_models( - messages: List[OpenAIMessage], - encoding: Any, -) -> int: - r"""Counts the number of tokens required to generate an OpenAI chat based - on a given list of messages. - - Args: - messages (List[OpenAIMessage]): The list of messages. - encoding (Any): The encoding method to use. - - Returns: - int: The number of tokens required. - """ - num_tokens = 0 - for message in messages: - # message follows {role/name}\n{content}\n - num_tokens += 4 - for key, value in message.items(): - num_tokens += len(encoding.encode(value)) - if key == "name": # if there's a name, the role is omitted - num_tokens += -1 # role is always 1 token - num_tokens += 2 # every reply is primed with assistant - return num_tokens - - -def num_tokens_from_messages( - messages: List[OpenAIMessage], - model: ModelType, -) -> int: - r"""Returns the number of tokens used by a list of messages. - - Args: - messages (List[OpenAIMessage]): The list of messages to count the - number of tokens for. - model (ModelType): The OpenAI model used to encode the messages. - - Returns: - int: The total number of tokens used by the messages. - - Raises: - NotImplementedError: If the specified `model` is not implemented. - - References: - - https://github.com/openai/openai-python/blob/main/chatml.md - - https://platform.openai.com/docs/models/gpt-4 - - https://platform.openai.com/docs/models/gpt-3-5 - """ - try: - value_for_tiktoken = model.value_for_tiktoken - encoding = tiktoken.encoding_for_model(value_for_tiktoken) - except KeyError: - encoding = tiktoken.get_encoding("cl100k_base") - - if model in { - ModelType.GPT_3_5_TURBO, ModelType.GPT_4, ModelType.GPT_4_32k, - ModelType.STUB - }: - return count_tokens_openai_chat_models(messages, encoding) - else: - raise NotImplementedError( - f"`num_tokens_from_messages`` is not presently implemented " - f"for model {model}. " - f"See https://github.com/openai/openai-python/blob/main/chatml.md " - f"for information on how messages are converted to tokens. " - f"See https://platform.openai.com/docs/models/gpt-4" - f"or https://platform.openai.com/docs/models/gpt-3-5" - f"for information about openai chat models.") - - -def get_model_token_limit(model: ModelType) -> int: - r"""Returns the maximum token limit for a given model. - - Args: - model (ModelType): The type of the model. - - Returns: - int: The maximum token limit for the given model. 
- """ - if model == ModelType.GPT_3_5_TURBO: - return 16384 - elif model == ModelType.GPT_4: - return 8192 - elif model == ModelType.GPT_4_32k: - return 32768 - elif model == ModelType.STUB: - return 4096 - else: - raise ValueError("Unknown model type") - - -def openai_api_key_required(func: F) -> F: - r"""Decorator that checks if the OpenAI API key is available in the - environment variables. - - Args: - func (callable): The function to be wrapped. - - Returns: - callable: The decorated function. - - Raises: - ValueError: If the OpenAI API key is not found in the environment - variables. - """ - - @wraps(func) - def wrapper(self, *args, **kwargs): - from camel.agents.chat_agent import ChatAgent - if not isinstance(self, ChatAgent): - raise ValueError("Expected ChatAgent") - if self.model == ModelType.STUB: - return func(self, *args, **kwargs) - elif 'OPENAI_API_KEY' in os.environ: - return func(self, *args, **kwargs) - else: - raise ValueError('OpenAI API key not found.') - - return wrapper - - -def print_text_animated(text, delay: float = 0.005, end: str = ""): - r"""Prints the given text with an animated effect. - - Args: - text (str): The text to print. - delay (float, optional): The delay between each character printed. - (default: :obj:`0.02`) - end (str, optional): The end character to print after the text. - (default: :obj:`""`) - """ - for char in text: - print(char, end=end, flush=True) - time.sleep(delay) - print('\n') - - -def get_prompt_template_key_words(template: str) -> Set[str]: - r"""Given a string template containing curly braces {}, return a set of - the words inside the braces. - - Args: - template (str): A string containing curly braces. - - Returns: - List[str]: A list of the words inside the curly braces. - - Example: - >>> get_prompt_template_key_words('Hi, {name}! How are you {status}?') - {'name', 'status'} - """ - return set(re.findall(r'{([^}]*)}', template)) - - -def get_first_int(string: str) -> Optional[int]: - r"""Returns the first integer number found in the given string. - - If no integer number is found, returns None. - - Args: - string (str): The input string. - - Returns: - int or None: The first integer number found in the string, or None if - no integer number is found. - """ - match = re.search(r'\d+', string) - if match: - return int(match.group()) - else: - return None - - -def download_tasks(task: TaskType, folder_path: str) -> None: - # Define the path to save the zip file - zip_file_path = os.path.join(folder_path, "tasks.zip") - - # Download the zip file from the Google Drive link - response = requests.get("https://huggingface.co/datasets/camel-ai/" - f"metadata/resolve/main/{task.value}_tasks.zip") - - # Save the zip file - with open(zip_file_path, "wb") as f: - f.write(response.content) - - with zipfile.ZipFile(zip_file_path, "r") as zip_ref: - zip_ref.extractall(folder_path) - - # Delete the zip file - os.remove(zip_file_path) diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Chew-WGA 0.9.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Chew-WGA 0.9.md deleted file mode 100644 index 12b2d0eb79c17d4515b15762744280c85c2eff79..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Chew-WGA 0.9.md +++ /dev/null @@ -1,44 +0,0 @@ -
    -

    How to Activate Windows 7 with Chew-WGA 0.9

    -

    If you have an unlicensed version of Windows 7 and you want to activate it without buying a product key, you can use Chew-WGA 0.9, a simple and effective activator that allows you to easily activate Windows 7 for free.

    -

    Chew-WGA 0.9


    Download Zip ✶✶✶ https://urlcod.com/2uIciS



    -

    Chew-WGA 0.9 is a program that bypasses the Windows Genuine Advantage (WGA) protection system and makes your Windows 7 copy appear genuine. This way, you can enjoy all the features and benefits of a licensed Windows 7, such as downloading updates and add-ons from the official Microsoft website.

    -

    Chew-WGA 0.9 is compatible with all versions and languages of Windows 7 (both x32 and x64) and does not introduce significant changes to the boot sector or the system files. It also has a reliable mechanism for making corrections and a full uninstaller in case you want to remove it.

    -

    How to Use Chew-WGA 0.9

    -

    Using Chew-WGA 0.9 to activate Windows 7 is very easy and straightforward. Just follow these steps:

    -
    -
    1. Download Chew-WGA 0.9 from this link. The password for the archive is windows.
    2. Disable your antivirus program before running the activator, as it may detect it as a virus and block it.
    3. Run CW.EXE as Administrator and click Apply.
    4. The program will prompt you to reboot your computer. Agree and wait for the reboot.
    5. After the reboot, your Windows 7 will be activated and you will see a message saying "Windows is activated".
    6. You can now re-enable your antivirus program and enjoy your activated Windows 7.
    -

    Note: If you want to uninstall Chew-WGA 0.9, you can run CW.EXE again and click Restore.

    -

    Why Choose Chew-WGA 0.9

    -

    There are many reasons why Chew-WGA 0.9 is one of the best activators for Windows 7. Here are some of them:

    -
      -
    • It is very handy and simple to use, with no complicated settings or options.
    • It works with any version and language of Windows 7, including Home, Professional, Ultimate, etc.
    • It does not harm your computer or cause any system errors or crashes.
    • It does not require an internet connection or a product key to activate Windows 7.
    • It allows you to download free license updates and add-ons from the official Microsoft website.
    • It has a low file size and does not consume much disk space or memory.
    -

    If you are looking for a quick and easy way to activate Windows 7 for free, Chew-WGA 0.9 is the perfect solution for you. Download it now and enjoy your genuine Windows 7!

    -

    - -

    FAQs about Chew-WGA 0.9

    -

    You may have some questions or doubts about Chew-WGA 0.9 and how it works. Here are some of the most frequently asked questions and their answers:

    -

    Is Chew-WGA 0.9 safe to use?

    -

    Yes, Chew-WGA 0.9 is safe to use and does not contain any viruses or malware. However, some antivirus programs may falsely detect it as a threat and block it. To avoid this, you should disable your antivirus program before running the activator and re-enable it after the activation is completed.

    -

    Is Chew-WGA 0.9 legal to use?

    -

    No, Chew-WGA 0.9 is not legal to use and violates the terms and conditions of Microsoft. By using this activator, you are bypassing the WGA protection system and making your Windows 7 copy appear genuine without paying for a license. This is considered piracy and may result in legal consequences.

    -

    Will Chew-WGA 0.9 affect the performance of my computer?

    -

    No, Chew-WGA 0.9 will not affect the performance of your computer or cause any system errors or crashes. It only makes minor changes to the system files and the boot sector to activate Windows 7 and does not consume much disk space or memory.

    -

    Will Chew-WGA 0.9 work with future updates of Windows 7?

    -

    Yes, Chew-WGA 0.9 will work with future updates of Windows 7 and will not be detected or removed by them. However, you should be careful when installing updates that may affect the WGA protection system or the activation status of your Windows 7. You can always run Chew-WGA 0.9 again if you encounter any problems with the activation.

    -

    Can I uninstall Chew-WGA 0.9 if I want to?

    -

    Yes, you can uninstall Chew-WGA 0.9 if you want to by running CW.EXE again and clicking Restore. This will restore your Windows 7 to its original state before the activation and remove any traces of Chew-WGA 0.9 from your system.

    -
    -
    \ No newline at end of file diff --git a/spaces/nickloughren/Robot-or-Not/test.py b/spaces/nickloughren/Robot-or-Not/test.py deleted file mode 100644 index 981cb7a806b7e998ba8504a9c26dd1ba788b0c66..0000000000000000000000000000000000000000 --- a/spaces/nickloughren/Robot-or-Not/test.py +++ /dev/null @@ -1,6 +0,0 @@ -# import torch -# x = torch.rand(5, 3) -print(21) -import fastai -print(fastai.__version__) - diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/tools/visualize_data.py b/spaces/nikitaPDL2023/assignment4/detectron2/tools/visualize_data.py deleted file mode 100644 index fd0ba8347bfd34fc8fac5ffef9aee10915ad1820..0000000000000000000000000000000000000000 --- a/spaces/nikitaPDL2023/assignment4/detectron2/tools/visualize_data.py +++ /dev/null @@ -1,94 +0,0 @@ -#!/usr/bin/env python -# Copyright (c) Facebook, Inc. and its affiliates. -import argparse -import os -from itertools import chain -import cv2 -import tqdm - -from detectron2.config import get_cfg -from detectron2.data import DatasetCatalog, MetadataCatalog, build_detection_train_loader -from detectron2.data import detection_utils as utils -from detectron2.data.build import filter_images_with_few_keypoints -from detectron2.utils.logger import setup_logger -from detectron2.utils.visualizer import Visualizer - - -def setup(args): - cfg = get_cfg() - if args.config_file: - cfg.merge_from_file(args.config_file) - cfg.merge_from_list(args.opts) - cfg.DATALOADER.NUM_WORKERS = 0 - cfg.freeze() - return cfg - - -def parse_args(in_args=None): - parser = argparse.ArgumentParser(description="Visualize ground-truth data") - parser.add_argument( - "--source", - choices=["annotation", "dataloader"], - required=True, - help="visualize the annotations or the data loader (with pre-processing)", - ) - parser.add_argument("--config-file", metavar="FILE", help="path to config file") - parser.add_argument("--output-dir", default="./", help="path to output directory") - parser.add_argument("--show", action="store_true", help="show output in a window") - parser.add_argument( - "opts", - help="Modify config options using the command-line", - default=None, - nargs=argparse.REMAINDER, - ) - return parser.parse_args(in_args) - - -if __name__ == "__main__": - args = parse_args() - logger = setup_logger() - logger.info("Arguments: " + str(args)) - cfg = setup(args) - - dirname = args.output_dir - os.makedirs(dirname, exist_ok=True) - metadata = MetadataCatalog.get(cfg.DATASETS.TRAIN[0]) - - def output(vis, fname): - if args.show: - print(fname) - cv2.imshow("window", vis.get_image()[:, :, ::-1]) - cv2.waitKey() - else: - filepath = os.path.join(dirname, fname) - print("Saving to {} ...".format(filepath)) - vis.save(filepath) - - scale = 1.0 - if args.source == "dataloader": - train_data_loader = build_detection_train_loader(cfg) - for batch in train_data_loader: - for per_image in batch: - # Pytorch tensor is in (C, H, W) format - img = per_image["image"].permute(1, 2, 0).cpu().detach().numpy() - img = utils.convert_image_to_rgb(img, cfg.INPUT.FORMAT) - - visualizer = Visualizer(img, metadata=metadata, scale=scale) - target_fields = per_image["instances"].get_fields() - labels = [metadata.thing_classes[i] for i in target_fields["gt_classes"]] - vis = visualizer.overlay_instances( - labels=labels, - boxes=target_fields.get("gt_boxes", None), - masks=target_fields.get("gt_masks", None), - keypoints=target_fields.get("gt_keypoints", None), - ) - output(vis, str(per_image["image_id"]) + ".jpg") - else: - dicts = 
list(chain.from_iterable([DatasetCatalog.get(k) for k in cfg.DATASETS.TRAIN])) - if cfg.MODEL.KEYPOINT_ON: - dicts = filter_images_with_few_keypoints(dicts, 1) - for dic in tqdm.tqdm(dicts): - img = utils.read_image(dic["file_name"], "RGB") - visualizer = Visualizer(img, metadata=metadata, scale=scale) - vis = visualizer.draw_dataset_dict(dic) - output(vis, os.path.basename(dic["file_name"])) diff --git a/spaces/niro-private/chatCSV/explorer.py b/spaces/niro-private/chatCSV/explorer.py deleted file mode 100644 index fed01c682dddcbd4e74a7e8543fb1e47386b1651..0000000000000000000000000000000000000000 --- a/spaces/niro-private/chatCSV/explorer.py +++ /dev/null @@ -1,12 +0,0 @@ -import streamlit as st - - -def view_document(supabase): - # Get the document from the database - response = supabase.table("documents").select("content").execute() - st.write("**This feature is in active development**") - # Display a list of elements from the documents - # If the user clicks on an element, display the content of the document - for idx, document in enumerate(response.data): - if st.button(document['content'][:50].replace("\n", " "), key=idx): - continue diff --git a/spaces/nsarrazin/agents-js-oasst/svelte.config.js b/spaces/nsarrazin/agents-js-oasst/svelte.config.js deleted file mode 100644 index ec7d9f4b849dffa31ab8ca5a4908eed07fc82fbc..0000000000000000000000000000000000000000 --- a/spaces/nsarrazin/agents-js-oasst/svelte.config.js +++ /dev/null @@ -1,21 +0,0 @@ -import adapter from "@sveltejs/adapter-node"; -import { vitePreprocess } from '@sveltejs/kit/vite'; -import dotenv from "dotenv"; - -dotenv.config({ path: "./.env.local" }); - -/** @type {import('@sveltejs/kit').Config} */ -const config = { - // Consult https://kit.svelte.dev/docs/integrations#preprocessors - // for more information about preprocessors - preprocess: vitePreprocess(), - - kit: { - // adapter-auto only supports some environments, see https://kit.svelte.dev/docs/adapter-auto for a list. - // If your environment is not supported or you settled on a specific environment, switch out the adapter. - // See https://kit.svelte.dev/docs/adapters for more information about adapters. 
- adapter: adapter() - } -}; - -export default config; diff --git a/spaces/nus-cs5647-team-5/Mandarin_Tone_Evaluation/utils/config.py b/spaces/nus-cs5647-team-5/Mandarin_Tone_Evaluation/utils/config.py deleted file mode 100644 index 09841f329dd7d888fa4dfecc3646a7e273907cba..0000000000000000000000000000000000000000 --- a/spaces/nus-cs5647-team-5/Mandarin_Tone_Evaluation/utils/config.py +++ /dev/null @@ -1,47 +0,0 @@ -import json - -DEFAULT_CONFIG_FILENAME = 'asrt_config.json' -_config_dict = None -_pinyin_dict = None -_pinyin_list = None - -def load_config_file(filename: str) -> dict: - ''' - 加载json配置文件 - - 参数:\\ - filename: 文件名 - - 返回:\\ - 配置信息字典 - ''' - global _config_dict - if _config_dict is not None: - return _config_dict - - with open(filename,'r', encoding="utf-8") as file_pointer: - _config_dict = json.load(file_pointer) - return _config_dict - -def load_pinyin_dict(filename: str) -> tuple: - ''' - 加载拼音列表和拼音字典 - - 拼音列表:用于下标索引转拼音 \\ - 拼音字典:用于拼音索引转下标 - ''' - global _pinyin_list, _pinyin_dict - if _pinyin_dict is not None and _pinyin_list is not None: - return _pinyin_list, _pinyin_dict - - _pinyin_list = list() - _pinyin_dict = dict() - with open(filename, 'r', encoding='utf-8') as file_pointer: - lines = file_pointer.read().split('\n') - for line in lines: - if len(line) == 0: - continue - tokens = line.split('\t') - _pinyin_list.append(tokens[0]) - _pinyin_dict[tokens[0]] = len(_pinyin_list) - 1 - return _pinyin_list, _pinyin_dict diff --git a/spaces/odettecantswim/vits-models-genshin/text/cantonese.py b/spaces/odettecantswim/vits-models-genshin/text/cantonese.py deleted file mode 100644 index b66d12138b81b70b86f18217d24a08fce76305c0..0000000000000000000000000000000000000000 --- a/spaces/odettecantswim/vits-models-genshin/text/cantonese.py +++ /dev/null @@ -1,59 +0,0 @@ -import re -import cn2an -import opencc - - -converter = opencc.OpenCC('jyutjyu') - -# List of (Latin alphabet, ipa) pairs: -_latin_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('A', 'ei˥'), - ('B', 'biː˥'), - ('C', 'siː˥'), - ('D', 'tiː˥'), - ('E', 'iː˥'), - ('F', 'e˥fuː˨˩'), - ('G', 'tsiː˥'), - ('H', 'ɪk̚˥tsʰyː˨˩'), - ('I', 'ɐi˥'), - ('J', 'tsei˥'), - ('K', 'kʰei˥'), - ('L', 'e˥llou˨˩'), - ('M', 'ɛːm˥'), - ('N', 'ɛːn˥'), - ('O', 'ou˥'), - ('P', 'pʰiː˥'), - ('Q', 'kʰiːu˥'), - ('R', 'aː˥lou˨˩'), - ('S', 'ɛː˥siː˨˩'), - ('T', 'tʰiː˥'), - ('U', 'juː˥'), - ('V', 'wiː˥'), - ('W', 'tʊk̚˥piː˥juː˥'), - ('X', 'ɪk̚˥siː˨˩'), - ('Y', 'waːi˥'), - ('Z', 'iː˨sɛːt̚˥') -]] - - -def number_to_cantonese(text): - return re.sub(r'\d+(?:\.?\d+)?', lambda x: cn2an.an2cn(x.group()), text) - - -def latin_to_ipa(text): - for regex, replacement in _latin_to_ipa: - text = re.sub(regex, replacement, text) - return text - - -def cantonese_to_ipa(text): - text = number_to_cantonese(text.upper()) - text = converter.convert(text).replace('-','').replace('$',' ') - text = re.sub(r'[A-Z]', lambda x: latin_to_ipa(x.group())+' ', text) - text = re.sub(r'[、;:]', ',', text) - text = re.sub(r'\s*,\s*', ', ', text) - text = re.sub(r'\s*。\s*', '. ', text) - text = re.sub(r'\s*?\s*', '? ', text) - text = re.sub(r'\s*!\s*', '! 
', text) - text = re.sub(r'\s*$', '', text) - return text diff --git a/spaces/oscars47/Thinking_Parrot_1.1.0/app.py b/spaces/oscars47/Thinking_Parrot_1.1.0/app.py deleted file mode 100644 index a344f0e985b15ca1398f4bbcc3a5756aa927cdef..0000000000000000000000000000000000000000 --- a/spaces/oscars47/Thinking_Parrot_1.1.0/app.py +++ /dev/null @@ -1,123 +0,0 @@ -# file to run wesbite for 1.1.0 version -# using gradio for GUI -import gradio as gr -import numpy as np -import keras, sys - -# call custom scripts -from dataprepsn import TextData, TextDataText - - -# read in mastertext DataPrep obj -index = 1 # for shakespeare cleaning -maxChar = 100 # based on model -MASTER_PATH = 'master.txt' -# read in model -model = keras.models.load_model('model_1.1.0.hdf5') - -# helper function to intepret probabilities -def sample(preds, temperature=1.0): - # helper function to sample an index from a probability array - # rescale data - preds = np.asarray(preds).astype('float64') - #preds = np.log(preds) / temperature - exp_preds = np.exp(1/temperature)*preds - preds = exp_preds / np.sum(exp_preds) - # create multinomial distribution; run experiment 10 times, select most probable outcome - probas = np.random.multinomial(10, preds, 1) - return np.argmax(probas) - -def generate_text_text(input, text_len): - index =1 #for shakes cleaning - maxChar=100 - td = TextData(MASTER_PATH, index, maxChar) - alphabet = td.alphabet - int_to_char = td.int_to_char - char_to_int = td.char_to_int - - # input is the cleaned text - td_sample = TextDataText(input, index, maxChar) - input = td_sample.clean_text - - # make sure at least 40 characters for training - if len(input) < 3: - raise ValueError('Input must have >= 3 characters. You have %i.' %(maxChar, len(input))) - print('input:') - print(input) - print('-----------------') - print('output:') - # need to prepare input - if len(input) >= maxChar: - # grab last maxChar characters - sentence = input[-maxChar:] - else: - sentence = '' # initialize sentence - # compute diff - diff = maxChar - len(input) - for i in range(diff): - sentence+='£' - sentence+=input - #sentence = input - #print(sentence) - - # initalize generated string - generated = '' - # don't append input - # generated += input - - # randomly pick diversity parameter - diversities = [0.2, 0.5, 1.0, 1.2] - div_index = int(np.random.random()*(len(diversities))) - diversity = diversities[div_index] - #print('diversity:', diversity) - #sys.stdout.write(input) - - # generate text_len characters worth of test - for i in range(text_len): - # prepare chosen sentence as part of new dataset - x_pred = np.zeros((1, len(sentence), len(alphabet))) - for t, char in enumerate(sentence): - if char != '£': - x_pred[0, t, char_to_int[char]] = 1.0 - - # use the current model to predict what outputs are - preds = model.predict(x_pred, verbose=0)[0] - # call the function above to interpret the probabilities and add a degree of freedom - next_index = sample(preds, diversity) - #convert predicted number to character - next_char = int_to_char[next_index] - - # append to existing string so as to build it up - generated += next_char - # append new character to previous sentence and delete the old one in front; now we train on predictions - sentence = sentence[1:] + next_char - - # print the new character as we create it - sys.stdout.write(next_char) - sys.stdout.flush() - print() - return generated - -# call hugging space interactive interface; use Blocks - -with gr.Blocks() as think: - # have intro blurb - gr.Markdown("Hi! 
I'm Thinking Parrot 1.1.0, a text generating AI! 🦜" ) - - # have accordian blurb - with gr.Accordion("Click for more details!"): - gr.Markdown("Simply type at least 3 characters into the box labeled 'Your Input Text' below and then select the number of output characters you want (note: try lower values for a faster response). Then click 'Think'! My response will appear in the box labeled 'My Response'.") - - # setup user interface - input = [gr.Textbox(label = 'Your Input Text'), gr.Slider(minimum=10, maximum =400, label='Number of output characters', step=10)] - output = gr.Textbox(label = 'My Response') - think_btn = gr.Button('Think!') - think_btn.click(fn= generate_text_text, inputs = input, outputs = output) - -# enable queing if heavy traffic -think.queue(concurrency_count=3) -think.launch() - -#for testing -# input = input('enter text') -# generate_text_text(input, 400) \ No newline at end of file diff --git a/spaces/patgpt4/MusicGen/audiocraft/modules/__init__.py b/spaces/patgpt4/MusicGen/audiocraft/modules/__init__.py deleted file mode 100644 index 81ba30f6466ff91b90490a4fb92f7d3d0d00144d..0000000000000000000000000000000000000000 --- a/spaces/patgpt4/MusicGen/audiocraft/modules/__init__.py +++ /dev/null @@ -1,20 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -# flake8: noqa -from .conv import ( - NormConv1d, - NormConv2d, - NormConvTranspose1d, - NormConvTranspose2d, - StreamableConv1d, - StreamableConvTranspose1d, - pad_for_conv1d, - pad1d, - unpad1d, -) -from .lstm import StreamableLSTM -from .seanet import SEANetEncoder, SEANetDecoder diff --git a/spaces/pierreant-p/huggingfab/global.css b/spaces/pierreant-p/huggingfab/global.css deleted file mode 100644 index 3ed562248ccfb08a3af9b14e77e5a34cc8c0c583..0000000000000000000000000000000000000000 --- a/spaces/pierreant-p/huggingfab/global.css +++ /dev/null @@ -1,63 +0,0 @@ -html, body { - position: relative; - width: 100%; - height: 100%; - margin: 0; - padding: 0; -} - -body { - color: #333; - box-sizing: border-box; - font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, Oxygen-Sans, Ubuntu, Cantarell, "Helvetica Neue", sans-serif; -} - -a { - color: rgb(0,100,200); - text-decoration: none; -} - -a:hover { - text-decoration: underline; -} - -a:visited { - color: rgb(0,80,160); -} - -label { - display: block; -} - -input, button, select, textarea { - font-family: inherit; - font-size: inherit; - -webkit-padding: 0.4em 0; - padding: 0.4em; - margin: 0 0 0.5em 0; - box-sizing: border-box; - border: 1px solid #ccc; - border-radius: 2px; -} - -input:disabled { - color: #ccc; -} - -button { - color: #333; - background-color: #f4f4f4; - outline: none; -} - -button:disabled { - color: #999; -} - -button:not(:disabled):active { - background-color: #ddd; -} - -button:focus { - border-color: #666; -} diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/network/auth.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/network/auth.py deleted file mode 100644 index 94a82fa6618270d3a16f521a0fcf710a15a8aebc..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/network/auth.py +++ /dev/null @@ -1,561 +0,0 @@ -"""Network Authentication Helpers - -Contains interface (MultiDomainBasicAuth) and associated glue code for -providing 
credentials in the context of network requests. -""" -import logging -import os -import shutil -import subprocess -import sysconfig -import typing -import urllib.parse -from abc import ABC, abstractmethod -from functools import lru_cache -from os.path import commonprefix -from pathlib import Path -from typing import Any, Dict, List, NamedTuple, Optional, Tuple - -from pip._vendor.requests.auth import AuthBase, HTTPBasicAuth -from pip._vendor.requests.models import Request, Response -from pip._vendor.requests.utils import get_netrc_auth - -from pip._internal.utils.logging import getLogger -from pip._internal.utils.misc import ( - ask, - ask_input, - ask_password, - remove_auth_from_url, - split_auth_netloc_from_url, -) -from pip._internal.vcs.versioncontrol import AuthInfo - -logger = getLogger(__name__) - -KEYRING_DISABLED = False - - -class Credentials(NamedTuple): - url: str - username: str - password: str - - -class KeyRingBaseProvider(ABC): - """Keyring base provider interface""" - - has_keyring: bool - - @abstractmethod - def get_auth_info(self, url: str, username: Optional[str]) -> Optional[AuthInfo]: - ... - - @abstractmethod - def save_auth_info(self, url: str, username: str, password: str) -> None: - ... - - -class KeyRingNullProvider(KeyRingBaseProvider): - """Keyring null provider""" - - has_keyring = False - - def get_auth_info(self, url: str, username: Optional[str]) -> Optional[AuthInfo]: - return None - - def save_auth_info(self, url: str, username: str, password: str) -> None: - return None - - -class KeyRingPythonProvider(KeyRingBaseProvider): - """Keyring interface which uses locally imported `keyring`""" - - has_keyring = True - - def __init__(self) -> None: - import keyring - - self.keyring = keyring - - def get_auth_info(self, url: str, username: Optional[str]) -> Optional[AuthInfo]: - # Support keyring's get_credential interface which supports getting - # credentials without a username. This is only available for - # keyring>=15.2.0. - if hasattr(self.keyring, "get_credential"): - logger.debug("Getting credentials from keyring for %s", url) - cred = self.keyring.get_credential(url, username) - if cred is not None: - return cred.username, cred.password - return None - - if username is not None: - logger.debug("Getting password from keyring for %s", url) - password = self.keyring.get_password(url, username) - if password: - return username, password - return None - - def save_auth_info(self, url: str, username: str, password: str) -> None: - self.keyring.set_password(url, username, password) - - -class KeyRingCliProvider(KeyRingBaseProvider): - """Provider which uses `keyring` cli - - Instead of calling the keyring package installed alongside pip - we call keyring on the command line which will enable pip to - use which ever installation of keyring is available first in - PATH. 
- """ - - has_keyring = True - - def __init__(self, cmd: str) -> None: - self.keyring = cmd - - def get_auth_info(self, url: str, username: Optional[str]) -> Optional[AuthInfo]: - # This is the default implementation of keyring.get_credential - # https://github.com/jaraco/keyring/blob/97689324abcf01bd1793d49063e7ca01e03d7d07/keyring/backend.py#L134-L139 - if username is not None: - password = self._get_password(url, username) - if password is not None: - return username, password - return None - - def save_auth_info(self, url: str, username: str, password: str) -> None: - return self._set_password(url, username, password) - - def _get_password(self, service_name: str, username: str) -> Optional[str]: - """Mirror the implementation of keyring.get_password using cli""" - if self.keyring is None: - return None - - cmd = [self.keyring, "get", service_name, username] - env = os.environ.copy() - env["PYTHONIOENCODING"] = "utf-8" - res = subprocess.run( - cmd, - stdin=subprocess.DEVNULL, - stdout=subprocess.PIPE, - env=env, - ) - if res.returncode: - return None - return res.stdout.decode("utf-8").strip(os.linesep) - - def _set_password(self, service_name: str, username: str, password: str) -> None: - """Mirror the implementation of keyring.set_password using cli""" - if self.keyring is None: - return None - env = os.environ.copy() - env["PYTHONIOENCODING"] = "utf-8" - subprocess.run( - [self.keyring, "set", service_name, username], - input=f"{password}{os.linesep}".encode("utf-8"), - env=env, - check=True, - ) - return None - - -@lru_cache(maxsize=None) -def get_keyring_provider(provider: str) -> KeyRingBaseProvider: - logger.verbose("Keyring provider requested: %s", provider) - - # keyring has previously failed and been disabled - if KEYRING_DISABLED: - provider = "disabled" - if provider in ["import", "auto"]: - try: - impl = KeyRingPythonProvider() - logger.verbose("Keyring provider set: import") - return impl - except ImportError: - pass - except Exception as exc: - # In the event of an unexpected exception - # we should warn the user - msg = "Installed copy of keyring fails with exception %s" - if provider == "auto": - msg = msg + ", trying to find a keyring executable as a fallback" - logger.warning(msg, exc, exc_info=logger.isEnabledFor(logging.DEBUG)) - if provider in ["subprocess", "auto"]: - cli = shutil.which("keyring") - if cli and cli.startswith(sysconfig.get_path("scripts")): - # all code within this function is stolen from shutil.which implementation - @typing.no_type_check - def PATH_as_shutil_which_determines_it() -> str: - path = os.environ.get("PATH", None) - if path is None: - try: - path = os.confstr("CS_PATH") - except (AttributeError, ValueError): - # os.confstr() or CS_PATH is not available - path = os.defpath - # bpo-35755: Don't use os.defpath if the PATH environment variable is - # set to an empty string - - return path - - scripts = Path(sysconfig.get_path("scripts")) - - paths = [] - for path in PATH_as_shutil_which_determines_it().split(os.pathsep): - p = Path(path) - try: - if not p.samefile(scripts): - paths.append(path) - except FileNotFoundError: - pass - - path = os.pathsep.join(paths) - - cli = shutil.which("keyring", path=path) - - if cli: - logger.verbose("Keyring provider set: subprocess with executable %s", cli) - return KeyRingCliProvider(cli) - - logger.verbose("Keyring provider set: disabled") - return KeyRingNullProvider() - - -class MultiDomainBasicAuth(AuthBase): - def __init__( - self, - prompting: bool = True, - index_urls: Optional[List[str]] = 
None, - keyring_provider: str = "auto", - ) -> None: - self.prompting = prompting - self.index_urls = index_urls - self.keyring_provider = keyring_provider # type: ignore[assignment] - self.passwords: Dict[str, AuthInfo] = {} - # When the user is prompted to enter credentials and keyring is - # available, we will offer to save them. If the user accepts, - # this value is set to the credentials they entered. After the - # request authenticates, the caller should call - # ``save_credentials`` to save these. - self._credentials_to_save: Optional[Credentials] = None - - @property - def keyring_provider(self) -> KeyRingBaseProvider: - return get_keyring_provider(self._keyring_provider) - - @keyring_provider.setter - def keyring_provider(self, provider: str) -> None: - # The free function get_keyring_provider has been decorated with - # functools.cache. If an exception occurs in get_keyring_auth that - # cache will be cleared and keyring disabled, take that into account - # if you want to remove this indirection. - self._keyring_provider = provider - - @property - def use_keyring(self) -> bool: - # We won't use keyring when --no-input is passed unless - # a specific provider is requested because it might require - # user interaction - return self.prompting or self._keyring_provider not in ["auto", "disabled"] - - def _get_keyring_auth( - self, - url: Optional[str], - username: Optional[str], - ) -> Optional[AuthInfo]: - """Return the tuple auth for a given url from keyring.""" - # Do nothing if no url was provided - if not url: - return None - - try: - return self.keyring_provider.get_auth_info(url, username) - except Exception as exc: - logger.warning( - "Keyring is skipped due to an exception: %s", - str(exc), - ) - global KEYRING_DISABLED - KEYRING_DISABLED = True - get_keyring_provider.cache_clear() - return None - - def _get_index_url(self, url: str) -> Optional[str]: - """Return the original index URL matching the requested URL. - - Cached or dynamically generated credentials may work against - the original index URL rather than just the netloc. - - The provided url should have had its username and password - removed already. If the original index url had credentials then - they will be included in the return value. - - Returns None if no matching index was found, or if --no-index - was specified by the user. - """ - if not url or not self.index_urls: - return None - - url = remove_auth_from_url(url).rstrip("/") + "/" - parsed_url = urllib.parse.urlsplit(url) - - candidates = [] - - for index in self.index_urls: - index = index.rstrip("/") + "/" - parsed_index = urllib.parse.urlsplit(remove_auth_from_url(index)) - if parsed_url == parsed_index: - return index - - if parsed_url.netloc != parsed_index.netloc: - continue - - candidate = urllib.parse.urlsplit(index) - candidates.append(candidate) - - if not candidates: - return None - - candidates.sort( - reverse=True, - key=lambda candidate: commonprefix( - [ - parsed_url.path, - candidate.path, - ] - ).rfind("/"), - ) - - return urllib.parse.urlunsplit(candidates[0]) - - def _get_new_credentials( - self, - original_url: str, - *, - allow_netrc: bool = True, - allow_keyring: bool = False, - ) -> AuthInfo: - """Find and return credentials for the specified URL.""" - # Split the credentials and netloc from the url. 
- url, netloc, url_user_password = split_auth_netloc_from_url( - original_url, - ) - - # Start with the credentials embedded in the url - username, password = url_user_password - if username is not None and password is not None: - logger.debug("Found credentials in url for %s", netloc) - return url_user_password - - # Find a matching index url for this request - index_url = self._get_index_url(url) - if index_url: - # Split the credentials from the url. - index_info = split_auth_netloc_from_url(index_url) - if index_info: - index_url, _, index_url_user_password = index_info - logger.debug("Found index url %s", index_url) - - # If an index URL was found, try its embedded credentials - if index_url and index_url_user_password[0] is not None: - username, password = index_url_user_password - if username is not None and password is not None: - logger.debug("Found credentials in index url for %s", netloc) - return index_url_user_password - - # Get creds from netrc if we still don't have them - if allow_netrc: - netrc_auth = get_netrc_auth(original_url) - if netrc_auth: - logger.debug("Found credentials in netrc for %s", netloc) - return netrc_auth - - # If we don't have a password and keyring is available, use it. - if allow_keyring: - # The index url is more specific than the netloc, so try it first - # fmt: off - kr_auth = ( - self._get_keyring_auth(index_url, username) or - self._get_keyring_auth(netloc, username) - ) - # fmt: on - if kr_auth: - logger.debug("Found credentials in keyring for %s", netloc) - return kr_auth - - return username, password - - def _get_url_and_credentials( - self, original_url: str - ) -> Tuple[str, Optional[str], Optional[str]]: - """Return the credentials to use for the provided URL. - - If allowed, netrc and keyring may be used to obtain the - correct credentials. - - Returns (url_without_credentials, username, password). Note - that even if the original URL contains credentials, this - function may return a different username and password. - """ - url, netloc, _ = split_auth_netloc_from_url(original_url) - - # Try to get credentials from original url - username, password = self._get_new_credentials(original_url) - - # If credentials not found, use any stored credentials for this netloc. - # Do this if either the username or the password is missing. - # This accounts for the situation in which the user has specified - # the username in the index url, but the password comes from keyring. - if (username is None or password is None) and netloc in self.passwords: - un, pw = self.passwords[netloc] - # It is possible that the cached credentials are for a different username, - # in which case the cache should be ignored. - if username is None or username == un: - username, password = un, pw - - if username is not None or password is not None: - # Convert the username and password if they're None, so that - # this netloc will show up as "cached" in the conditional above. - # Further, HTTPBasicAuth doesn't accept None, so it makes sense to - # cache the value that is going to be used. - username = username or "" - password = password or "" - - # Store any acquired credentials. 
- self.passwords[netloc] = (username, password) - - assert ( - # Credentials were found - (username is not None and password is not None) - # Credentials were not found - or (username is None and password is None) - ), f"Could not load credentials from url: {original_url}" - - return url, username, password - - def __call__(self, req: Request) -> Request: - # Get credentials for this request - url, username, password = self._get_url_and_credentials(req.url) - - # Set the url of the request to the url without any credentials - req.url = url - - if username is not None and password is not None: - # Send the basic auth with this request - req = HTTPBasicAuth(username, password)(req) - - # Attach a hook to handle 401 responses - req.register_hook("response", self.handle_401) - - return req - - # Factored out to allow for easy patching in tests - def _prompt_for_password( - self, netloc: str - ) -> Tuple[Optional[str], Optional[str], bool]: - username = ask_input(f"User for {netloc}: ") if self.prompting else None - if not username: - return None, None, False - if self.use_keyring: - auth = self._get_keyring_auth(netloc, username) - if auth and auth[0] is not None and auth[1] is not None: - return auth[0], auth[1], False - password = ask_password("Password: ") - return username, password, True - - # Factored out to allow for easy patching in tests - def _should_save_password_to_keyring(self) -> bool: - if ( - not self.prompting - or not self.use_keyring - or not self.keyring_provider.has_keyring - ): - return False - return ask("Save credentials to keyring [y/N]: ", ["y", "n"]) == "y" - - def handle_401(self, resp: Response, **kwargs: Any) -> Response: - # We only care about 401 responses, anything else we want to just - # pass through the actual response - if resp.status_code != 401: - return resp - - username, password = None, None - - # Query the keyring for credentials: - if self.use_keyring: - username, password = self._get_new_credentials( - resp.url, - allow_netrc=False, - allow_keyring=True, - ) - - # We are not able to prompt the user so simply return the response - if not self.prompting and not username and not password: - return resp - - parsed = urllib.parse.urlparse(resp.url) - - # Prompt the user for a new username and password - save = False - if not username and not password: - username, password, save = self._prompt_for_password(parsed.netloc) - - # Store the new username and password to use for future requests - self._credentials_to_save = None - if username is not None and password is not None: - self.passwords[parsed.netloc] = (username, password) - - # Prompt to save the password to keyring - if save and self._should_save_password_to_keyring(): - self._credentials_to_save = Credentials( - url=parsed.netloc, - username=username, - password=password, - ) - - # Consume content and release the original connection to allow our new - # request to reuse the same one. - # The result of the assignment isn't used, it's just needed to consume - # the content. - _ = resp.content - resp.raw.release_conn() - - # Add our new username and password to the request - req = HTTPBasicAuth(username or "", password or "")(resp.request) - req.register_hook("response", self.warn_on_401) - - # On successful request, save the credentials that were used to - # keyring. (Note that if the user responded "no" above, this member - # is not set and nothing will be saved.) 
- if self._credentials_to_save: - req.register_hook("response", self.save_credentials) - - # Send our new request - new_resp = resp.connection.send(req, **kwargs) - new_resp.history.append(resp) - - return new_resp - - def warn_on_401(self, resp: Response, **kwargs: Any) -> None: - """Response callback to warn about incorrect credentials.""" - if resp.status_code == 401: - logger.warning( - "401 Error, Credentials not correct for %s", - resp.request.url, - ) - - def save_credentials(self, resp: Response, **kwargs: Any) -> None: - """Response callback to save credentials on success.""" - assert ( - self.keyring_provider.has_keyring - ), "should never reach here without keyring" - - creds = self._credentials_to_save - self._credentials_to_save = None - if creds and resp.status_code < 400: - try: - logger.info("Saving credentials to keyring") - self.keyring_provider.save_auth_info( - creds.url, creds.username, creds.password - ) - except Exception: - logger.exception("Failed to save credentials") diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/colorama/tests/ansi_test.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/colorama/tests/ansi_test.py deleted file mode 100644 index 0a20c80f882066e0e1323b0c7f61e22913c32e35..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/colorama/tests/ansi_test.py +++ /dev/null @@ -1,76 +0,0 @@ -# Copyright Jonathan Hartley 2013. BSD 3-Clause license, see LICENSE file. -import sys -from unittest import TestCase, main - -from ..ansi import Back, Fore, Style -from ..ansitowin32 import AnsiToWin32 - -stdout_orig = sys.stdout -stderr_orig = sys.stderr - - -class AnsiTest(TestCase): - - def setUp(self): - # sanity check: stdout should be a file or StringIO object. - # It will only be AnsiToWin32 if init() has previously wrapped it - self.assertNotEqual(type(sys.stdout), AnsiToWin32) - self.assertNotEqual(type(sys.stderr), AnsiToWin32) - - def tearDown(self): - sys.stdout = stdout_orig - sys.stderr = stderr_orig - - - def testForeAttributes(self): - self.assertEqual(Fore.BLACK, '\033[30m') - self.assertEqual(Fore.RED, '\033[31m') - self.assertEqual(Fore.GREEN, '\033[32m') - self.assertEqual(Fore.YELLOW, '\033[33m') - self.assertEqual(Fore.BLUE, '\033[34m') - self.assertEqual(Fore.MAGENTA, '\033[35m') - self.assertEqual(Fore.CYAN, '\033[36m') - self.assertEqual(Fore.WHITE, '\033[37m') - self.assertEqual(Fore.RESET, '\033[39m') - - # Check the light, extended versions. - self.assertEqual(Fore.LIGHTBLACK_EX, '\033[90m') - self.assertEqual(Fore.LIGHTRED_EX, '\033[91m') - self.assertEqual(Fore.LIGHTGREEN_EX, '\033[92m') - self.assertEqual(Fore.LIGHTYELLOW_EX, '\033[93m') - self.assertEqual(Fore.LIGHTBLUE_EX, '\033[94m') - self.assertEqual(Fore.LIGHTMAGENTA_EX, '\033[95m') - self.assertEqual(Fore.LIGHTCYAN_EX, '\033[96m') - self.assertEqual(Fore.LIGHTWHITE_EX, '\033[97m') - - - def testBackAttributes(self): - self.assertEqual(Back.BLACK, '\033[40m') - self.assertEqual(Back.RED, '\033[41m') - self.assertEqual(Back.GREEN, '\033[42m') - self.assertEqual(Back.YELLOW, '\033[43m') - self.assertEqual(Back.BLUE, '\033[44m') - self.assertEqual(Back.MAGENTA, '\033[45m') - self.assertEqual(Back.CYAN, '\033[46m') - self.assertEqual(Back.WHITE, '\033[47m') - self.assertEqual(Back.RESET, '\033[49m') - - # Check the light, extended versions. 
- self.assertEqual(Back.LIGHTBLACK_EX, '\033[100m') - self.assertEqual(Back.LIGHTRED_EX, '\033[101m') - self.assertEqual(Back.LIGHTGREEN_EX, '\033[102m') - self.assertEqual(Back.LIGHTYELLOW_EX, '\033[103m') - self.assertEqual(Back.LIGHTBLUE_EX, '\033[104m') - self.assertEqual(Back.LIGHTMAGENTA_EX, '\033[105m') - self.assertEqual(Back.LIGHTCYAN_EX, '\033[106m') - self.assertEqual(Back.LIGHTWHITE_EX, '\033[107m') - - - def testStyleAttributes(self): - self.assertEqual(Style.DIM, '\033[2m') - self.assertEqual(Style.NORMAL, '\033[22m') - self.assertEqual(Style.BRIGHT, '\033[1m') - - -if __name__ == '__main__': - main() diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pygments/unistring.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pygments/unistring.py deleted file mode 100644 index 39f6baeedfb8ec129e0076cc3eb94dd5bef92ed0..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pygments/unistring.py +++ /dev/null @@ -1,153 +0,0 @@ -""" - pygments.unistring - ~~~~~~~~~~~~~~~~~~ - - Strings of all Unicode characters of a certain category. - Used for matching in Unicode-aware languages. Run to regenerate. - - Inspired by chartypes_create.py from the MoinMoin project. - - :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -Cc = '\x00-\x1f\x7f-\x9f' - -Cf = '\xad\u0600-\u0605\u061c\u06dd\u070f\u08e2\u180e\u200b-\u200f\u202a-\u202e\u2060-\u2064\u2066-\u206f\ufeff\ufff9-\ufffb\U000110bd\U000110cd\U0001bca0-\U0001bca3\U0001d173-\U0001d17a\U000e0001\U000e0020-\U000e007f' - -Cn = '\u0378-\u0379\u0380-\u0383\u038b\u038d\u03a2\u0530\u0557-\u0558\u058b-\u058c\u0590\u05c8-\u05cf\u05eb-\u05ee\u05f5-\u05ff\u061d\u070e\u074b-\u074c\u07b2-\u07bf\u07fb-\u07fc\u082e-\u082f\u083f\u085c-\u085d\u085f\u086b-\u089f\u08b5\u08be-\u08d2\u0984\u098d-\u098e\u0991-\u0992\u09a9\u09b1\u09b3-\u09b5\u09ba-\u09bb\u09c5-\u09c6\u09c9-\u09ca\u09cf-\u09d6\u09d8-\u09db\u09de\u09e4-\u09e5\u09ff-\u0a00\u0a04\u0a0b-\u0a0e\u0a11-\u0a12\u0a29\u0a31\u0a34\u0a37\u0a3a-\u0a3b\u0a3d\u0a43-\u0a46\u0a49-\u0a4a\u0a4e-\u0a50\u0a52-\u0a58\u0a5d\u0a5f-\u0a65\u0a77-\u0a80\u0a84\u0a8e\u0a92\u0aa9\u0ab1\u0ab4\u0aba-\u0abb\u0ac6\u0aca\u0ace-\u0acf\u0ad1-\u0adf\u0ae4-\u0ae5\u0af2-\u0af8\u0b00\u0b04\u0b0d-\u0b0e\u0b11-\u0b12\u0b29\u0b31\u0b34\u0b3a-\u0b3b\u0b45-\u0b46\u0b49-\u0b4a\u0b4e-\u0b55\u0b58-\u0b5b\u0b5e\u0b64-\u0b65\u0b78-\u0b81\u0b84\u0b8b-\u0b8d\u0b91\u0b96-\u0b98\u0b9b\u0b9d\u0ba0-\u0ba2\u0ba5-\u0ba7\u0bab-\u0bad\u0bba-\u0bbd\u0bc3-\u0bc5\u0bc9\u0bce-\u0bcf\u0bd1-\u0bd6\u0bd8-\u0be5\u0bfb-\u0bff\u0c0d\u0c11\u0c29\u0c3a-\u0c3c\u0c45\u0c49\u0c4e-\u0c54\u0c57\u0c5b-\u0c5f\u0c64-\u0c65\u0c70-\u0c77\u0c8d\u0c91\u0ca9\u0cb4\u0cba-\u0cbb\u0cc5\u0cc9\u0cce-\u0cd4\u0cd7-\u0cdd\u0cdf\u0ce4-\u0ce5\u0cf0\u0cf3-\u0cff\u0d04\u0d0d\u0d11\u0d45\u0d49\u0d50-\u0d53\u0d64-\u0d65\u0d80-\u0d81\u0d84\u0d97-\u0d99\u0db2\u0dbc\u0dbe-\u0dbf\u0dc7-\u0dc9\u0dcb-\u0dce\u0dd5\u0dd7\u0de0-\u0de5\u0df0-\u0df1\u0df5-\u0e00\u0e3b-\u0e3e\u0e5c-\u0e80\u0e83\u0e85-\u0e86\u0e89\u0e8b-\u0e8c\u0e8e-\u0e93\u0e98\u0ea0\u0ea4\u0ea6\u0ea8-\u0ea9\u0eac\u0eba\u0ebe-\u0ebf\u0ec5\u0ec7\u0ece-\u0ecf\u0eda-\u0edb\u0ee0-\u0eff\u0f48\u0f6d-\u0f70\u0f98\u0fbd\u0fcd\u0fdb-\u0fff\u10c6\u10c8-\u10cc\u10ce-\u10cf\u1249\u124e-\u124f\u1257\u1259\u125e-\u125f\u1289\u128e-\u128f\u12b1\u12b6-\u12b7\u12bf\u12c1\u12c6-\u12c7\u12d7\u1311\u1316-\u1317\u135b-\u135c\u137d-\u137f\u139a-\u139f\u13f6-\u13f7\u13
fe-\u13ff\u169d-\u169f\u16f9-\u16ff\u170d\u1715-\u171f\u1737-\u173f\u1754-\u175f\u176d\u1771\u1774-\u177f\u17de-\u17df\u17ea-\u17ef\u17fa-\u17ff\u180f\u181a-\u181f\u1879-\u187f\u18ab-\u18af\u18f6-\u18ff\u191f\u192c-\u192f\u193c-\u193f\u1941-\u1943\u196e-\u196f\u1975-\u197f\u19ac-\u19af\u19ca-\u19cf\u19db-\u19dd\u1a1c-\u1a1d\u1a5f\u1a7d-\u1a7e\u1a8a-\u1a8f\u1a9a-\u1a9f\u1aae-\u1aaf\u1abf-\u1aff\u1b4c-\u1b4f\u1b7d-\u1b7f\u1bf4-\u1bfb\u1c38-\u1c3a\u1c4a-\u1c4c\u1c89-\u1c8f\u1cbb-\u1cbc\u1cc8-\u1ccf\u1cfa-\u1cff\u1dfa\u1f16-\u1f17\u1f1e-\u1f1f\u1f46-\u1f47\u1f4e-\u1f4f\u1f58\u1f5a\u1f5c\u1f5e\u1f7e-\u1f7f\u1fb5\u1fc5\u1fd4-\u1fd5\u1fdc\u1ff0-\u1ff1\u1ff5\u1fff\u2065\u2072-\u2073\u208f\u209d-\u209f\u20c0-\u20cf\u20f1-\u20ff\u218c-\u218f\u2427-\u243f\u244b-\u245f\u2b74-\u2b75\u2b96-\u2b97\u2bc9\u2bff\u2c2f\u2c5f\u2cf4-\u2cf8\u2d26\u2d28-\u2d2c\u2d2e-\u2d2f\u2d68-\u2d6e\u2d71-\u2d7e\u2d97-\u2d9f\u2da7\u2daf\u2db7\u2dbf\u2dc7\u2dcf\u2dd7\u2ddf\u2e4f-\u2e7f\u2e9a\u2ef4-\u2eff\u2fd6-\u2fef\u2ffc-\u2fff\u3040\u3097-\u3098\u3100-\u3104\u3130\u318f\u31bb-\u31bf\u31e4-\u31ef\u321f\u32ff\u4db6-\u4dbf\u9ff0-\u9fff\ua48d-\ua48f\ua4c7-\ua4cf\ua62c-\ua63f\ua6f8-\ua6ff\ua7ba-\ua7f6\ua82c-\ua82f\ua83a-\ua83f\ua878-\ua87f\ua8c6-\ua8cd\ua8da-\ua8df\ua954-\ua95e\ua97d-\ua97f\ua9ce\ua9da-\ua9dd\ua9ff\uaa37-\uaa3f\uaa4e-\uaa4f\uaa5a-\uaa5b\uaac3-\uaada\uaaf7-\uab00\uab07-\uab08\uab0f-\uab10\uab17-\uab1f\uab27\uab2f\uab66-\uab6f\uabee-\uabef\uabfa-\uabff\ud7a4-\ud7af\ud7c7-\ud7ca\ud7fc-\ud7ff\ufa6e-\ufa6f\ufada-\ufaff\ufb07-\ufb12\ufb18-\ufb1c\ufb37\ufb3d\ufb3f\ufb42\ufb45\ufbc2-\ufbd2\ufd40-\ufd4f\ufd90-\ufd91\ufdc8-\ufdef\ufdfe-\ufdff\ufe1a-\ufe1f\ufe53\ufe67\ufe6c-\ufe6f\ufe75\ufefd-\ufefe\uff00\uffbf-\uffc1\uffc8-\uffc9\uffd0-\uffd1\uffd8-\uffd9\uffdd-\uffdf\uffe7\uffef-\ufff8\ufffe-\uffff\U0001000c\U00010027\U0001003b\U0001003e\U0001004e-\U0001004f\U0001005e-\U0001007f\U000100fb-\U000100ff\U00010103-\U00010106\U00010134-\U00010136\U0001018f\U0001019c-\U0001019f\U000101a1-\U000101cf\U000101fe-\U0001027f\U0001029d-\U0001029f\U000102d1-\U000102df\U000102fc-\U000102ff\U00010324-\U0001032c\U0001034b-\U0001034f\U0001037b-\U0001037f\U0001039e\U000103c4-\U000103c7\U000103d6-\U000103ff\U0001049e-\U0001049f\U000104aa-\U000104af\U000104d4-\U000104d7\U000104fc-\U000104ff\U00010528-\U0001052f\U00010564-\U0001056e\U00010570-\U000105ff\U00010737-\U0001073f\U00010756-\U0001075f\U00010768-\U000107ff\U00010806-\U00010807\U00010809\U00010836\U00010839-\U0001083b\U0001083d-\U0001083e\U00010856\U0001089f-\U000108a6\U000108b0-\U000108df\U000108f3\U000108f6-\U000108fa\U0001091c-\U0001091e\U0001093a-\U0001093e\U00010940-\U0001097f\U000109b8-\U000109bb\U000109d0-\U000109d1\U00010a04\U00010a07-\U00010a0b\U00010a14\U00010a18\U00010a36-\U00010a37\U00010a3b-\U00010a3e\U00010a49-\U00010a4f\U00010a59-\U00010a5f\U00010aa0-\U00010abf\U00010ae7-\U00010aea\U00010af7-\U00010aff\U00010b36-\U00010b38\U00010b56-\U00010b57\U00010b73-\U00010b77\U00010b92-\U00010b98\U00010b9d-\U00010ba8\U00010bb0-\U00010bff\U00010c49-\U00010c7f\U00010cb3-\U00010cbf\U00010cf3-\U00010cf9\U00010d28-\U00010d2f\U00010d3a-\U00010e5f\U00010e7f-\U00010eff\U00010f28-\U00010f2f\U00010f5a-\U00010fff\U0001104e-\U00011051\U00011070-\U0001107e\U000110c2-\U000110cc\U000110ce-\U000110cf\U000110e9-\U000110ef\U000110fa-\U000110ff\U00011135\U00011147-\U0001114f\U00011177-\U0001117f\U000111ce-\U000111cf\U000111e0\U000111f5-\U000111ff\U00011212\U0001123f-\U0001127f\U00011287\U00011289\U0001128e\U0001129e\U000112aa-\U000112af\U000112eb-\U000112ef\U000112fa-\U000112ff\U00011304\U0001130d-\U00
01130e\U00011311-\U00011312\U00011329\U00011331\U00011334\U0001133a\U00011345-\U00011346\U00011349-\U0001134a\U0001134e-\U0001134f\U00011351-\U00011356\U00011358-\U0001135c\U00011364-\U00011365\U0001136d-\U0001136f\U00011375-\U000113ff\U0001145a\U0001145c\U0001145f-\U0001147f\U000114c8-\U000114cf\U000114da-\U0001157f\U000115b6-\U000115b7\U000115de-\U000115ff\U00011645-\U0001164f\U0001165a-\U0001165f\U0001166d-\U0001167f\U000116b8-\U000116bf\U000116ca-\U000116ff\U0001171b-\U0001171c\U0001172c-\U0001172f\U00011740-\U000117ff\U0001183c-\U0001189f\U000118f3-\U000118fe\U00011900-\U000119ff\U00011a48-\U00011a4f\U00011a84-\U00011a85\U00011aa3-\U00011abf\U00011af9-\U00011bff\U00011c09\U00011c37\U00011c46-\U00011c4f\U00011c6d-\U00011c6f\U00011c90-\U00011c91\U00011ca8\U00011cb7-\U00011cff\U00011d07\U00011d0a\U00011d37-\U00011d39\U00011d3b\U00011d3e\U00011d48-\U00011d4f\U00011d5a-\U00011d5f\U00011d66\U00011d69\U00011d8f\U00011d92\U00011d99-\U00011d9f\U00011daa-\U00011edf\U00011ef9-\U00011fff\U0001239a-\U000123ff\U0001246f\U00012475-\U0001247f\U00012544-\U00012fff\U0001342f-\U000143ff\U00014647-\U000167ff\U00016a39-\U00016a3f\U00016a5f\U00016a6a-\U00016a6d\U00016a70-\U00016acf\U00016aee-\U00016aef\U00016af6-\U00016aff\U00016b46-\U00016b4f\U00016b5a\U00016b62\U00016b78-\U00016b7c\U00016b90-\U00016e3f\U00016e9b-\U00016eff\U00016f45-\U00016f4f\U00016f7f-\U00016f8e\U00016fa0-\U00016fdf\U00016fe2-\U00016fff\U000187f2-\U000187ff\U00018af3-\U0001afff\U0001b11f-\U0001b16f\U0001b2fc-\U0001bbff\U0001bc6b-\U0001bc6f\U0001bc7d-\U0001bc7f\U0001bc89-\U0001bc8f\U0001bc9a-\U0001bc9b\U0001bca4-\U0001cfff\U0001d0f6-\U0001d0ff\U0001d127-\U0001d128\U0001d1e9-\U0001d1ff\U0001d246-\U0001d2df\U0001d2f4-\U0001d2ff\U0001d357-\U0001d35f\U0001d379-\U0001d3ff\U0001d455\U0001d49d\U0001d4a0-\U0001d4a1\U0001d4a3-\U0001d4a4\U0001d4a7-\U0001d4a8\U0001d4ad\U0001d4ba\U0001d4bc\U0001d4c4\U0001d506\U0001d50b-\U0001d50c\U0001d515\U0001d51d\U0001d53a\U0001d53f\U0001d545\U0001d547-\U0001d549\U0001d551\U0001d6a6-\U0001d6a7\U0001d7cc-\U0001d7cd\U0001da8c-\U0001da9a\U0001daa0\U0001dab0-\U0001dfff\U0001e007\U0001e019-\U0001e01a\U0001e022\U0001e025\U0001e02b-\U0001e7ff\U0001e8c5-\U0001e8c6\U0001e8d7-\U0001e8ff\U0001e94b-\U0001e94f\U0001e95a-\U0001e95d\U0001e960-\U0001ec70\U0001ecb5-\U0001edff\U0001ee04\U0001ee20\U0001ee23\U0001ee25-\U0001ee26\U0001ee28\U0001ee33\U0001ee38\U0001ee3a\U0001ee3c-\U0001ee41\U0001ee43-\U0001ee46\U0001ee48\U0001ee4a\U0001ee4c\U0001ee50\U0001ee53\U0001ee55-\U0001ee56\U0001ee58\U0001ee5a\U0001ee5c\U0001ee5e\U0001ee60\U0001ee63\U0001ee65-\U0001ee66\U0001ee6b\U0001ee73\U0001ee78\U0001ee7d\U0001ee7f\U0001ee8a\U0001ee9c-\U0001eea0\U0001eea4\U0001eeaa\U0001eebc-\U0001eeef\U0001eef2-\U0001efff\U0001f02c-\U0001f02f\U0001f094-\U0001f09f\U0001f0af-\U0001f0b0\U0001f0c0\U0001f0d0\U0001f0f6-\U0001f0ff\U0001f10d-\U0001f10f\U0001f16c-\U0001f16f\U0001f1ad-\U0001f1e5\U0001f203-\U0001f20f\U0001f23c-\U0001f23f\U0001f249-\U0001f24f\U0001f252-\U0001f25f\U0001f266-\U0001f2ff\U0001f6d5-\U0001f6df\U0001f6ed-\U0001f6ef\U0001f6fa-\U0001f6ff\U0001f774-\U0001f77f\U0001f7d9-\U0001f7ff\U0001f80c-\U0001f80f\U0001f848-\U0001f84f\U0001f85a-\U0001f85f\U0001f888-\U0001f88f\U0001f8ae-\U0001f8ff\U0001f90c-\U0001f90f\U0001f93f\U0001f971-\U0001f972\U0001f977-\U0001f979\U0001f97b\U0001f9a3-\U0001f9af\U0001f9ba-\U0001f9bf\U0001f9c3-\U0001f9cf\U0001fa00-\U0001fa5f\U0001fa6e-\U0001ffff\U0002a6d7-\U0002a6ff\U0002b735-\U0002b73f\U0002b81e-\U0002b81f\U0002cea2-\U0002ceaf\U0002ebe1-\U0002f7ff\U0002fa1e-\U000e0000\U000e0002-\U000e001f\U000e0080-\U000e00ff\U000e01f0-\U
000effff\U000ffffe-\U000fffff\U0010fffe-\U0010ffff' - -Co = '\ue000-\uf8ff\U000f0000-\U000ffffd\U00100000-\U0010fffd' - -Cs = '\ud800-\udbff\\\udc00\udc01-\udfff' - -Ll = 'a-z\xb5\xdf-\xf6\xf8-\xff\u0101\u0103\u0105\u0107\u0109\u010b\u010d\u010f\u0111\u0113\u0115\u0117\u0119\u011b\u011d\u011f\u0121\u0123\u0125\u0127\u0129\u012b\u012d\u012f\u0131\u0133\u0135\u0137-\u0138\u013a\u013c\u013e\u0140\u0142\u0144\u0146\u0148-\u0149\u014b\u014d\u014f\u0151\u0153\u0155\u0157\u0159\u015b\u015d\u015f\u0161\u0163\u0165\u0167\u0169\u016b\u016d\u016f\u0171\u0173\u0175\u0177\u017a\u017c\u017e-\u0180\u0183\u0185\u0188\u018c-\u018d\u0192\u0195\u0199-\u019b\u019e\u01a1\u01a3\u01a5\u01a8\u01aa-\u01ab\u01ad\u01b0\u01b4\u01b6\u01b9-\u01ba\u01bd-\u01bf\u01c6\u01c9\u01cc\u01ce\u01d0\u01d2\u01d4\u01d6\u01d8\u01da\u01dc-\u01dd\u01df\u01e1\u01e3\u01e5\u01e7\u01e9\u01eb\u01ed\u01ef-\u01f0\u01f3\u01f5\u01f9\u01fb\u01fd\u01ff\u0201\u0203\u0205\u0207\u0209\u020b\u020d\u020f\u0211\u0213\u0215\u0217\u0219\u021b\u021d\u021f\u0221\u0223\u0225\u0227\u0229\u022b\u022d\u022f\u0231\u0233-\u0239\u023c\u023f-\u0240\u0242\u0247\u0249\u024b\u024d\u024f-\u0293\u0295-\u02af\u0371\u0373\u0377\u037b-\u037d\u0390\u03ac-\u03ce\u03d0-\u03d1\u03d5-\u03d7\u03d9\u03db\u03dd\u03df\u03e1\u03e3\u03e5\u03e7\u03e9\u03eb\u03ed\u03ef-\u03f3\u03f5\u03f8\u03fb-\u03fc\u0430-\u045f\u0461\u0463\u0465\u0467\u0469\u046b\u046d\u046f\u0471\u0473\u0475\u0477\u0479\u047b\u047d\u047f\u0481\u048b\u048d\u048f\u0491\u0493\u0495\u0497\u0499\u049b\u049d\u049f\u04a1\u04a3\u04a5\u04a7\u04a9\u04ab\u04ad\u04af\u04b1\u04b3\u04b5\u04b7\u04b9\u04bb\u04bd\u04bf\u04c2\u04c4\u04c6\u04c8\u04ca\u04cc\u04ce-\u04cf\u04d1\u04d3\u04d5\u04d7\u04d9\u04db\u04dd\u04df\u04e1\u04e3\u04e5\u04e7\u04e9\u04eb\u04ed\u04ef\u04f1\u04f3\u04f5\u04f7\u04f9\u04fb\u04fd\u04ff\u0501\u0503\u0505\u0507\u0509\u050b\u050d\u050f\u0511\u0513\u0515\u0517\u0519\u051b\u051d\u051f\u0521\u0523\u0525\u0527\u0529\u052b\u052d\u052f\u0560-\u0588\u10d0-\u10fa\u10fd-\u10ff\u13f8-\u13fd\u1c80-\u1c88\u1d00-\u1d2b\u1d6b-\u1d77\u1d79-\u1d9a\u1e01\u1e03\u1e05\u1e07\u1e09\u1e0b\u1e0d\u1e0f\u1e11\u1e13\u1e15\u1e17\u1e19\u1e1b\u1e1d\u1e1f\u1e21\u1e23\u1e25\u1e27\u1e29\u1e2b\u1e2d\u1e2f\u1e31\u1e33\u1e35\u1e37\u1e39\u1e3b\u1e3d\u1e3f\u1e41\u1e43\u1e45\u1e47\u1e49\u1e4b\u1e4d\u1e4f\u1e51\u1e53\u1e55\u1e57\u1e59\u1e5b\u1e5d\u1e5f\u1e61\u1e63\u1e65\u1e67\u1e69\u1e6b\u1e6d\u1e6f\u1e71\u1e73\u1e75\u1e77\u1e79\u1e7b\u1e7d\u1e7f\u1e81\u1e83\u1e85\u1e87\u1e89\u1e8b\u1e8d\u1e8f\u1e91\u1e93\u1e95-\u1e9d\u1e9f\u1ea1\u1ea3\u1ea5\u1ea7\u1ea9\u1eab\u1ead\u1eaf\u1eb1\u1eb3\u1eb5\u1eb7\u1eb9\u1ebb\u1ebd\u1ebf\u1ec1\u1ec3\u1ec5\u1ec7\u1ec9\u1ecb\u1ecd\u1ecf\u1ed1\u1ed3\u1ed5\u1ed7\u1ed9\u1edb\u1edd\u1edf\u1ee1\u1ee3\u1ee5\u1ee7\u1ee9\u1eeb\u1eed\u1eef\u1ef1\u1ef3\u1ef5\u1ef7\u1ef9\u1efb\u1efd\u1eff-\u1f07\u1f10-\u1f15\u1f20-\u1f27\u1f30-\u1f37\u1f40-\u1f45\u1f50-\u1f57\u1f60-\u1f67\u1f70-\u1f7d\u1f80-\u1f87\u1f90-\u1f97\u1fa0-\u1fa7\u1fb0-\u1fb4\u1fb6-\u1fb7\u1fbe\u1fc2-\u1fc4\u1fc6-\u1fc7\u1fd0-\u1fd3\u1fd6-\u1fd7\u1fe0-\u1fe7\u1ff2-\u1ff4\u1ff6-\u1ff7\u210a\u210e-\u210f\u2113\u212f\u2134\u2139\u213c-\u213d\u2146-\u2149\u214e\u2184\u2c30-\u2c5e\u2c61\u2c65-\u2c66\u2c68\u2c6a\u2c6c\u2c71\u2c73-\u2c74\u2c76-\u2c7b\u2c81\u2c83\u2c85\u2c87\u2c89\u2c8b\u2c8d\u2c8f\u2c91\u2c93\u2c95\u2c97\u2c99\u2c9b\u2c9d\u2c9f\u2ca1\u2ca3\u2ca5\u2ca7\u2ca9\u2cab\u2cad\u2caf\u2cb1\u2cb3\u2cb5\u2cb7\u2cb9\u2cbb\u2cbd\u2cbf\u2cc1\u2cc3\u2cc5\u2cc7\u2cc9\u2ccb\u2ccd\u2ccf\u2cd1\u2cd3\u2cd5\u2cd7\u2cd9\u2cdb\u2cdd\u2cdf\u2ce1\u2ce3-\u2ce4\u2cec\u2cee\u2cf3\u2d00-\u2d
25\u2d27\u2d2d\ua641\ua643\ua645\ua647\ua649\ua64b\ua64d\ua64f\ua651\ua653\ua655\ua657\ua659\ua65b\ua65d\ua65f\ua661\ua663\ua665\ua667\ua669\ua66b\ua66d\ua681\ua683\ua685\ua687\ua689\ua68b\ua68d\ua68f\ua691\ua693\ua695\ua697\ua699\ua69b\ua723\ua725\ua727\ua729\ua72b\ua72d\ua72f-\ua731\ua733\ua735\ua737\ua739\ua73b\ua73d\ua73f\ua741\ua743\ua745\ua747\ua749\ua74b\ua74d\ua74f\ua751\ua753\ua755\ua757\ua759\ua75b\ua75d\ua75f\ua761\ua763\ua765\ua767\ua769\ua76b\ua76d\ua76f\ua771-\ua778\ua77a\ua77c\ua77f\ua781\ua783\ua785\ua787\ua78c\ua78e\ua791\ua793-\ua795\ua797\ua799\ua79b\ua79d\ua79f\ua7a1\ua7a3\ua7a5\ua7a7\ua7a9\ua7af\ua7b5\ua7b7\ua7b9\ua7fa\uab30-\uab5a\uab60-\uab65\uab70-\uabbf\ufb00-\ufb06\ufb13-\ufb17\uff41-\uff5a\U00010428-\U0001044f\U000104d8-\U000104fb\U00010cc0-\U00010cf2\U000118c0-\U000118df\U00016e60-\U00016e7f\U0001d41a-\U0001d433\U0001d44e-\U0001d454\U0001d456-\U0001d467\U0001d482-\U0001d49b\U0001d4b6-\U0001d4b9\U0001d4bb\U0001d4bd-\U0001d4c3\U0001d4c5-\U0001d4cf\U0001d4ea-\U0001d503\U0001d51e-\U0001d537\U0001d552-\U0001d56b\U0001d586-\U0001d59f\U0001d5ba-\U0001d5d3\U0001d5ee-\U0001d607\U0001d622-\U0001d63b\U0001d656-\U0001d66f\U0001d68a-\U0001d6a5\U0001d6c2-\U0001d6da\U0001d6dc-\U0001d6e1\U0001d6fc-\U0001d714\U0001d716-\U0001d71b\U0001d736-\U0001d74e\U0001d750-\U0001d755\U0001d770-\U0001d788\U0001d78a-\U0001d78f\U0001d7aa-\U0001d7c2\U0001d7c4-\U0001d7c9\U0001d7cb\U0001e922-\U0001e943' - -Lm = '\u02b0-\u02c1\u02c6-\u02d1\u02e0-\u02e4\u02ec\u02ee\u0374\u037a\u0559\u0640\u06e5-\u06e6\u07f4-\u07f5\u07fa\u081a\u0824\u0828\u0971\u0e46\u0ec6\u10fc\u17d7\u1843\u1aa7\u1c78-\u1c7d\u1d2c-\u1d6a\u1d78\u1d9b-\u1dbf\u2071\u207f\u2090-\u209c\u2c7c-\u2c7d\u2d6f\u2e2f\u3005\u3031-\u3035\u303b\u309d-\u309e\u30fc-\u30fe\ua015\ua4f8-\ua4fd\ua60c\ua67f\ua69c-\ua69d\ua717-\ua71f\ua770\ua788\ua7f8-\ua7f9\ua9cf\ua9e6\uaa70\uaadd\uaaf3-\uaaf4\uab5c-\uab5f\uff70\uff9e-\uff9f\U00016b40-\U00016b43\U00016f93-\U00016f9f\U00016fe0-\U00016fe1' - -Lo = 
'\xaa\xba\u01bb\u01c0-\u01c3\u0294\u05d0-\u05ea\u05ef-\u05f2\u0620-\u063f\u0641-\u064a\u066e-\u066f\u0671-\u06d3\u06d5\u06ee-\u06ef\u06fa-\u06fc\u06ff\u0710\u0712-\u072f\u074d-\u07a5\u07b1\u07ca-\u07ea\u0800-\u0815\u0840-\u0858\u0860-\u086a\u08a0-\u08b4\u08b6-\u08bd\u0904-\u0939\u093d\u0950\u0958-\u0961\u0972-\u0980\u0985-\u098c\u098f-\u0990\u0993-\u09a8\u09aa-\u09b0\u09b2\u09b6-\u09b9\u09bd\u09ce\u09dc-\u09dd\u09df-\u09e1\u09f0-\u09f1\u09fc\u0a05-\u0a0a\u0a0f-\u0a10\u0a13-\u0a28\u0a2a-\u0a30\u0a32-\u0a33\u0a35-\u0a36\u0a38-\u0a39\u0a59-\u0a5c\u0a5e\u0a72-\u0a74\u0a85-\u0a8d\u0a8f-\u0a91\u0a93-\u0aa8\u0aaa-\u0ab0\u0ab2-\u0ab3\u0ab5-\u0ab9\u0abd\u0ad0\u0ae0-\u0ae1\u0af9\u0b05-\u0b0c\u0b0f-\u0b10\u0b13-\u0b28\u0b2a-\u0b30\u0b32-\u0b33\u0b35-\u0b39\u0b3d\u0b5c-\u0b5d\u0b5f-\u0b61\u0b71\u0b83\u0b85-\u0b8a\u0b8e-\u0b90\u0b92-\u0b95\u0b99-\u0b9a\u0b9c\u0b9e-\u0b9f\u0ba3-\u0ba4\u0ba8-\u0baa\u0bae-\u0bb9\u0bd0\u0c05-\u0c0c\u0c0e-\u0c10\u0c12-\u0c28\u0c2a-\u0c39\u0c3d\u0c58-\u0c5a\u0c60-\u0c61\u0c80\u0c85-\u0c8c\u0c8e-\u0c90\u0c92-\u0ca8\u0caa-\u0cb3\u0cb5-\u0cb9\u0cbd\u0cde\u0ce0-\u0ce1\u0cf1-\u0cf2\u0d05-\u0d0c\u0d0e-\u0d10\u0d12-\u0d3a\u0d3d\u0d4e\u0d54-\u0d56\u0d5f-\u0d61\u0d7a-\u0d7f\u0d85-\u0d96\u0d9a-\u0db1\u0db3-\u0dbb\u0dbd\u0dc0-\u0dc6\u0e01-\u0e30\u0e32-\u0e33\u0e40-\u0e45\u0e81-\u0e82\u0e84\u0e87-\u0e88\u0e8a\u0e8d\u0e94-\u0e97\u0e99-\u0e9f\u0ea1-\u0ea3\u0ea5\u0ea7\u0eaa-\u0eab\u0ead-\u0eb0\u0eb2-\u0eb3\u0ebd\u0ec0-\u0ec4\u0edc-\u0edf\u0f00\u0f40-\u0f47\u0f49-\u0f6c\u0f88-\u0f8c\u1000-\u102a\u103f\u1050-\u1055\u105a-\u105d\u1061\u1065-\u1066\u106e-\u1070\u1075-\u1081\u108e\u1100-\u1248\u124a-\u124d\u1250-\u1256\u1258\u125a-\u125d\u1260-\u1288\u128a-\u128d\u1290-\u12b0\u12b2-\u12b5\u12b8-\u12be\u12c0\u12c2-\u12c5\u12c8-\u12d6\u12d8-\u1310\u1312-\u1315\u1318-\u135a\u1380-\u138f\u1401-\u166c\u166f-\u167f\u1681-\u169a\u16a0-\u16ea\u16f1-\u16f8\u1700-\u170c\u170e-\u1711\u1720-\u1731\u1740-\u1751\u1760-\u176c\u176e-\u1770\u1780-\u17b3\u17dc\u1820-\u1842\u1844-\u1878\u1880-\u1884\u1887-\u18a8\u18aa\u18b0-\u18f5\u1900-\u191e\u1950-\u196d\u1970-\u1974\u1980-\u19ab\u19b0-\u19c9\u1a00-\u1a16\u1a20-\u1a54\u1b05-\u1b33\u1b45-\u1b4b\u1b83-\u1ba0\u1bae-\u1baf\u1bba-\u1be5\u1c00-\u1c23\u1c4d-\u1c4f\u1c5a-\u1c77\u1ce9-\u1cec\u1cee-\u1cf1\u1cf5-\u1cf6\u2135-\u2138\u2d30-\u2d67\u2d80-\u2d96\u2da0-\u2da6\u2da8-\u2dae\u2db0-\u2db6\u2db8-\u2dbe\u2dc0-\u2dc6\u2dc8-\u2dce\u2dd0-\u2dd6\u2dd8-\u2dde\u3006\u303c\u3041-\u3096\u309f\u30a1-\u30fa\u30ff\u3105-\u312f\u3131-\u318e\u31a0-\u31ba\u31f0-\u31ff\u3400-\u4db5\u4e00-\u9fef\ua000-\ua014\ua016-\ua48c\ua4d0-\ua4f7\ua500-\ua60b\ua610-\ua61f\ua62a-\ua62b\ua66e\ua6a0-\ua6e5\ua78f\ua7f7\ua7fb-\ua801\ua803-\ua805\ua807-\ua80a\ua80c-\ua822\ua840-\ua873\ua882-\ua8b3\ua8f2-\ua8f7\ua8fb\ua8fd-\ua8fe\ua90a-\ua925\ua930-\ua946\ua960-\ua97c\ua984-\ua9b2\ua9e0-\ua9e4\ua9e7-\ua9ef\ua9fa-\ua9fe\uaa00-\uaa28\uaa40-\uaa42\uaa44-\uaa4b\uaa60-\uaa6f\uaa71-\uaa76\uaa7a\uaa7e-\uaaaf\uaab1\uaab5-\uaab6\uaab9-\uaabd\uaac0\uaac2\uaadb-\uaadc\uaae0-\uaaea\uaaf2\uab01-\uab06\uab09-\uab0e\uab11-\uab16\uab20-\uab26\uab28-\uab2e\uabc0-\uabe2\uac00-\ud7a3\ud7b0-\ud7c6\ud7cb-\ud7fb\uf900-\ufa6d\ufa70-\ufad9\ufb1d\ufb1f-\ufb28\ufb2a-\ufb36\ufb38-\ufb3c\ufb3e\ufb40-\ufb41\ufb43-\ufb44\ufb46-\ufbb1\ufbd3-\ufd3d\ufd50-\ufd8f\ufd92-\ufdc7\ufdf0-\ufdfb\ufe70-\ufe74\ufe76-\ufefc\uff66-\uff6f\uff71-\uff9d\uffa0-\uffbe\uffc2-\uffc7\uffca-\uffcf\uffd2-\uffd7\uffda-\uffdc\U00010000-\U0001000b\U0001000d-\U00010026\U00010028-\U0001003a\U0001003c-\U0001003d\U0001003f-\U0001004d\U00010050-\U0001005d\U00010080
-\U000100fa\U00010280-\U0001029c\U000102a0-\U000102d0\U00010300-\U0001031f\U0001032d-\U00010340\U00010342-\U00010349\U00010350-\U00010375\U00010380-\U0001039d\U000103a0-\U000103c3\U000103c8-\U000103cf\U00010450-\U0001049d\U00010500-\U00010527\U00010530-\U00010563\U00010600-\U00010736\U00010740-\U00010755\U00010760-\U00010767\U00010800-\U00010805\U00010808\U0001080a-\U00010835\U00010837-\U00010838\U0001083c\U0001083f-\U00010855\U00010860-\U00010876\U00010880-\U0001089e\U000108e0-\U000108f2\U000108f4-\U000108f5\U00010900-\U00010915\U00010920-\U00010939\U00010980-\U000109b7\U000109be-\U000109bf\U00010a00\U00010a10-\U00010a13\U00010a15-\U00010a17\U00010a19-\U00010a35\U00010a60-\U00010a7c\U00010a80-\U00010a9c\U00010ac0-\U00010ac7\U00010ac9-\U00010ae4\U00010b00-\U00010b35\U00010b40-\U00010b55\U00010b60-\U00010b72\U00010b80-\U00010b91\U00010c00-\U00010c48\U00010d00-\U00010d23\U00010f00-\U00010f1c\U00010f27\U00010f30-\U00010f45\U00011003-\U00011037\U00011083-\U000110af\U000110d0-\U000110e8\U00011103-\U00011126\U00011144\U00011150-\U00011172\U00011176\U00011183-\U000111b2\U000111c1-\U000111c4\U000111da\U000111dc\U00011200-\U00011211\U00011213-\U0001122b\U00011280-\U00011286\U00011288\U0001128a-\U0001128d\U0001128f-\U0001129d\U0001129f-\U000112a8\U000112b0-\U000112de\U00011305-\U0001130c\U0001130f-\U00011310\U00011313-\U00011328\U0001132a-\U00011330\U00011332-\U00011333\U00011335-\U00011339\U0001133d\U00011350\U0001135d-\U00011361\U00011400-\U00011434\U00011447-\U0001144a\U00011480-\U000114af\U000114c4-\U000114c5\U000114c7\U00011580-\U000115ae\U000115d8-\U000115db\U00011600-\U0001162f\U00011644\U00011680-\U000116aa\U00011700-\U0001171a\U00011800-\U0001182b\U000118ff\U00011a00\U00011a0b-\U00011a32\U00011a3a\U00011a50\U00011a5c-\U00011a83\U00011a86-\U00011a89\U00011a9d\U00011ac0-\U00011af8\U00011c00-\U00011c08\U00011c0a-\U00011c2e\U00011c40\U00011c72-\U00011c8f\U00011d00-\U00011d06\U00011d08-\U00011d09\U00011d0b-\U00011d30\U00011d46\U00011d60-\U00011d65\U00011d67-\U00011d68\U00011d6a-\U00011d89\U00011d98\U00011ee0-\U00011ef2\U00012000-\U00012399\U00012480-\U00012543\U00013000-\U0001342e\U00014400-\U00014646\U00016800-\U00016a38\U00016a40-\U00016a5e\U00016ad0-\U00016aed\U00016b00-\U00016b2f\U00016b63-\U00016b77\U00016b7d-\U00016b8f\U00016f00-\U00016f44\U00016f50\U00017000-\U000187f1\U00018800-\U00018af2\U0001b000-\U0001b11e\U0001b170-\U0001b2fb\U0001bc00-\U0001bc6a\U0001bc70-\U0001bc7c\U0001bc80-\U0001bc88\U0001bc90-\U0001bc99\U0001e800-\U0001e8c4\U0001ee00-\U0001ee03\U0001ee05-\U0001ee1f\U0001ee21-\U0001ee22\U0001ee24\U0001ee27\U0001ee29-\U0001ee32\U0001ee34-\U0001ee37\U0001ee39\U0001ee3b\U0001ee42\U0001ee47\U0001ee49\U0001ee4b\U0001ee4d-\U0001ee4f\U0001ee51-\U0001ee52\U0001ee54\U0001ee57\U0001ee59\U0001ee5b\U0001ee5d\U0001ee5f\U0001ee61-\U0001ee62\U0001ee64\U0001ee67-\U0001ee6a\U0001ee6c-\U0001ee72\U0001ee74-\U0001ee77\U0001ee79-\U0001ee7c\U0001ee7e\U0001ee80-\U0001ee89\U0001ee8b-\U0001ee9b\U0001eea1-\U0001eea3\U0001eea5-\U0001eea9\U0001eeab-\U0001eebb\U00020000-\U0002a6d6\U0002a700-\U0002b734\U0002b740-\U0002b81d\U0002b820-\U0002cea1\U0002ceb0-\U0002ebe0\U0002f800-\U0002fa1d' - -Lt = '\u01c5\u01c8\u01cb\u01f2\u1f88-\u1f8f\u1f98-\u1f9f\u1fa8-\u1faf\u1fbc\u1fcc\u1ffc' - -Lu = 
'A-Z\xc0-\xd6\xd8-\xde\u0100\u0102\u0104\u0106\u0108\u010a\u010c\u010e\u0110\u0112\u0114\u0116\u0118\u011a\u011c\u011e\u0120\u0122\u0124\u0126\u0128\u012a\u012c\u012e\u0130\u0132\u0134\u0136\u0139\u013b\u013d\u013f\u0141\u0143\u0145\u0147\u014a\u014c\u014e\u0150\u0152\u0154\u0156\u0158\u015a\u015c\u015e\u0160\u0162\u0164\u0166\u0168\u016a\u016c\u016e\u0170\u0172\u0174\u0176\u0178-\u0179\u017b\u017d\u0181-\u0182\u0184\u0186-\u0187\u0189-\u018b\u018e-\u0191\u0193-\u0194\u0196-\u0198\u019c-\u019d\u019f-\u01a0\u01a2\u01a4\u01a6-\u01a7\u01a9\u01ac\u01ae-\u01af\u01b1-\u01b3\u01b5\u01b7-\u01b8\u01bc\u01c4\u01c7\u01ca\u01cd\u01cf\u01d1\u01d3\u01d5\u01d7\u01d9\u01db\u01de\u01e0\u01e2\u01e4\u01e6\u01e8\u01ea\u01ec\u01ee\u01f1\u01f4\u01f6-\u01f8\u01fa\u01fc\u01fe\u0200\u0202\u0204\u0206\u0208\u020a\u020c\u020e\u0210\u0212\u0214\u0216\u0218\u021a\u021c\u021e\u0220\u0222\u0224\u0226\u0228\u022a\u022c\u022e\u0230\u0232\u023a-\u023b\u023d-\u023e\u0241\u0243-\u0246\u0248\u024a\u024c\u024e\u0370\u0372\u0376\u037f\u0386\u0388-\u038a\u038c\u038e-\u038f\u0391-\u03a1\u03a3-\u03ab\u03cf\u03d2-\u03d4\u03d8\u03da\u03dc\u03de\u03e0\u03e2\u03e4\u03e6\u03e8\u03ea\u03ec\u03ee\u03f4\u03f7\u03f9-\u03fa\u03fd-\u042f\u0460\u0462\u0464\u0466\u0468\u046a\u046c\u046e\u0470\u0472\u0474\u0476\u0478\u047a\u047c\u047e\u0480\u048a\u048c\u048e\u0490\u0492\u0494\u0496\u0498\u049a\u049c\u049e\u04a0\u04a2\u04a4\u04a6\u04a8\u04aa\u04ac\u04ae\u04b0\u04b2\u04b4\u04b6\u04b8\u04ba\u04bc\u04be\u04c0-\u04c1\u04c3\u04c5\u04c7\u04c9\u04cb\u04cd\u04d0\u04d2\u04d4\u04d6\u04d8\u04da\u04dc\u04de\u04e0\u04e2\u04e4\u04e6\u04e8\u04ea\u04ec\u04ee\u04f0\u04f2\u04f4\u04f6\u04f8\u04fa\u04fc\u04fe\u0500\u0502\u0504\u0506\u0508\u050a\u050c\u050e\u0510\u0512\u0514\u0516\u0518\u051a\u051c\u051e\u0520\u0522\u0524\u0526\u0528\u052a\u052c\u052e\u0531-\u0556\u10a0-\u10c5\u10c7\u10cd\u13a0-\u13f5\u1c90-\u1cba\u1cbd-\u1cbf\u1e00\u1e02\u1e04\u1e06\u1e08\u1e0a\u1e0c\u1e0e\u1e10\u1e12\u1e14\u1e16\u1e18\u1e1a\u1e1c\u1e1e\u1e20\u1e22\u1e24\u1e26\u1e28\u1e2a\u1e2c\u1e2e\u1e30\u1e32\u1e34\u1e36\u1e38\u1e3a\u1e3c\u1e3e\u1e40\u1e42\u1e44\u1e46\u1e48\u1e4a\u1e4c\u1e4e\u1e50\u1e52\u1e54\u1e56\u1e58\u1e5a\u1e5c\u1e5e\u1e60\u1e62\u1e64\u1e66\u1e68\u1e6a\u1e6c\u1e6e\u1e70\u1e72\u1e74\u1e76\u1e78\u1e7a\u1e7c\u1e7e\u1e80\u1e82\u1e84\u1e86\u1e88\u1e8a\u1e8c\u1e8e\u1e90\u1e92\u1e94\u1e9e\u1ea0\u1ea2\u1ea4\u1ea6\u1ea8\u1eaa\u1eac\u1eae\u1eb0\u1eb2\u1eb4\u1eb6\u1eb8\u1eba\u1ebc\u1ebe\u1ec0\u1ec2\u1ec4\u1ec6\u1ec8\u1eca\u1ecc\u1ece\u1ed0\u1ed2\u1ed4\u1ed6\u1ed8\u1eda\u1edc\u1ede\u1ee0\u1ee2\u1ee4\u1ee6\u1ee8\u1eea\u1eec\u1eee\u1ef0\u1ef2\u1ef4\u1ef6\u1ef8\u1efa\u1efc\u1efe\u1f08-\u1f0f\u1f18-\u1f1d\u1f28-\u1f2f\u1f38-\u1f3f\u1f48-\u1f4d\u1f59\u1f5b\u1f5d\u1f5f\u1f68-\u1f6f\u1fb8-\u1fbb\u1fc8-\u1fcb\u1fd8-\u1fdb\u1fe8-\u1fec\u1ff8-\u1ffb\u2102\u2107\u210b-\u210d\u2110-\u2112\u2115\u2119-\u211d\u2124\u2126\u2128\u212a-\u212d\u2130-\u2133\u213e-\u213f\u2145\u2183\u2c00-\u2c2e\u2c60\u2c62-\u2c64\u2c67\u2c69\u2c6b\u2c6d-\u2c70\u2c72\u2c75\u2c7e-\u2c80\u2c82\u2c84\u2c86\u2c88\u2c8a\u2c8c\u2c8e\u2c90\u2c92\u2c94\u2c96\u2c98\u2c9a\u2c9c\u2c9e\u2ca0\u2ca2\u2ca4\u2ca6\u2ca8\u2caa\u2cac\u2cae\u2cb0\u2cb2\u2cb4\u2cb6\u2cb8\u2cba\u2cbc\u2cbe\u2cc0\u2cc2\u2cc4\u2cc6\u2cc8\u2cca\u2ccc\u2cce\u2cd0\u2cd2\u2cd4\u2cd6\u2cd8\u2cda\u2cdc\u2cde\u2ce0\u2ce2\u2ceb\u2ced\u2cf2\ua640\ua642\ua644\ua646\ua648\ua64a\ua64c\ua64e\ua650\ua652\ua654\ua656\ua658\ua65a\ua65c\ua65e\ua660\ua662\ua664\ua666\ua668\ua66a\ua66c\ua680\ua682\ua684\ua686\ua688\ua68a\ua68c\ua68e\ua690\ua692\ua694\ua696\ua698\ua69a\ua722\ua724\u
a726\ua728\ua72a\ua72c\ua72e\ua732\ua734\ua736\ua738\ua73a\ua73c\ua73e\ua740\ua742\ua744\ua746\ua748\ua74a\ua74c\ua74e\ua750\ua752\ua754\ua756\ua758\ua75a\ua75c\ua75e\ua760\ua762\ua764\ua766\ua768\ua76a\ua76c\ua76e\ua779\ua77b\ua77d-\ua77e\ua780\ua782\ua784\ua786\ua78b\ua78d\ua790\ua792\ua796\ua798\ua79a\ua79c\ua79e\ua7a0\ua7a2\ua7a4\ua7a6\ua7a8\ua7aa-\ua7ae\ua7b0-\ua7b4\ua7b6\ua7b8\uff21-\uff3a\U00010400-\U00010427\U000104b0-\U000104d3\U00010c80-\U00010cb2\U000118a0-\U000118bf\U00016e40-\U00016e5f\U0001d400-\U0001d419\U0001d434-\U0001d44d\U0001d468-\U0001d481\U0001d49c\U0001d49e-\U0001d49f\U0001d4a2\U0001d4a5-\U0001d4a6\U0001d4a9-\U0001d4ac\U0001d4ae-\U0001d4b5\U0001d4d0-\U0001d4e9\U0001d504-\U0001d505\U0001d507-\U0001d50a\U0001d50d-\U0001d514\U0001d516-\U0001d51c\U0001d538-\U0001d539\U0001d53b-\U0001d53e\U0001d540-\U0001d544\U0001d546\U0001d54a-\U0001d550\U0001d56c-\U0001d585\U0001d5a0-\U0001d5b9\U0001d5d4-\U0001d5ed\U0001d608-\U0001d621\U0001d63c-\U0001d655\U0001d670-\U0001d689\U0001d6a8-\U0001d6c0\U0001d6e2-\U0001d6fa\U0001d71c-\U0001d734\U0001d756-\U0001d76e\U0001d790-\U0001d7a8\U0001d7ca\U0001e900-\U0001e921' - -Mc = '\u0903\u093b\u093e-\u0940\u0949-\u094c\u094e-\u094f\u0982-\u0983\u09be-\u09c0\u09c7-\u09c8\u09cb-\u09cc\u09d7\u0a03\u0a3e-\u0a40\u0a83\u0abe-\u0ac0\u0ac9\u0acb-\u0acc\u0b02-\u0b03\u0b3e\u0b40\u0b47-\u0b48\u0b4b-\u0b4c\u0b57\u0bbe-\u0bbf\u0bc1-\u0bc2\u0bc6-\u0bc8\u0bca-\u0bcc\u0bd7\u0c01-\u0c03\u0c41-\u0c44\u0c82-\u0c83\u0cbe\u0cc0-\u0cc4\u0cc7-\u0cc8\u0cca-\u0ccb\u0cd5-\u0cd6\u0d02-\u0d03\u0d3e-\u0d40\u0d46-\u0d48\u0d4a-\u0d4c\u0d57\u0d82-\u0d83\u0dcf-\u0dd1\u0dd8-\u0ddf\u0df2-\u0df3\u0f3e-\u0f3f\u0f7f\u102b-\u102c\u1031\u1038\u103b-\u103c\u1056-\u1057\u1062-\u1064\u1067-\u106d\u1083-\u1084\u1087-\u108c\u108f\u109a-\u109c\u17b6\u17be-\u17c5\u17c7-\u17c8\u1923-\u1926\u1929-\u192b\u1930-\u1931\u1933-\u1938\u1a19-\u1a1a\u1a55\u1a57\u1a61\u1a63-\u1a64\u1a6d-\u1a72\u1b04\u1b35\u1b3b\u1b3d-\u1b41\u1b43-\u1b44\u1b82\u1ba1\u1ba6-\u1ba7\u1baa\u1be7\u1bea-\u1bec\u1bee\u1bf2-\u1bf3\u1c24-\u1c2b\u1c34-\u1c35\u1ce1\u1cf2-\u1cf3\u1cf7\u302e-\u302f\ua823-\ua824\ua827\ua880-\ua881\ua8b4-\ua8c3\ua952-\ua953\ua983\ua9b4-\ua9b5\ua9ba-\ua9bb\ua9bd-\ua9c0\uaa2f-\uaa30\uaa33-\uaa34\uaa4d\uaa7b\uaa7d\uaaeb\uaaee-\uaaef\uaaf5\uabe3-\uabe4\uabe6-\uabe7\uabe9-\uabea\uabec\U00011000\U00011002\U00011082\U000110b0-\U000110b2\U000110b7-\U000110b8\U0001112c\U00011145-\U00011146\U00011182\U000111b3-\U000111b5\U000111bf-\U000111c0\U0001122c-\U0001122e\U00011232-\U00011233\U00011235\U000112e0-\U000112e2\U00011302-\U00011303\U0001133e-\U0001133f\U00011341-\U00011344\U00011347-\U00011348\U0001134b-\U0001134d\U00011357\U00011362-\U00011363\U00011435-\U00011437\U00011440-\U00011441\U00011445\U000114b0-\U000114b2\U000114b9\U000114bb-\U000114be\U000114c1\U000115af-\U000115b1\U000115b8-\U000115bb\U000115be\U00011630-\U00011632\U0001163b-\U0001163c\U0001163e\U000116ac\U000116ae-\U000116af\U000116b6\U00011720-\U00011721\U00011726\U0001182c-\U0001182e\U00011838\U00011a39\U00011a57-\U00011a58\U00011a97\U00011c2f\U00011c3e\U00011ca9\U00011cb1\U00011cb4\U00011d8a-\U00011d8e\U00011d93-\U00011d94\U00011d96\U00011ef5-\U00011ef6\U00016f51-\U00016f7e\U0001d165-\U0001d166\U0001d16d-\U0001d172' - -Me = '\u0488-\u0489\u1abe\u20dd-\u20e0\u20e2-\u20e4\ua670-\ua672' - -Mn = 
'\u0300-\u036f\u0483-\u0487\u0591-\u05bd\u05bf\u05c1-\u05c2\u05c4-\u05c5\u05c7\u0610-\u061a\u064b-\u065f\u0670\u06d6-\u06dc\u06df-\u06e4\u06e7-\u06e8\u06ea-\u06ed\u0711\u0730-\u074a\u07a6-\u07b0\u07eb-\u07f3\u07fd\u0816-\u0819\u081b-\u0823\u0825-\u0827\u0829-\u082d\u0859-\u085b\u08d3-\u08e1\u08e3-\u0902\u093a\u093c\u0941-\u0948\u094d\u0951-\u0957\u0962-\u0963\u0981\u09bc\u09c1-\u09c4\u09cd\u09e2-\u09e3\u09fe\u0a01-\u0a02\u0a3c\u0a41-\u0a42\u0a47-\u0a48\u0a4b-\u0a4d\u0a51\u0a70-\u0a71\u0a75\u0a81-\u0a82\u0abc\u0ac1-\u0ac5\u0ac7-\u0ac8\u0acd\u0ae2-\u0ae3\u0afa-\u0aff\u0b01\u0b3c\u0b3f\u0b41-\u0b44\u0b4d\u0b56\u0b62-\u0b63\u0b82\u0bc0\u0bcd\u0c00\u0c04\u0c3e-\u0c40\u0c46-\u0c48\u0c4a-\u0c4d\u0c55-\u0c56\u0c62-\u0c63\u0c81\u0cbc\u0cbf\u0cc6\u0ccc-\u0ccd\u0ce2-\u0ce3\u0d00-\u0d01\u0d3b-\u0d3c\u0d41-\u0d44\u0d4d\u0d62-\u0d63\u0dca\u0dd2-\u0dd4\u0dd6\u0e31\u0e34-\u0e3a\u0e47-\u0e4e\u0eb1\u0eb4-\u0eb9\u0ebb-\u0ebc\u0ec8-\u0ecd\u0f18-\u0f19\u0f35\u0f37\u0f39\u0f71-\u0f7e\u0f80-\u0f84\u0f86-\u0f87\u0f8d-\u0f97\u0f99-\u0fbc\u0fc6\u102d-\u1030\u1032-\u1037\u1039-\u103a\u103d-\u103e\u1058-\u1059\u105e-\u1060\u1071-\u1074\u1082\u1085-\u1086\u108d\u109d\u135d-\u135f\u1712-\u1714\u1732-\u1734\u1752-\u1753\u1772-\u1773\u17b4-\u17b5\u17b7-\u17bd\u17c6\u17c9-\u17d3\u17dd\u180b-\u180d\u1885-\u1886\u18a9\u1920-\u1922\u1927-\u1928\u1932\u1939-\u193b\u1a17-\u1a18\u1a1b\u1a56\u1a58-\u1a5e\u1a60\u1a62\u1a65-\u1a6c\u1a73-\u1a7c\u1a7f\u1ab0-\u1abd\u1b00-\u1b03\u1b34\u1b36-\u1b3a\u1b3c\u1b42\u1b6b-\u1b73\u1b80-\u1b81\u1ba2-\u1ba5\u1ba8-\u1ba9\u1bab-\u1bad\u1be6\u1be8-\u1be9\u1bed\u1bef-\u1bf1\u1c2c-\u1c33\u1c36-\u1c37\u1cd0-\u1cd2\u1cd4-\u1ce0\u1ce2-\u1ce8\u1ced\u1cf4\u1cf8-\u1cf9\u1dc0-\u1df9\u1dfb-\u1dff\u20d0-\u20dc\u20e1\u20e5-\u20f0\u2cef-\u2cf1\u2d7f\u2de0-\u2dff\u302a-\u302d\u3099-\u309a\ua66f\ua674-\ua67d\ua69e-\ua69f\ua6f0-\ua6f1\ua802\ua806\ua80b\ua825-\ua826\ua8c4-\ua8c5\ua8e0-\ua8f1\ua8ff\ua926-\ua92d\ua947-\ua951\ua980-\ua982\ua9b3\ua9b6-\ua9b9\ua9bc\ua9e5\uaa29-\uaa2e\uaa31-\uaa32\uaa35-\uaa36\uaa43\uaa4c\uaa7c\uaab0\uaab2-\uaab4\uaab7-\uaab8\uaabe-\uaabf\uaac1\uaaec-\uaaed\uaaf6\uabe5\uabe8\uabed\ufb1e\ufe00-\ufe0f\ufe20-\ufe2f\U000101fd\U000102e0\U00010376-\U0001037a\U00010a01-\U00010a03\U00010a05-\U00010a06\U00010a0c-\U00010a0f\U00010a38-\U00010a3a\U00010a3f\U00010ae5-\U00010ae6\U00010d24-\U00010d27\U00010f46-\U00010f50\U00011001\U00011038-\U00011046\U0001107f-\U00011081\U000110b3-\U000110b6\U000110b9-\U000110ba\U00011100-\U00011102\U00011127-\U0001112b\U0001112d-\U00011134\U00011173\U00011180-\U00011181\U000111b6-\U000111be\U000111c9-\U000111cc\U0001122f-\U00011231\U00011234\U00011236-\U00011237\U0001123e\U000112df\U000112e3-\U000112ea\U00011300-\U00011301\U0001133b-\U0001133c\U00011340\U00011366-\U0001136c\U00011370-\U00011374\U00011438-\U0001143f\U00011442-\U00011444\U00011446\U0001145e\U000114b3-\U000114b8\U000114ba\U000114bf-\U000114c0\U000114c2-\U000114c3\U000115b2-\U000115b5\U000115bc-\U000115bd\U000115bf-\U000115c0\U000115dc-\U000115dd\U00011633-\U0001163a\U0001163d\U0001163f-\U00011640\U000116ab\U000116ad\U000116b0-\U000116b5\U000116b7\U0001171d-\U0001171f\U00011722-\U00011725\U00011727-\U0001172b\U0001182f-\U00011837\U00011839-\U0001183a\U00011a01-\U00011a0a\U00011a33-\U00011a38\U00011a3b-\U00011a3e\U00011a47\U00011a51-\U00011a56\U00011a59-\U00011a5b\U00011a8a-\U00011a96\U00011a98-\U00011a99\U00011c30-\U00011c36\U00011c38-\U00011c3d\U00011c3f\U00011c92-\U00011ca7\U00011caa-\U00011cb0\U00011cb2-\U00011cb3\U00011cb5-\U00011cb6\U00011d31-\U00011d36\U00011d3a\U00011d3c-\U00011d3d\U00011d3f-\U000
11d45\U00011d47\U00011d90-\U00011d91\U00011d95\U00011d97\U00011ef3-\U00011ef4\U00016af0-\U00016af4\U00016b30-\U00016b36\U00016f8f-\U00016f92\U0001bc9d-\U0001bc9e\U0001d167-\U0001d169\U0001d17b-\U0001d182\U0001d185-\U0001d18b\U0001d1aa-\U0001d1ad\U0001d242-\U0001d244\U0001da00-\U0001da36\U0001da3b-\U0001da6c\U0001da75\U0001da84\U0001da9b-\U0001da9f\U0001daa1-\U0001daaf\U0001e000-\U0001e006\U0001e008-\U0001e018\U0001e01b-\U0001e021\U0001e023-\U0001e024\U0001e026-\U0001e02a\U0001e8d0-\U0001e8d6\U0001e944-\U0001e94a\U000e0100-\U000e01ef' - -Nd = '0-9\u0660-\u0669\u06f0-\u06f9\u07c0-\u07c9\u0966-\u096f\u09e6-\u09ef\u0a66-\u0a6f\u0ae6-\u0aef\u0b66-\u0b6f\u0be6-\u0bef\u0c66-\u0c6f\u0ce6-\u0cef\u0d66-\u0d6f\u0de6-\u0def\u0e50-\u0e59\u0ed0-\u0ed9\u0f20-\u0f29\u1040-\u1049\u1090-\u1099\u17e0-\u17e9\u1810-\u1819\u1946-\u194f\u19d0-\u19d9\u1a80-\u1a89\u1a90-\u1a99\u1b50-\u1b59\u1bb0-\u1bb9\u1c40-\u1c49\u1c50-\u1c59\ua620-\ua629\ua8d0-\ua8d9\ua900-\ua909\ua9d0-\ua9d9\ua9f0-\ua9f9\uaa50-\uaa59\uabf0-\uabf9\uff10-\uff19\U000104a0-\U000104a9\U00010d30-\U00010d39\U00011066-\U0001106f\U000110f0-\U000110f9\U00011136-\U0001113f\U000111d0-\U000111d9\U000112f0-\U000112f9\U00011450-\U00011459\U000114d0-\U000114d9\U00011650-\U00011659\U000116c0-\U000116c9\U00011730-\U00011739\U000118e0-\U000118e9\U00011c50-\U00011c59\U00011d50-\U00011d59\U00011da0-\U00011da9\U00016a60-\U00016a69\U00016b50-\U00016b59\U0001d7ce-\U0001d7ff\U0001e950-\U0001e959' - -Nl = '\u16ee-\u16f0\u2160-\u2182\u2185-\u2188\u3007\u3021-\u3029\u3038-\u303a\ua6e6-\ua6ef\U00010140-\U00010174\U00010341\U0001034a\U000103d1-\U000103d5\U00012400-\U0001246e' - -No = '\xb2-\xb3\xb9\xbc-\xbe\u09f4-\u09f9\u0b72-\u0b77\u0bf0-\u0bf2\u0c78-\u0c7e\u0d58-\u0d5e\u0d70-\u0d78\u0f2a-\u0f33\u1369-\u137c\u17f0-\u17f9\u19da\u2070\u2074-\u2079\u2080-\u2089\u2150-\u215f\u2189\u2460-\u249b\u24ea-\u24ff\u2776-\u2793\u2cfd\u3192-\u3195\u3220-\u3229\u3248-\u324f\u3251-\u325f\u3280-\u3289\u32b1-\u32bf\ua830-\ua835\U00010107-\U00010133\U00010175-\U00010178\U0001018a-\U0001018b\U000102e1-\U000102fb\U00010320-\U00010323\U00010858-\U0001085f\U00010879-\U0001087f\U000108a7-\U000108af\U000108fb-\U000108ff\U00010916-\U0001091b\U000109bc-\U000109bd\U000109c0-\U000109cf\U000109d2-\U000109ff\U00010a40-\U00010a48\U00010a7d-\U00010a7e\U00010a9d-\U00010a9f\U00010aeb-\U00010aef\U00010b58-\U00010b5f\U00010b78-\U00010b7f\U00010ba9-\U00010baf\U00010cfa-\U00010cff\U00010e60-\U00010e7e\U00010f1d-\U00010f26\U00010f51-\U00010f54\U00011052-\U00011065\U000111e1-\U000111f4\U0001173a-\U0001173b\U000118ea-\U000118f2\U00011c5a-\U00011c6c\U00016b5b-\U00016b61\U00016e80-\U00016e96\U0001d2e0-\U0001d2f3\U0001d360-\U0001d378\U0001e8c7-\U0001e8cf\U0001ec71-\U0001ecab\U0001ecad-\U0001ecaf\U0001ecb1-\U0001ecb4\U0001f100-\U0001f10c' - -Pc = '_\u203f-\u2040\u2054\ufe33-\ufe34\ufe4d-\ufe4f\uff3f' - -Pd = '\\-\u058a\u05be\u1400\u1806\u2010-\u2015\u2e17\u2e1a\u2e3a-\u2e3b\u2e40\u301c\u3030\u30a0\ufe31-\ufe32\ufe58\ufe63\uff0d' - -Pe = ')\\]}\u0f3b\u0f3d\u169c\u2046\u207e\u208e\u2309\u230b\u232a\u2769\u276b\u276d\u276f\u2771\u2773\u2775\u27c6\u27e7\u27e9\u27eb\u27ed\u27ef\u2984\u2986\u2988\u298a\u298c\u298e\u2990\u2992\u2994\u2996\u2998\u29d9\u29db\u29fd\u2e23\u2e25\u2e27\u2e29\u3009\u300b\u300d\u300f\u3011\u3015\u3017\u3019\u301b\u301e-\u301f\ufd3e\ufe18\ufe36\ufe38\ufe3a\ufe3c\ufe3e\ufe40\ufe42\ufe44\ufe48\ufe5a\ufe5c\ufe5e\uff09\uff3d\uff5d\uff60\uff63' - -Pf = '\xbb\u2019\u201d\u203a\u2e03\u2e05\u2e0a\u2e0d\u2e1d\u2e21' - -Pi = '\xab\u2018\u201b-\u201c\u201f\u2039\u2e02\u2e04\u2e09\u2e0c\u2e1c\u2e20' - -Po = 
"!-#%-'*,.-/:-;?-@\\\\\xa1\xa7\xb6-\xb7\xbf\u037e\u0387\u055a-\u055f\u0589\u05c0\u05c3\u05c6\u05f3-\u05f4\u0609-\u060a\u060c-\u060d\u061b\u061e-\u061f\u066a-\u066d\u06d4\u0700-\u070d\u07f7-\u07f9\u0830-\u083e\u085e\u0964-\u0965\u0970\u09fd\u0a76\u0af0\u0c84\u0df4\u0e4f\u0e5a-\u0e5b\u0f04-\u0f12\u0f14\u0f85\u0fd0-\u0fd4\u0fd9-\u0fda\u104a-\u104f\u10fb\u1360-\u1368\u166d-\u166e\u16eb-\u16ed\u1735-\u1736\u17d4-\u17d6\u17d8-\u17da\u1800-\u1805\u1807-\u180a\u1944-\u1945\u1a1e-\u1a1f\u1aa0-\u1aa6\u1aa8-\u1aad\u1b5a-\u1b60\u1bfc-\u1bff\u1c3b-\u1c3f\u1c7e-\u1c7f\u1cc0-\u1cc7\u1cd3\u2016-\u2017\u2020-\u2027\u2030-\u2038\u203b-\u203e\u2041-\u2043\u2047-\u2051\u2053\u2055-\u205e\u2cf9-\u2cfc\u2cfe-\u2cff\u2d70\u2e00-\u2e01\u2e06-\u2e08\u2e0b\u2e0e-\u2e16\u2e18-\u2e19\u2e1b\u2e1e-\u2e1f\u2e2a-\u2e2e\u2e30-\u2e39\u2e3c-\u2e3f\u2e41\u2e43-\u2e4e\u3001-\u3003\u303d\u30fb\ua4fe-\ua4ff\ua60d-\ua60f\ua673\ua67e\ua6f2-\ua6f7\ua874-\ua877\ua8ce-\ua8cf\ua8f8-\ua8fa\ua8fc\ua92e-\ua92f\ua95f\ua9c1-\ua9cd\ua9de-\ua9df\uaa5c-\uaa5f\uaade-\uaadf\uaaf0-\uaaf1\uabeb\ufe10-\ufe16\ufe19\ufe30\ufe45-\ufe46\ufe49-\ufe4c\ufe50-\ufe52\ufe54-\ufe57\ufe5f-\ufe61\ufe68\ufe6a-\ufe6b\uff01-\uff03\uff05-\uff07\uff0a\uff0c\uff0e-\uff0f\uff1a-\uff1b\uff1f-\uff20\uff3c\uff61\uff64-\uff65\U00010100-\U00010102\U0001039f\U000103d0\U0001056f\U00010857\U0001091f\U0001093f\U00010a50-\U00010a58\U00010a7f\U00010af0-\U00010af6\U00010b39-\U00010b3f\U00010b99-\U00010b9c\U00010f55-\U00010f59\U00011047-\U0001104d\U000110bb-\U000110bc\U000110be-\U000110c1\U00011140-\U00011143\U00011174-\U00011175\U000111c5-\U000111c8\U000111cd\U000111db\U000111dd-\U000111df\U00011238-\U0001123d\U000112a9\U0001144b-\U0001144f\U0001145b\U0001145d\U000114c6\U000115c1-\U000115d7\U00011641-\U00011643\U00011660-\U0001166c\U0001173c-\U0001173e\U0001183b\U00011a3f-\U00011a46\U00011a9a-\U00011a9c\U00011a9e-\U00011aa2\U00011c41-\U00011c45\U00011c70-\U00011c71\U00011ef7-\U00011ef8\U00012470-\U00012474\U00016a6e-\U00016a6f\U00016af5\U00016b37-\U00016b3b\U00016b44\U00016e97-\U00016e9a\U0001bc9f\U0001da87-\U0001da8b\U0001e95e-\U0001e95f" - -Ps = '(\\[{\u0f3a\u0f3c\u169b\u201a\u201e\u2045\u207d\u208d\u2308\u230a\u2329\u2768\u276a\u276c\u276e\u2770\u2772\u2774\u27c5\u27e6\u27e8\u27ea\u27ec\u27ee\u2983\u2985\u2987\u2989\u298b\u298d\u298f\u2991\u2993\u2995\u2997\u29d8\u29da\u29fc\u2e22\u2e24\u2e26\u2e28\u2e42\u3008\u300a\u300c\u300e\u3010\u3014\u3016\u3018\u301a\u301d\ufd3f\ufe17\ufe35\ufe37\ufe39\ufe3b\ufe3d\ufe3f\ufe41\ufe43\ufe47\ufe59\ufe5b\ufe5d\uff08\uff3b\uff5b\uff5f\uff62' - -Sc = '$\xa2-\xa5\u058f\u060b\u07fe-\u07ff\u09f2-\u09f3\u09fb\u0af1\u0bf9\u0e3f\u17db\u20a0-\u20bf\ua838\ufdfc\ufe69\uff04\uffe0-\uffe1\uffe5-\uffe6\U0001ecb0' - -Sk = '\\^`\xa8\xaf\xb4\xb8\u02c2-\u02c5\u02d2-\u02df\u02e5-\u02eb\u02ed\u02ef-\u02ff\u0375\u0384-\u0385\u1fbd\u1fbf-\u1fc1\u1fcd-\u1fcf\u1fdd-\u1fdf\u1fed-\u1fef\u1ffd-\u1ffe\u309b-\u309c\ua700-\ua716\ua720-\ua721\ua789-\ua78a\uab5b\ufbb2-\ufbc1\uff3e\uff40\uffe3\U0001f3fb-\U0001f3ff' - -Sm = 
'+<->|~\xac\xb1\xd7\xf7\u03f6\u0606-\u0608\u2044\u2052\u207a-\u207c\u208a-\u208c\u2118\u2140-\u2144\u214b\u2190-\u2194\u219a-\u219b\u21a0\u21a3\u21a6\u21ae\u21ce-\u21cf\u21d2\u21d4\u21f4-\u22ff\u2320-\u2321\u237c\u239b-\u23b3\u23dc-\u23e1\u25b7\u25c1\u25f8-\u25ff\u266f\u27c0-\u27c4\u27c7-\u27e5\u27f0-\u27ff\u2900-\u2982\u2999-\u29d7\u29dc-\u29fb\u29fe-\u2aff\u2b30-\u2b44\u2b47-\u2b4c\ufb29\ufe62\ufe64-\ufe66\uff0b\uff1c-\uff1e\uff5c\uff5e\uffe2\uffe9-\uffec\U0001d6c1\U0001d6db\U0001d6fb\U0001d715\U0001d735\U0001d74f\U0001d76f\U0001d789\U0001d7a9\U0001d7c3\U0001eef0-\U0001eef1' - -So = '\xa6\xa9\xae\xb0\u0482\u058d-\u058e\u060e-\u060f\u06de\u06e9\u06fd-\u06fe\u07f6\u09fa\u0b70\u0bf3-\u0bf8\u0bfa\u0c7f\u0d4f\u0d79\u0f01-\u0f03\u0f13\u0f15-\u0f17\u0f1a-\u0f1f\u0f34\u0f36\u0f38\u0fbe-\u0fc5\u0fc7-\u0fcc\u0fce-\u0fcf\u0fd5-\u0fd8\u109e-\u109f\u1390-\u1399\u1940\u19de-\u19ff\u1b61-\u1b6a\u1b74-\u1b7c\u2100-\u2101\u2103-\u2106\u2108-\u2109\u2114\u2116-\u2117\u211e-\u2123\u2125\u2127\u2129\u212e\u213a-\u213b\u214a\u214c-\u214d\u214f\u218a-\u218b\u2195-\u2199\u219c-\u219f\u21a1-\u21a2\u21a4-\u21a5\u21a7-\u21ad\u21af-\u21cd\u21d0-\u21d1\u21d3\u21d5-\u21f3\u2300-\u2307\u230c-\u231f\u2322-\u2328\u232b-\u237b\u237d-\u239a\u23b4-\u23db\u23e2-\u2426\u2440-\u244a\u249c-\u24e9\u2500-\u25b6\u25b8-\u25c0\u25c2-\u25f7\u2600-\u266e\u2670-\u2767\u2794-\u27bf\u2800-\u28ff\u2b00-\u2b2f\u2b45-\u2b46\u2b4d-\u2b73\u2b76-\u2b95\u2b98-\u2bc8\u2bca-\u2bfe\u2ce5-\u2cea\u2e80-\u2e99\u2e9b-\u2ef3\u2f00-\u2fd5\u2ff0-\u2ffb\u3004\u3012-\u3013\u3020\u3036-\u3037\u303e-\u303f\u3190-\u3191\u3196-\u319f\u31c0-\u31e3\u3200-\u321e\u322a-\u3247\u3250\u3260-\u327f\u328a-\u32b0\u32c0-\u32fe\u3300-\u33ff\u4dc0-\u4dff\ua490-\ua4c6\ua828-\ua82b\ua836-\ua837\ua839\uaa77-\uaa79\ufdfd\uffe4\uffe8\uffed-\uffee\ufffc-\ufffd\U00010137-\U0001013f\U00010179-\U00010189\U0001018c-\U0001018e\U00010190-\U0001019b\U000101a0\U000101d0-\U000101fc\U00010877-\U00010878\U00010ac8\U0001173f\U00016b3c-\U00016b3f\U00016b45\U0001bc9c\U0001d000-\U0001d0f5\U0001d100-\U0001d126\U0001d129-\U0001d164\U0001d16a-\U0001d16c\U0001d183-\U0001d184\U0001d18c-\U0001d1a9\U0001d1ae-\U0001d1e8\U0001d200-\U0001d241\U0001d245\U0001d300-\U0001d356\U0001d800-\U0001d9ff\U0001da37-\U0001da3a\U0001da6d-\U0001da74\U0001da76-\U0001da83\U0001da85-\U0001da86\U0001ecac\U0001f000-\U0001f02b\U0001f030-\U0001f093\U0001f0a0-\U0001f0ae\U0001f0b1-\U0001f0bf\U0001f0c1-\U0001f0cf\U0001f0d1-\U0001f0f5\U0001f110-\U0001f16b\U0001f170-\U0001f1ac\U0001f1e6-\U0001f202\U0001f210-\U0001f23b\U0001f240-\U0001f248\U0001f250-\U0001f251\U0001f260-\U0001f265\U0001f300-\U0001f3fa\U0001f400-\U0001f6d4\U0001f6e0-\U0001f6ec\U0001f6f0-\U0001f6f9\U0001f700-\U0001f773\U0001f780-\U0001f7d8\U0001f800-\U0001f80b\U0001f810-\U0001f847\U0001f850-\U0001f859\U0001f860-\U0001f887\U0001f890-\U0001f8ad\U0001f900-\U0001f90b\U0001f910-\U0001f93e\U0001f940-\U0001f970\U0001f973-\U0001f976\U0001f97a\U0001f97c-\U0001f9a2\U0001f9b0-\U0001f9b9\U0001f9c0-\U0001f9c2\U0001f9d0-\U0001f9ff\U0001fa60-\U0001fa6d' - -Zl = '\u2028' - -Zp = '\u2029' - -Zs = ' \xa0\u1680\u2000-\u200a\u202f\u205f\u3000' - -xid_continue = 
'0-9A-Z_a-z\xaa\xb5\xb7\xba\xc0-\xd6\xd8-\xf6\xf8-\u02c1\u02c6-\u02d1\u02e0-\u02e4\u02ec\u02ee\u0300-\u0374\u0376-\u0377\u037b-\u037d\u037f\u0386-\u038a\u038c\u038e-\u03a1\u03a3-\u03f5\u03f7-\u0481\u0483-\u0487\u048a-\u052f\u0531-\u0556\u0559\u0560-\u0588\u0591-\u05bd\u05bf\u05c1-\u05c2\u05c4-\u05c5\u05c7\u05d0-\u05ea\u05ef-\u05f2\u0610-\u061a\u0620-\u0669\u066e-\u06d3\u06d5-\u06dc\u06df-\u06e8\u06ea-\u06fc\u06ff\u0710-\u074a\u074d-\u07b1\u07c0-\u07f5\u07fa\u07fd\u0800-\u082d\u0840-\u085b\u0860-\u086a\u08a0-\u08b4\u08b6-\u08bd\u08d3-\u08e1\u08e3-\u0963\u0966-\u096f\u0971-\u0983\u0985-\u098c\u098f-\u0990\u0993-\u09a8\u09aa-\u09b0\u09b2\u09b6-\u09b9\u09bc-\u09c4\u09c7-\u09c8\u09cb-\u09ce\u09d7\u09dc-\u09dd\u09df-\u09e3\u09e6-\u09f1\u09fc\u09fe\u0a01-\u0a03\u0a05-\u0a0a\u0a0f-\u0a10\u0a13-\u0a28\u0a2a-\u0a30\u0a32-\u0a33\u0a35-\u0a36\u0a38-\u0a39\u0a3c\u0a3e-\u0a42\u0a47-\u0a48\u0a4b-\u0a4d\u0a51\u0a59-\u0a5c\u0a5e\u0a66-\u0a75\u0a81-\u0a83\u0a85-\u0a8d\u0a8f-\u0a91\u0a93-\u0aa8\u0aaa-\u0ab0\u0ab2-\u0ab3\u0ab5-\u0ab9\u0abc-\u0ac5\u0ac7-\u0ac9\u0acb-\u0acd\u0ad0\u0ae0-\u0ae3\u0ae6-\u0aef\u0af9-\u0aff\u0b01-\u0b03\u0b05-\u0b0c\u0b0f-\u0b10\u0b13-\u0b28\u0b2a-\u0b30\u0b32-\u0b33\u0b35-\u0b39\u0b3c-\u0b44\u0b47-\u0b48\u0b4b-\u0b4d\u0b56-\u0b57\u0b5c-\u0b5d\u0b5f-\u0b63\u0b66-\u0b6f\u0b71\u0b82-\u0b83\u0b85-\u0b8a\u0b8e-\u0b90\u0b92-\u0b95\u0b99-\u0b9a\u0b9c\u0b9e-\u0b9f\u0ba3-\u0ba4\u0ba8-\u0baa\u0bae-\u0bb9\u0bbe-\u0bc2\u0bc6-\u0bc8\u0bca-\u0bcd\u0bd0\u0bd7\u0be6-\u0bef\u0c00-\u0c0c\u0c0e-\u0c10\u0c12-\u0c28\u0c2a-\u0c39\u0c3d-\u0c44\u0c46-\u0c48\u0c4a-\u0c4d\u0c55-\u0c56\u0c58-\u0c5a\u0c60-\u0c63\u0c66-\u0c6f\u0c80-\u0c83\u0c85-\u0c8c\u0c8e-\u0c90\u0c92-\u0ca8\u0caa-\u0cb3\u0cb5-\u0cb9\u0cbc-\u0cc4\u0cc6-\u0cc8\u0cca-\u0ccd\u0cd5-\u0cd6\u0cde\u0ce0-\u0ce3\u0ce6-\u0cef\u0cf1-\u0cf2\u0d00-\u0d03\u0d05-\u0d0c\u0d0e-\u0d10\u0d12-\u0d44\u0d46-\u0d48\u0d4a-\u0d4e\u0d54-\u0d57\u0d5f-\u0d63\u0d66-\u0d6f\u0d7a-\u0d7f\u0d82-\u0d83\u0d85-\u0d96\u0d9a-\u0db1\u0db3-\u0dbb\u0dbd\u0dc0-\u0dc6\u0dca\u0dcf-\u0dd4\u0dd6\u0dd8-\u0ddf\u0de6-\u0def\u0df2-\u0df3\u0e01-\u0e3a\u0e40-\u0e4e\u0e50-\u0e59\u0e81-\u0e82\u0e84\u0e87-\u0e88\u0e8a\u0e8d\u0e94-\u0e97\u0e99-\u0e9f\u0ea1-\u0ea3\u0ea5\u0ea7\u0eaa-\u0eab\u0ead-\u0eb9\u0ebb-\u0ebd\u0ec0-\u0ec4\u0ec6\u0ec8-\u0ecd\u0ed0-\u0ed9\u0edc-\u0edf\u0f00\u0f18-\u0f19\u0f20-\u0f29\u0f35\u0f37\u0f39\u0f3e-\u0f47\u0f49-\u0f6c\u0f71-\u0f84\u0f86-\u0f97\u0f99-\u0fbc\u0fc6\u1000-\u1049\u1050-\u109d\u10a0-\u10c5\u10c7\u10cd\u10d0-\u10fa\u10fc-\u1248\u124a-\u124d\u1250-\u1256\u1258\u125a-\u125d\u1260-\u1288\u128a-\u128d\u1290-\u12b0\u12b2-\u12b5\u12b8-\u12be\u12c0\u12c2-\u12c5\u12c8-\u12d6\u12d8-\u1310\u1312-\u1315\u1318-\u135a\u135d-\u135f\u1369-\u1371\u1380-\u138f\u13a0-\u13f5\u13f8-\u13fd\u1401-\u166c\u166f-\u167f\u1681-\u169a\u16a0-\u16ea\u16ee-\u16f8\u1700-\u170c\u170e-\u1714\u1720-\u1734\u1740-\u1753\u1760-\u176c\u176e-\u1770\u1772-\u1773\u1780-\u17d3\u17d7\u17dc-\u17dd\u17e0-\u17e9\u180b-\u180d\u1810-\u1819\u1820-\u1878\u1880-\u18aa\u18b0-\u18f5\u1900-\u191e\u1920-\u192b\u1930-\u193b\u1946-\u196d\u1970-\u1974\u1980-\u19ab\u19b0-\u19c9\u19d0-\u19da\u1a00-\u1a1b\u1a20-\u1a5e\u1a60-\u1a7c\u1a7f-\u1a89\u1a90-\u1a99\u1aa7\u1ab0-\u1abd\u1b00-\u1b4b\u1b50-\u1b59\u1b6b-\u1b73\u1b80-\u1bf3\u1c00-\u1c37\u1c40-\u1c49\u1c4d-\u1c7d\u1c80-\u1c88\u1c90-\u1cba\u1cbd-\u1cbf\u1cd0-\u1cd2\u1cd4-\u1cf9\u1d00-\u1df9\u1dfb-\u1f15\u1f18-\u1f1d\u1f20-\u1f45\u1f48-\u1f4d\u1f50-\u1f57\u1f59\u1f5b\u1f5d\u1f5f-\u1f7d\u1f80-\u1fb4\u1fb6-\u1fbc\u1fbe\u1fc2-\u1fc4\u1fc6-\u1fcc\u1fd0-\u1fd3\u1fd6-\u1fdb\u1fe0
-\u1fec\u1ff2-\u1ff4\u1ff6-\u1ffc\u203f-\u2040\u2054\u2071\u207f\u2090-\u209c\u20d0-\u20dc\u20e1\u20e5-\u20f0\u2102\u2107\u210a-\u2113\u2115\u2118-\u211d\u2124\u2126\u2128\u212a-\u2139\u213c-\u213f\u2145-\u2149\u214e\u2160-\u2188\u2c00-\u2c2e\u2c30-\u2c5e\u2c60-\u2ce4\u2ceb-\u2cf3\u2d00-\u2d25\u2d27\u2d2d\u2d30-\u2d67\u2d6f\u2d7f-\u2d96\u2da0-\u2da6\u2da8-\u2dae\u2db0-\u2db6\u2db8-\u2dbe\u2dc0-\u2dc6\u2dc8-\u2dce\u2dd0-\u2dd6\u2dd8-\u2dde\u2de0-\u2dff\u3005-\u3007\u3021-\u302f\u3031-\u3035\u3038-\u303c\u3041-\u3096\u3099-\u309a\u309d-\u309f\u30a1-\u30fa\u30fc-\u30ff\u3105-\u312f\u3131-\u318e\u31a0-\u31ba\u31f0-\u31ff\u3400-\u4db5\u4e00-\u9fef\ua000-\ua48c\ua4d0-\ua4fd\ua500-\ua60c\ua610-\ua62b\ua640-\ua66f\ua674-\ua67d\ua67f-\ua6f1\ua717-\ua71f\ua722-\ua788\ua78b-\ua7b9\ua7f7-\ua827\ua840-\ua873\ua880-\ua8c5\ua8d0-\ua8d9\ua8e0-\ua8f7\ua8fb\ua8fd-\ua92d\ua930-\ua953\ua960-\ua97c\ua980-\ua9c0\ua9cf-\ua9d9\ua9e0-\ua9fe\uaa00-\uaa36\uaa40-\uaa4d\uaa50-\uaa59\uaa60-\uaa76\uaa7a-\uaac2\uaadb-\uaadd\uaae0-\uaaef\uaaf2-\uaaf6\uab01-\uab06\uab09-\uab0e\uab11-\uab16\uab20-\uab26\uab28-\uab2e\uab30-\uab5a\uab5c-\uab65\uab70-\uabea\uabec-\uabed\uabf0-\uabf9\uac00-\ud7a3\ud7b0-\ud7c6\ud7cb-\ud7fb\uf900-\ufa6d\ufa70-\ufad9\ufb00-\ufb06\ufb13-\ufb17\ufb1d-\ufb28\ufb2a-\ufb36\ufb38-\ufb3c\ufb3e\ufb40-\ufb41\ufb43-\ufb44\ufb46-\ufbb1\ufbd3-\ufc5d\ufc64-\ufd3d\ufd50-\ufd8f\ufd92-\ufdc7\ufdf0-\ufdf9\ufe00-\ufe0f\ufe20-\ufe2f\ufe33-\ufe34\ufe4d-\ufe4f\ufe71\ufe73\ufe77\ufe79\ufe7b\ufe7d\ufe7f-\ufefc\uff10-\uff19\uff21-\uff3a\uff3f\uff41-\uff5a\uff66-\uffbe\uffc2-\uffc7\uffca-\uffcf\uffd2-\uffd7\uffda-\uffdc\U00010000-\U0001000b\U0001000d-\U00010026\U00010028-\U0001003a\U0001003c-\U0001003d\U0001003f-\U0001004d\U00010050-\U0001005d\U00010080-\U000100fa\U00010140-\U00010174\U000101fd\U00010280-\U0001029c\U000102a0-\U000102d0\U000102e0\U00010300-\U0001031f\U0001032d-\U0001034a\U00010350-\U0001037a\U00010380-\U0001039d\U000103a0-\U000103c3\U000103c8-\U000103cf\U000103d1-\U000103d5\U00010400-\U0001049d\U000104a0-\U000104a9\U000104b0-\U000104d3\U000104d8-\U000104fb\U00010500-\U00010527\U00010530-\U00010563\U00010600-\U00010736\U00010740-\U00010755\U00010760-\U00010767\U00010800-\U00010805\U00010808\U0001080a-\U00010835\U00010837-\U00010838\U0001083c\U0001083f-\U00010855\U00010860-\U00010876\U00010880-\U0001089e\U000108e0-\U000108f2\U000108f4-\U000108f5\U00010900-\U00010915\U00010920-\U00010939\U00010980-\U000109b7\U000109be-\U000109bf\U00010a00-\U00010a03\U00010a05-\U00010a06\U00010a0c-\U00010a13\U00010a15-\U00010a17\U00010a19-\U00010a35\U00010a38-\U00010a3a\U00010a3f\U00010a60-\U00010a7c\U00010a80-\U00010a9c\U00010ac0-\U00010ac7\U00010ac9-\U00010ae6\U00010b00-\U00010b35\U00010b40-\U00010b55\U00010b60-\U00010b72\U00010b80-\U00010b91\U00010c00-\U00010c48\U00010c80-\U00010cb2\U00010cc0-\U00010cf2\U00010d00-\U00010d27\U00010d30-\U00010d39\U00010f00-\U00010f1c\U00010f27\U00010f30-\U00010f50\U00011000-\U00011046\U00011066-\U0001106f\U0001107f-\U000110ba\U000110d0-\U000110e8\U000110f0-\U000110f9\U00011100-\U00011134\U00011136-\U0001113f\U00011144-\U00011146\U00011150-\U00011173\U00011176\U00011180-\U000111c4\U000111c9-\U000111cc\U000111d0-\U000111da\U000111dc\U00011200-\U00011211\U00011213-\U00011237\U0001123e\U00011280-\U00011286\U00011288\U0001128a-\U0001128d\U0001128f-\U0001129d\U0001129f-\U000112a8\U000112b0-\U000112ea\U000112f0-\U000112f9\U00011300-\U00011303\U00011305-\U0001130c\U0001130f-\U00011310\U00011313-\U00011328\U0001132a-\U00011330\U00011332-\U00011333\U00011335-\U00011339\U0001133b-\U00011344\U00011347-\U0
0011348\U0001134b-\U0001134d\U00011350\U00011357\U0001135d-\U00011363\U00011366-\U0001136c\U00011370-\U00011374\U00011400-\U0001144a\U00011450-\U00011459\U0001145e\U00011480-\U000114c5\U000114c7\U000114d0-\U000114d9\U00011580-\U000115b5\U000115b8-\U000115c0\U000115d8-\U000115dd\U00011600-\U00011640\U00011644\U00011650-\U00011659\U00011680-\U000116b7\U000116c0-\U000116c9\U00011700-\U0001171a\U0001171d-\U0001172b\U00011730-\U00011739\U00011800-\U0001183a\U000118a0-\U000118e9\U000118ff\U00011a00-\U00011a3e\U00011a47\U00011a50-\U00011a83\U00011a86-\U00011a99\U00011a9d\U00011ac0-\U00011af8\U00011c00-\U00011c08\U00011c0a-\U00011c36\U00011c38-\U00011c40\U00011c50-\U00011c59\U00011c72-\U00011c8f\U00011c92-\U00011ca7\U00011ca9-\U00011cb6\U00011d00-\U00011d06\U00011d08-\U00011d09\U00011d0b-\U00011d36\U00011d3a\U00011d3c-\U00011d3d\U00011d3f-\U00011d47\U00011d50-\U00011d59\U00011d60-\U00011d65\U00011d67-\U00011d68\U00011d6a-\U00011d8e\U00011d90-\U00011d91\U00011d93-\U00011d98\U00011da0-\U00011da9\U00011ee0-\U00011ef6\U00012000-\U00012399\U00012400-\U0001246e\U00012480-\U00012543\U00013000-\U0001342e\U00014400-\U00014646\U00016800-\U00016a38\U00016a40-\U00016a5e\U00016a60-\U00016a69\U00016ad0-\U00016aed\U00016af0-\U00016af4\U00016b00-\U00016b36\U00016b40-\U00016b43\U00016b50-\U00016b59\U00016b63-\U00016b77\U00016b7d-\U00016b8f\U00016e40-\U00016e7f\U00016f00-\U00016f44\U00016f50-\U00016f7e\U00016f8f-\U00016f9f\U00016fe0-\U00016fe1\U00017000-\U000187f1\U00018800-\U00018af2\U0001b000-\U0001b11e\U0001b170-\U0001b2fb\U0001bc00-\U0001bc6a\U0001bc70-\U0001bc7c\U0001bc80-\U0001bc88\U0001bc90-\U0001bc99\U0001bc9d-\U0001bc9e\U0001d165-\U0001d169\U0001d16d-\U0001d172\U0001d17b-\U0001d182\U0001d185-\U0001d18b\U0001d1aa-\U0001d1ad\U0001d242-\U0001d244\U0001d400-\U0001d454\U0001d456-\U0001d49c\U0001d49e-\U0001d49f\U0001d4a2\U0001d4a5-\U0001d4a6\U0001d4a9-\U0001d4ac\U0001d4ae-\U0001d4b9\U0001d4bb\U0001d4bd-\U0001d4c3\U0001d4c5-\U0001d505\U0001d507-\U0001d50a\U0001d50d-\U0001d514\U0001d516-\U0001d51c\U0001d51e-\U0001d539\U0001d53b-\U0001d53e\U0001d540-\U0001d544\U0001d546\U0001d54a-\U0001d550\U0001d552-\U0001d6a5\U0001d6a8-\U0001d6c0\U0001d6c2-\U0001d6da\U0001d6dc-\U0001d6fa\U0001d6fc-\U0001d714\U0001d716-\U0001d734\U0001d736-\U0001d74e\U0001d750-\U0001d76e\U0001d770-\U0001d788\U0001d78a-\U0001d7a8\U0001d7aa-\U0001d7c2\U0001d7c4-\U0001d7cb\U0001d7ce-\U0001d7ff\U0001da00-\U0001da36\U0001da3b-\U0001da6c\U0001da75\U0001da84\U0001da9b-\U0001da9f\U0001daa1-\U0001daaf\U0001e000-\U0001e006\U0001e008-\U0001e018\U0001e01b-\U0001e021\U0001e023-\U0001e024\U0001e026-\U0001e02a\U0001e800-\U0001e8c4\U0001e8d0-\U0001e8d6\U0001e900-\U0001e94a\U0001e950-\U0001e959\U0001ee00-\U0001ee03\U0001ee05-\U0001ee1f\U0001ee21-\U0001ee22\U0001ee24\U0001ee27\U0001ee29-\U0001ee32\U0001ee34-\U0001ee37\U0001ee39\U0001ee3b\U0001ee42\U0001ee47\U0001ee49\U0001ee4b\U0001ee4d-\U0001ee4f\U0001ee51-\U0001ee52\U0001ee54\U0001ee57\U0001ee59\U0001ee5b\U0001ee5d\U0001ee5f\U0001ee61-\U0001ee62\U0001ee64\U0001ee67-\U0001ee6a\U0001ee6c-\U0001ee72\U0001ee74-\U0001ee77\U0001ee79-\U0001ee7c\U0001ee7e\U0001ee80-\U0001ee89\U0001ee8b-\U0001ee9b\U0001eea1-\U0001eea3\U0001eea5-\U0001eea9\U0001eeab-\U0001eebb\U00020000-\U0002a6d6\U0002a700-\U0002b734\U0002b740-\U0002b81d\U0002b820-\U0002cea1\U0002ceb0-\U0002ebe0\U0002f800-\U0002fa1d\U000e0100-\U000e01ef' - -xid_start = 
'A-Z_a-z\xaa\xb5\xba\xc0-\xd6\xd8-\xf6\xf8-\u02c1\u02c6-\u02d1\u02e0-\u02e4\u02ec\u02ee\u0370-\u0374\u0376-\u0377\u037b-\u037d\u037f\u0386\u0388-\u038a\u038c\u038e-\u03a1\u03a3-\u03f5\u03f7-\u0481\u048a-\u052f\u0531-\u0556\u0559\u0560-\u0588\u05d0-\u05ea\u05ef-\u05f2\u0620-\u064a\u066e-\u066f\u0671-\u06d3\u06d5\u06e5-\u06e6\u06ee-\u06ef\u06fa-\u06fc\u06ff\u0710\u0712-\u072f\u074d-\u07a5\u07b1\u07ca-\u07ea\u07f4-\u07f5\u07fa\u0800-\u0815\u081a\u0824\u0828\u0840-\u0858\u0860-\u086a\u08a0-\u08b4\u08b6-\u08bd\u0904-\u0939\u093d\u0950\u0958-\u0961\u0971-\u0980\u0985-\u098c\u098f-\u0990\u0993-\u09a8\u09aa-\u09b0\u09b2\u09b6-\u09b9\u09bd\u09ce\u09dc-\u09dd\u09df-\u09e1\u09f0-\u09f1\u09fc\u0a05-\u0a0a\u0a0f-\u0a10\u0a13-\u0a28\u0a2a-\u0a30\u0a32-\u0a33\u0a35-\u0a36\u0a38-\u0a39\u0a59-\u0a5c\u0a5e\u0a72-\u0a74\u0a85-\u0a8d\u0a8f-\u0a91\u0a93-\u0aa8\u0aaa-\u0ab0\u0ab2-\u0ab3\u0ab5-\u0ab9\u0abd\u0ad0\u0ae0-\u0ae1\u0af9\u0b05-\u0b0c\u0b0f-\u0b10\u0b13-\u0b28\u0b2a-\u0b30\u0b32-\u0b33\u0b35-\u0b39\u0b3d\u0b5c-\u0b5d\u0b5f-\u0b61\u0b71\u0b83\u0b85-\u0b8a\u0b8e-\u0b90\u0b92-\u0b95\u0b99-\u0b9a\u0b9c\u0b9e-\u0b9f\u0ba3-\u0ba4\u0ba8-\u0baa\u0bae-\u0bb9\u0bd0\u0c05-\u0c0c\u0c0e-\u0c10\u0c12-\u0c28\u0c2a-\u0c39\u0c3d\u0c58-\u0c5a\u0c60-\u0c61\u0c80\u0c85-\u0c8c\u0c8e-\u0c90\u0c92-\u0ca8\u0caa-\u0cb3\u0cb5-\u0cb9\u0cbd\u0cde\u0ce0-\u0ce1\u0cf1-\u0cf2\u0d05-\u0d0c\u0d0e-\u0d10\u0d12-\u0d3a\u0d3d\u0d4e\u0d54-\u0d56\u0d5f-\u0d61\u0d7a-\u0d7f\u0d85-\u0d96\u0d9a-\u0db1\u0db3-\u0dbb\u0dbd\u0dc0-\u0dc6\u0e01-\u0e30\u0e32\u0e40-\u0e46\u0e81-\u0e82\u0e84\u0e87-\u0e88\u0e8a\u0e8d\u0e94-\u0e97\u0e99-\u0e9f\u0ea1-\u0ea3\u0ea5\u0ea7\u0eaa-\u0eab\u0ead-\u0eb0\u0eb2\u0ebd\u0ec0-\u0ec4\u0ec6\u0edc-\u0edf\u0f00\u0f40-\u0f47\u0f49-\u0f6c\u0f88-\u0f8c\u1000-\u102a\u103f\u1050-\u1055\u105a-\u105d\u1061\u1065-\u1066\u106e-\u1070\u1075-\u1081\u108e\u10a0-\u10c5\u10c7\u10cd\u10d0-\u10fa\u10fc-\u1248\u124a-\u124d\u1250-\u1256\u1258\u125a-\u125d\u1260-\u1288\u128a-\u128d\u1290-\u12b0\u12b2-\u12b5\u12b8-\u12be\u12c0\u12c2-\u12c5\u12c8-\u12d6\u12d8-\u1310\u1312-\u1315\u1318-\u135a\u1380-\u138f\u13a0-\u13f5\u13f8-\u13fd\u1401-\u166c\u166f-\u167f\u1681-\u169a\u16a0-\u16ea\u16ee-\u16f8\u1700-\u170c\u170e-\u1711\u1720-\u1731\u1740-\u1751\u1760-\u176c\u176e-\u1770\u1780-\u17b3\u17d7\u17dc\u1820-\u1878\u1880-\u18a8\u18aa\u18b0-\u18f5\u1900-\u191e\u1950-\u196d\u1970-\u1974\u1980-\u19ab\u19b0-\u19c9\u1a00-\u1a16\u1a20-\u1a54\u1aa7\u1b05-\u1b33\u1b45-\u1b4b\u1b83-\u1ba0\u1bae-\u1baf\u1bba-\u1be5\u1c00-\u1c23\u1c4d-\u1c4f\u1c5a-\u1c7d\u1c80-\u1c88\u1c90-\u1cba\u1cbd-\u1cbf\u1ce9-\u1cec\u1cee-\u1cf1\u1cf5-\u1cf6\u1d00-\u1dbf\u1e00-\u1f15\u1f18-\u1f1d\u1f20-\u1f45\u1f48-\u1f4d\u1f50-\u1f57\u1f59\u1f5b\u1f5d\u1f5f-\u1f7d\u1f80-\u1fb4\u1fb6-\u1fbc\u1fbe\u1fc2-\u1fc4\u1fc6-\u1fcc\u1fd0-\u1fd3\u1fd6-\u1fdb\u1fe0-\u1fec\u1ff2-\u1ff4\u1ff6-\u1ffc\u2071\u207f\u2090-\u209c\u2102\u2107\u210a-\u2113\u2115\u2118-\u211d\u2124\u2126\u2128\u212a-\u2139\u213c-\u213f\u2145-\u2149\u214e\u2160-\u2188\u2c00-\u2c2e\u2c30-\u2c5e\u2c60-\u2ce4\u2ceb-\u2cee\u2cf2-\u2cf3\u2d00-\u2d25\u2d27\u2d2d\u2d30-\u2d67\u2d6f\u2d80-\u2d96\u2da0-\u2da6\u2da8-\u2dae\u2db0-\u2db6\u2db8-\u2dbe\u2dc0-\u2dc6\u2dc8-\u2dce\u2dd0-\u2dd6\u2dd8-\u2dde\u3005-\u3007\u3021-\u3029\u3031-\u3035\u3038-\u303c\u3041-\u3096\u309d-\u309f\u30a1-\u30fa\u30fc-\u30ff\u3105-\u312f\u3131-\u318e\u31a0-\u31ba\u31f0-\u31ff\u3400-\u4db5\u4e00-\u9fef\ua000-\ua48c\ua4d0-\ua4fd\ua500-\ua60c\ua610-\ua61f\ua62a-\ua62b\ua640-\ua66e\ua67f-\ua69d\ua6a0-\ua6ef\ua717-\ua71f\ua722-\ua788\ua78b-\ua7b9\ua7f7-\ua801\ua803-\ua805
\ua807-\ua80a\ua80c-\ua822\ua840-\ua873\ua882-\ua8b3\ua8f2-\ua8f7\ua8fb\ua8fd-\ua8fe\ua90a-\ua925\ua930-\ua946\ua960-\ua97c\ua984-\ua9b2\ua9cf\ua9e0-\ua9e4\ua9e6-\ua9ef\ua9fa-\ua9fe\uaa00-\uaa28\uaa40-\uaa42\uaa44-\uaa4b\uaa60-\uaa76\uaa7a\uaa7e-\uaaaf\uaab1\uaab5-\uaab6\uaab9-\uaabd\uaac0\uaac2\uaadb-\uaadd\uaae0-\uaaea\uaaf2-\uaaf4\uab01-\uab06\uab09-\uab0e\uab11-\uab16\uab20-\uab26\uab28-\uab2e\uab30-\uab5a\uab5c-\uab65\uab70-\uabe2\uac00-\ud7a3\ud7b0-\ud7c6\ud7cb-\ud7fb\uf900-\ufa6d\ufa70-\ufad9\ufb00-\ufb06\ufb13-\ufb17\ufb1d\ufb1f-\ufb28\ufb2a-\ufb36\ufb38-\ufb3c\ufb3e\ufb40-\ufb41\ufb43-\ufb44\ufb46-\ufbb1\ufbd3-\ufc5d\ufc64-\ufd3d\ufd50-\ufd8f\ufd92-\ufdc7\ufdf0-\ufdf9\ufe71\ufe73\ufe77\ufe79\ufe7b\ufe7d\ufe7f-\ufefc\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d\uffa0-\uffbe\uffc2-\uffc7\uffca-\uffcf\uffd2-\uffd7\uffda-\uffdc\U00010000-\U0001000b\U0001000d-\U00010026\U00010028-\U0001003a\U0001003c-\U0001003d\U0001003f-\U0001004d\U00010050-\U0001005d\U00010080-\U000100fa\U00010140-\U00010174\U00010280-\U0001029c\U000102a0-\U000102d0\U00010300-\U0001031f\U0001032d-\U0001034a\U00010350-\U00010375\U00010380-\U0001039d\U000103a0-\U000103c3\U000103c8-\U000103cf\U000103d1-\U000103d5\U00010400-\U0001049d\U000104b0-\U000104d3\U000104d8-\U000104fb\U00010500-\U00010527\U00010530-\U00010563\U00010600-\U00010736\U00010740-\U00010755\U00010760-\U00010767\U00010800-\U00010805\U00010808\U0001080a-\U00010835\U00010837-\U00010838\U0001083c\U0001083f-\U00010855\U00010860-\U00010876\U00010880-\U0001089e\U000108e0-\U000108f2\U000108f4-\U000108f5\U00010900-\U00010915\U00010920-\U00010939\U00010980-\U000109b7\U000109be-\U000109bf\U00010a00\U00010a10-\U00010a13\U00010a15-\U00010a17\U00010a19-\U00010a35\U00010a60-\U00010a7c\U00010a80-\U00010a9c\U00010ac0-\U00010ac7\U00010ac9-\U00010ae4\U00010b00-\U00010b35\U00010b40-\U00010b55\U00010b60-\U00010b72\U00010b80-\U00010b91\U00010c00-\U00010c48\U00010c80-\U00010cb2\U00010cc0-\U00010cf2\U00010d00-\U00010d23\U00010f00-\U00010f1c\U00010f27\U00010f30-\U00010f45\U00011003-\U00011037\U00011083-\U000110af\U000110d0-\U000110e8\U00011103-\U00011126\U00011144\U00011150-\U00011172\U00011176\U00011183-\U000111b2\U000111c1-\U000111c4\U000111da\U000111dc\U00011200-\U00011211\U00011213-\U0001122b\U00011280-\U00011286\U00011288\U0001128a-\U0001128d\U0001128f-\U0001129d\U0001129f-\U000112a8\U000112b0-\U000112de\U00011305-\U0001130c\U0001130f-\U00011310\U00011313-\U00011328\U0001132a-\U00011330\U00011332-\U00011333\U00011335-\U00011339\U0001133d\U00011350\U0001135d-\U00011361\U00011400-\U00011434\U00011447-\U0001144a\U00011480-\U000114af\U000114c4-\U000114c5\U000114c7\U00011580-\U000115ae\U000115d8-\U000115db\U00011600-\U0001162f\U00011644\U00011680-\U000116aa\U00011700-\U0001171a\U00011800-\U0001182b\U000118a0-\U000118df\U000118ff\U00011a00\U00011a0b-\U00011a32\U00011a3a\U00011a50\U00011a5c-\U00011a83\U00011a86-\U00011a89\U00011a9d\U00011ac0-\U00011af8\U00011c00-\U00011c08\U00011c0a-\U00011c2e\U00011c40\U00011c72-\U00011c8f\U00011d00-\U00011d06\U00011d08-\U00011d09\U00011d0b-\U00011d30\U00011d46\U00011d60-\U00011d65\U00011d67-\U00011d68\U00011d6a-\U00011d89\U00011d98\U00011ee0-\U00011ef2\U00012000-\U00012399\U00012400-\U0001246e\U00012480-\U00012543\U00013000-\U0001342e\U00014400-\U00014646\U00016800-\U00016a38\U00016a40-\U00016a5e\U00016ad0-\U00016aed\U00016b00-\U00016b2f\U00016b40-\U00016b43\U00016b63-\U00016b77\U00016b7d-\U00016b8f\U00016e40-\U00016e7f\U00016f00-\U00016f44\U00016f50\U00016f93-\U00016f9f\U00016fe0-\U00016fe1\U00017000-\U000187f1\U00018800-\U00018af2\U0001b000-\U0001b1
1e\U0001b170-\U0001b2fb\U0001bc00-\U0001bc6a\U0001bc70-\U0001bc7c\U0001bc80-\U0001bc88\U0001bc90-\U0001bc99\U0001d400-\U0001d454\U0001d456-\U0001d49c\U0001d49e-\U0001d49f\U0001d4a2\U0001d4a5-\U0001d4a6\U0001d4a9-\U0001d4ac\U0001d4ae-\U0001d4b9\U0001d4bb\U0001d4bd-\U0001d4c3\U0001d4c5-\U0001d505\U0001d507-\U0001d50a\U0001d50d-\U0001d514\U0001d516-\U0001d51c\U0001d51e-\U0001d539\U0001d53b-\U0001d53e\U0001d540-\U0001d544\U0001d546\U0001d54a-\U0001d550\U0001d552-\U0001d6a5\U0001d6a8-\U0001d6c0\U0001d6c2-\U0001d6da\U0001d6dc-\U0001d6fa\U0001d6fc-\U0001d714\U0001d716-\U0001d734\U0001d736-\U0001d74e\U0001d750-\U0001d76e\U0001d770-\U0001d788\U0001d78a-\U0001d7a8\U0001d7aa-\U0001d7c2\U0001d7c4-\U0001d7cb\U0001e800-\U0001e8c4\U0001e900-\U0001e943\U0001ee00-\U0001ee03\U0001ee05-\U0001ee1f\U0001ee21-\U0001ee22\U0001ee24\U0001ee27\U0001ee29-\U0001ee32\U0001ee34-\U0001ee37\U0001ee39\U0001ee3b\U0001ee42\U0001ee47\U0001ee49\U0001ee4b\U0001ee4d-\U0001ee4f\U0001ee51-\U0001ee52\U0001ee54\U0001ee57\U0001ee59\U0001ee5b\U0001ee5d\U0001ee5f\U0001ee61-\U0001ee62\U0001ee64\U0001ee67-\U0001ee6a\U0001ee6c-\U0001ee72\U0001ee74-\U0001ee77\U0001ee79-\U0001ee7c\U0001ee7e\U0001ee80-\U0001ee89\U0001ee8b-\U0001ee9b\U0001eea1-\U0001eea3\U0001eea5-\U0001eea9\U0001eeab-\U0001eebb\U00020000-\U0002a6d6\U0002a700-\U0002b734\U0002b740-\U0002b81d\U0002b820-\U0002cea1\U0002ceb0-\U0002ebe0\U0002f800-\U0002fa1d' - -cats = ['Cc', 'Cf', 'Cn', 'Co', 'Cs', 'Ll', 'Lm', 'Lo', 'Lt', 'Lu', 'Mc', 'Me', 'Mn', 'Nd', 'Nl', 'No', 'Pc', 'Pd', 'Pe', 'Pf', 'Pi', 'Po', 'Ps', 'Sc', 'Sk', 'Sm', 'So', 'Zl', 'Zp', 'Zs'] - -# Generated from unidata 11.0.0 - -def combine(*args): - return ''.join(globals()[cat] for cat in args) - - -def allexcept(*args): - newcats = cats[:] - for arg in args: - newcats.remove(arg) - return ''.join(globals()[cat] for cat in newcats) - - -def _handle_runs(char_list): # pragma: no cover - buf = [] - for c in char_list: - if len(c) == 1: - if buf and buf[-1][1] == chr(ord(c)-1): - buf[-1] = (buf[-1][0], c) - else: - buf.append((c, c)) - else: - buf.append((c, c)) - for a, b in buf: - if a == b: - yield a - else: - yield '%s-%s' % (a, b) - - -if __name__ == '__main__': # pragma: no cover - import unicodedata - - categories = {'xid_start': [], 'xid_continue': []} - - with open(__file__, encoding='utf-8') as fp: - content = fp.read() - - header = content[:content.find('Cc =')] - footer = content[content.find("def combine("):] - - for code in range(0x110000): - c = chr(code) - cat = unicodedata.category(c) - if ord(c) == 0xdc00: - # Hack to avoid combining this combining with the preceding high - # surrogate, 0xdbff, when doing a repr. - c = '\\' + c - elif ord(c) in (0x2d, 0x5b, 0x5c, 0x5d, 0x5e): - # Escape regex metachars. - c = '\\' + c - categories.setdefault(cat, []).append(c) - # XID_START and XID_CONTINUE are special categories used for matching - # identifiers in Python 3. 
- if c.isidentifier(): - categories['xid_start'].append(c) - if ('a' + c).isidentifier(): - categories['xid_continue'].append(c) - - with open(__file__, 'w', encoding='utf-8') as fp: - fp.write(header) - - for cat in sorted(categories): - val = ''.join(_handle_runs(categories[cat])) - fp.write('%s = %a\n\n' % (cat, val)) - - cats = sorted(categories) - cats.remove('xid_start') - cats.remove('xid_continue') - fp.write('cats = %r\n\n' % cats) - - fp.write('# Generated from unidata %s\n\n' % (unicodedata.unidata_version,)) - - fp.write(footer) diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/command/test.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/command/test.py deleted file mode 100644 index 8dde513c9534eb7119aa18f4d4f480a264b239a3..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/command/test.py +++ /dev/null @@ -1,251 +0,0 @@ -import os -import operator -import sys -import contextlib -import itertools -import unittest -from distutils.errors import DistutilsError, DistutilsOptionError -from distutils import log -from unittest import TestLoader - -from pkg_resources import ( - resource_listdir, - resource_exists, - normalize_path, - working_set, - evaluate_marker, - add_activation_listener, - require, -) -from .._importlib import metadata -from setuptools import Command -from setuptools.extern.more_itertools import unique_everseen -from setuptools.extern.jaraco.functools import pass_none - - -class ScanningLoader(TestLoader): - def __init__(self): - TestLoader.__init__(self) - self._visited = set() - - def loadTestsFromModule(self, module, pattern=None): - """Return a suite of all tests cases contained in the given module - - If the module is a package, load tests from all the modules in it. - If the module has an ``additional_tests`` function, call it and add - the return value to the tests. - """ - if module in self._visited: - return None - self._visited.add(module) - - tests = [] - tests.append(TestLoader.loadTestsFromModule(self, module)) - - if hasattr(module, "additional_tests"): - tests.append(module.additional_tests()) - - if hasattr(module, '__path__'): - for file in resource_listdir(module.__name__, ''): - if file.endswith('.py') and file != '__init__.py': - submodule = module.__name__ + '.' + file[:-3] - else: - if resource_exists(module.__name__, file + '/__init__.py'): - submodule = module.__name__ + '.' + file - else: - continue - tests.append(self.loadTestsFromName(submodule)) - - if len(tests) != 1: - return self.suiteClass(tests) - else: - return tests[0] # don't create a nested suite for only one return - - -# adapted from jaraco.classes.properties:NonDataProperty -class NonDataProperty: - def __init__(self, fget): - self.fget = fget - - def __get__(self, obj, objtype=None): - if obj is None: - return self - return self.fget(obj) - - -class test(Command): - """Command to run unit tests after in-place build""" - - description = "run unit tests after in-place build (deprecated)" - - user_options = [ - ('test-module=', 'm', "Run 'test_suite' in specified module"), - ( - 'test-suite=', - 's', - "Run single test, case or suite (e.g. 
'module.test_suite')", - ), - ('test-runner=', 'r', "Test runner to use"), - ] - - def initialize_options(self): - self.test_suite = None - self.test_module = None - self.test_loader = None - self.test_runner = None - - def finalize_options(self): - - if self.test_suite and self.test_module: - msg = "You may specify a module or a suite, but not both" - raise DistutilsOptionError(msg) - - if self.test_suite is None: - if self.test_module is None: - self.test_suite = self.distribution.test_suite - else: - self.test_suite = self.test_module + ".test_suite" - - if self.test_loader is None: - self.test_loader = getattr(self.distribution, 'test_loader', None) - if self.test_loader is None: - self.test_loader = "setuptools.command.test:ScanningLoader" - if self.test_runner is None: - self.test_runner = getattr(self.distribution, 'test_runner', None) - - @NonDataProperty - def test_args(self): - return list(self._test_args()) - - def _test_args(self): - if not self.test_suite: - yield 'discover' - if self.verbose: - yield '--verbose' - if self.test_suite: - yield self.test_suite - - def with_project_on_sys_path(self, func): - """ - Backward compatibility for project_on_sys_path context. - """ - with self.project_on_sys_path(): - func() - - @contextlib.contextmanager - def project_on_sys_path(self, include_dists=[]): - self.run_command('egg_info') - - # Build extensions in-place - self.reinitialize_command('build_ext', inplace=1) - self.run_command('build_ext') - - ei_cmd = self.get_finalized_command("egg_info") - - old_path = sys.path[:] - old_modules = sys.modules.copy() - - try: - project_path = normalize_path(ei_cmd.egg_base) - sys.path.insert(0, project_path) - working_set.__init__() - add_activation_listener(lambda dist: dist.activate()) - require('%s==%s' % (ei_cmd.egg_name, ei_cmd.egg_version)) - with self.paths_on_pythonpath([project_path]): - yield - finally: - sys.path[:] = old_path - sys.modules.clear() - sys.modules.update(old_modules) - working_set.__init__() - - @staticmethod - @contextlib.contextmanager - def paths_on_pythonpath(paths): - """ - Add the indicated paths to the head of the PYTHONPATH environment - variable so that subprocesses will also see the packages at - these paths. - - Do this in a context that restores the value on exit. - """ - nothing = object() - orig_pythonpath = os.environ.get('PYTHONPATH', nothing) - current_pythonpath = os.environ.get('PYTHONPATH', '') - try: - prefix = os.pathsep.join(unique_everseen(paths)) - to_join = filter(None, [prefix, current_pythonpath]) - new_path = os.pathsep.join(to_join) - if new_path: - os.environ['PYTHONPATH'] = new_path - yield - finally: - if orig_pythonpath is nothing: - os.environ.pop('PYTHONPATH', None) - else: - os.environ['PYTHONPATH'] = orig_pythonpath - - @staticmethod - def install_dists(dist): - """ - Install the requirements indicated by self.distribution and - return an iterable of the dists that were built. - """ - ir_d = dist.fetch_build_eggs(dist.install_requires) - tr_d = dist.fetch_build_eggs(dist.tests_require or []) - er_d = dist.fetch_build_eggs( - v - for k, v in dist.extras_require.items() - if k.startswith(':') and evaluate_marker(k[1:]) - ) - return itertools.chain(ir_d, tr_d, er_d) - - def run(self): - self.announce( - "WARNING: Testing via this command is deprecated and will be " - "removed in a future version. 
Users looking for a generic test " - "entry point independent of test runner are encouraged to use " - "tox.", - log.WARN, - ) - - installed_dists = self.install_dists(self.distribution) - - cmd = ' '.join(self._argv) - if self.dry_run: - self.announce('skipping "%s" (dry run)' % cmd) - return - - self.announce('running "%s"' % cmd) - - paths = map(operator.attrgetter('location'), installed_dists) - with self.paths_on_pythonpath(paths): - with self.project_on_sys_path(): - self.run_tests() - - def run_tests(self): - test = unittest.main( - None, - None, - self._argv, - testLoader=self._resolve_as_ep(self.test_loader), - testRunner=self._resolve_as_ep(self.test_runner), - exit=False, - ) - if not test.result.wasSuccessful(): - msg = 'Test failed: %s' % test.result - self.announce(msg, log.ERROR) - raise DistutilsError(msg) - - @property - def _argv(self): - return ['unittest'] + self.test_args - - @staticmethod - @pass_none - def _resolve_as_ep(val): - """ - Load the indicated attribute value, called, as a as if it were - specified as an entry point. - """ - return metadata.EntryPoint(value=val, name=None, group=None).load()() diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/misc/arrayTools.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/misc/arrayTools.py deleted file mode 100644 index 5fb01a838ae8769809b4f8ab28cb69ea5e84a3dc..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/misc/arrayTools.py +++ /dev/null @@ -1,422 +0,0 @@ -"""Routines for calculating bounding boxes, point in rectangle calculations and -so on. -""" - -from fontTools.misc.roundTools import otRound -from fontTools.misc.vector import Vector as _Vector -import math -import warnings - - -def calcBounds(array): - """Calculate the bounding rectangle of a 2D points array. - - Args: - array: A sequence of 2D tuples. - - Returns: - A four-item tuple representing the bounding rectangle ``(xMin, yMin, xMax, yMax)``. - """ - if not array: - return 0, 0, 0, 0 - xs = [x for x, y in array] - ys = [y for x, y in array] - return min(xs), min(ys), max(xs), max(ys) - - -def calcIntBounds(array, round=otRound): - """Calculate the integer bounding rectangle of a 2D points array. - - Values are rounded to closest integer towards ``+Infinity`` using the - :func:`fontTools.misc.fixedTools.otRound` function by default, unless - an optional ``round`` function is passed. - - Args: - array: A sequence of 2D tuples. - round: A rounding function of type ``f(x: float) -> int``. - - Returns: - A four-item tuple of integers representing the bounding rectangle: - ``(xMin, yMin, xMax, yMax)``. - """ - return tuple(round(v) for v in calcBounds(array)) - - -def updateBounds(bounds, p, min=min, max=max): - """Add a point to a bounding rectangle. - - Args: - bounds: A bounding rectangle expressed as a tuple - ``(xMin, yMin, xMax, yMax)``. - p: A 2D tuple representing a point. - min,max: functions to compute the minimum and maximum. - - Returns: - The updated bounding rectangle ``(xMin, yMin, xMax, yMax)``. - """ - (x, y) = p - xMin, yMin, xMax, yMax = bounds - return min(xMin, x), min(yMin, y), max(xMax, x), max(yMax, y) - - -def pointInRect(p, rect): - """Test if a point is inside a bounding rectangle. - - Args: - p: A 2D tuple representing a point. - rect: A bounding rectangle expressed as a tuple - ``(xMin, yMin, xMax, yMax)``. - - Returns: - ``True`` if the point is inside the rectangle, ``False`` otherwise. 
- """ - (x, y) = p - xMin, yMin, xMax, yMax = rect - return (xMin <= x <= xMax) and (yMin <= y <= yMax) - - -def pointsInRect(array, rect): - """Determine which points are inside a bounding rectangle. - - Args: - array: A sequence of 2D tuples. - rect: A bounding rectangle expressed as a tuple - ``(xMin, yMin, xMax, yMax)``. - - Returns: - A list containing the points inside the rectangle. - """ - if len(array) < 1: - return [] - xMin, yMin, xMax, yMax = rect - return [(xMin <= x <= xMax) and (yMin <= y <= yMax) for x, y in array] - - -def vectorLength(vector): - """Calculate the length of the given vector. - - Args: - vector: A 2D tuple. - - Returns: - The Euclidean length of the vector. - """ - x, y = vector - return math.sqrt(x**2 + y**2) - - -def asInt16(array): - """Round a list of floats to 16-bit signed integers. - - Args: - array: List of float values. - - Returns: - A list of rounded integers. - """ - return [int(math.floor(i + 0.5)) for i in array] - - -def normRect(rect): - """Normalize a bounding box rectangle. - - This function "turns the rectangle the right way up", so that the following - holds:: - - xMin <= xMax and yMin <= yMax - - Args: - rect: A bounding rectangle expressed as a tuple - ``(xMin, yMin, xMax, yMax)``. - - Returns: - A normalized bounding rectangle. - """ - (xMin, yMin, xMax, yMax) = rect - return min(xMin, xMax), min(yMin, yMax), max(xMin, xMax), max(yMin, yMax) - - -def scaleRect(rect, x, y): - """Scale a bounding box rectangle. - - Args: - rect: A bounding rectangle expressed as a tuple - ``(xMin, yMin, xMax, yMax)``. - x: Factor to scale the rectangle along the X axis. - Y: Factor to scale the rectangle along the Y axis. - - Returns: - A scaled bounding rectangle. - """ - (xMin, yMin, xMax, yMax) = rect - return xMin * x, yMin * y, xMax * x, yMax * y - - -def offsetRect(rect, dx, dy): - """Offset a bounding box rectangle. - - Args: - rect: A bounding rectangle expressed as a tuple - ``(xMin, yMin, xMax, yMax)``. - dx: Amount to offset the rectangle along the X axis. - dY: Amount to offset the rectangle along the Y axis. - - Returns: - An offset bounding rectangle. - """ - (xMin, yMin, xMax, yMax) = rect - return xMin + dx, yMin + dy, xMax + dx, yMax + dy - - -def insetRect(rect, dx, dy): - """Inset a bounding box rectangle on all sides. - - Args: - rect: A bounding rectangle expressed as a tuple - ``(xMin, yMin, xMax, yMax)``. - dx: Amount to inset the rectangle along the X axis. - dY: Amount to inset the rectangle along the Y axis. - - Returns: - An inset bounding rectangle. - """ - (xMin, yMin, xMax, yMax) = rect - return xMin + dx, yMin + dy, xMax - dx, yMax - dy - - -def sectRect(rect1, rect2): - """Test for rectangle-rectangle intersection. - - Args: - rect1: First bounding rectangle, expressed as tuples - ``(xMin, yMin, xMax, yMax)``. - rect2: Second bounding rectangle. - - Returns: - A boolean and a rectangle. - If the input rectangles intersect, returns ``True`` and the intersecting - rectangle. Returns ``False`` and ``(0, 0, 0, 0)`` if the input - rectangles don't intersect. - """ - (xMin1, yMin1, xMax1, yMax1) = rect1 - (xMin2, yMin2, xMax2, yMax2) = rect2 - xMin, yMin, xMax, yMax = ( - max(xMin1, xMin2), - max(yMin1, yMin2), - min(xMax1, xMax2), - min(yMax1, yMax2), - ) - if xMin >= xMax or yMin >= yMax: - return False, (0, 0, 0, 0) - return True, (xMin, yMin, xMax, yMax) - - -def unionRect(rect1, rect2): - """Determine union of bounding rectangles. 
- - Args: - rect1: First bounding rectangle, expressed as tuples - ``(xMin, yMin, xMax, yMax)``. - rect2: Second bounding rectangle. - - Returns: - The smallest rectangle in which both input rectangles are fully - enclosed. - """ - (xMin1, yMin1, xMax1, yMax1) = rect1 - (xMin2, yMin2, xMax2, yMax2) = rect2 - xMin, yMin, xMax, yMax = ( - min(xMin1, xMin2), - min(yMin1, yMin2), - max(xMax1, xMax2), - max(yMax1, yMax2), - ) - return (xMin, yMin, xMax, yMax) - - -def rectCenter(rect): - """Determine rectangle center. - - Args: - rect: Bounding rectangle, expressed as tuples - ``(xMin, yMin, xMax, yMax)``. - - Returns: - A 2D tuple representing the point at the center of the rectangle. - """ - (xMin, yMin, xMax, yMax) = rect - return (xMin + xMax) / 2, (yMin + yMax) / 2 - - -def rectArea(rect): - """Determine rectangle area. - - Args: - rect: Bounding rectangle, expressed as tuples - ``(xMin, yMin, xMax, yMax)``. - - Returns: - The area of the rectangle. - """ - (xMin, yMin, xMax, yMax) = rect - return (yMax - yMin) * (xMax - xMin) - - -def intRect(rect): - """Round a rectangle to integer values. - - Guarantees that the resulting rectangle is NOT smaller than the original. - - Args: - rect: Bounding rectangle, expressed as tuples - ``(xMin, yMin, xMax, yMax)``. - - Returns: - A rounded bounding rectangle. - """ - (xMin, yMin, xMax, yMax) = rect - xMin = int(math.floor(xMin)) - yMin = int(math.floor(yMin)) - xMax = int(math.ceil(xMax)) - yMax = int(math.ceil(yMax)) - return (xMin, yMin, xMax, yMax) - - -def quantizeRect(rect, factor=1): - """ - >>> bounds = (72.3, -218.4, 1201.3, 919.1) - >>> quantizeRect(bounds) - (72, -219, 1202, 920) - >>> quantizeRect(bounds, factor=10) - (70, -220, 1210, 920) - >>> quantizeRect(bounds, factor=100) - (0, -300, 1300, 1000) - """ - if factor < 1: - raise ValueError(f"Expected quantization factor >= 1, found: {factor!r}") - xMin, yMin, xMax, yMax = normRect(rect) - return ( - int(math.floor(xMin / factor) * factor), - int(math.floor(yMin / factor) * factor), - int(math.ceil(xMax / factor) * factor), - int(math.ceil(yMax / factor) * factor), - ) - - -class Vector(_Vector): - def __init__(self, *args, **kwargs): - warnings.warn( - "fontTools.misc.arrayTools.Vector has been deprecated, please use " - "fontTools.misc.vector.Vector instead.", - DeprecationWarning, - ) - - -def pairwise(iterable, reverse=False): - """Iterate over current and next items in iterable. - - Args: - iterable: An iterable - reverse: If true, iterate in reverse order. - - Returns: - A iterable yielding two elements per iteration. 
- - Example: - - >>> tuple(pairwise([])) - () - >>> tuple(pairwise([], reverse=True)) - () - >>> tuple(pairwise([0])) - ((0, 0),) - >>> tuple(pairwise([0], reverse=True)) - ((0, 0),) - >>> tuple(pairwise([0, 1])) - ((0, 1), (1, 0)) - >>> tuple(pairwise([0, 1], reverse=True)) - ((1, 0), (0, 1)) - >>> tuple(pairwise([0, 1, 2])) - ((0, 1), (1, 2), (2, 0)) - >>> tuple(pairwise([0, 1, 2], reverse=True)) - ((2, 1), (1, 0), (0, 2)) - >>> tuple(pairwise(['a', 'b', 'c', 'd'])) - (('a', 'b'), ('b', 'c'), ('c', 'd'), ('d', 'a')) - >>> tuple(pairwise(['a', 'b', 'c', 'd'], reverse=True)) - (('d', 'c'), ('c', 'b'), ('b', 'a'), ('a', 'd')) - """ - if not iterable: - return - if reverse: - it = reversed(iterable) - else: - it = iter(iterable) - first = next(it, None) - a = first - for b in it: - yield (a, b) - a = b - yield (a, first) - - -def _test(): - """ - >>> import math - >>> calcBounds([]) - (0, 0, 0, 0) - >>> calcBounds([(0, 40), (0, 100), (50, 50), (80, 10)]) - (0, 10, 80, 100) - >>> updateBounds((0, 0, 0, 0), (100, 100)) - (0, 0, 100, 100) - >>> pointInRect((50, 50), (0, 0, 100, 100)) - True - >>> pointInRect((0, 0), (0, 0, 100, 100)) - True - >>> pointInRect((100, 100), (0, 0, 100, 100)) - True - >>> not pointInRect((101, 100), (0, 0, 100, 100)) - True - >>> list(pointsInRect([(50, 50), (0, 0), (100, 100), (101, 100)], (0, 0, 100, 100))) - [True, True, True, False] - >>> vectorLength((3, 4)) - 5.0 - >>> vectorLength((1, 1)) == math.sqrt(2) - True - >>> list(asInt16([0, 0.1, 0.5, 0.9])) - [0, 0, 1, 1] - >>> normRect((0, 10, 100, 200)) - (0, 10, 100, 200) - >>> normRect((100, 200, 0, 10)) - (0, 10, 100, 200) - >>> scaleRect((10, 20, 50, 150), 1.5, 2) - (15.0, 40, 75.0, 300) - >>> offsetRect((10, 20, 30, 40), 5, 6) - (15, 26, 35, 46) - >>> insetRect((10, 20, 50, 60), 5, 10) - (15, 30, 45, 50) - >>> insetRect((10, 20, 50, 60), -5, -10) - (5, 10, 55, 70) - >>> intersects, rect = sectRect((0, 10, 20, 30), (0, 40, 20, 50)) - >>> not intersects - True - >>> intersects, rect = sectRect((0, 10, 20, 30), (5, 20, 35, 50)) - >>> intersects - 1 - >>> rect - (5, 20, 20, 30) - >>> unionRect((0, 10, 20, 30), (0, 40, 20, 50)) - (0, 10, 20, 50) - >>> rectCenter((0, 0, 100, 200)) - (50.0, 100.0) - >>> rectCenter((0, 0, 100, 199.0)) - (50.0, 99.5) - >>> intRect((0.9, 2.9, 3.1, 4.1)) - (0, 2, 4, 5) - """ - - -if __name__ == "__main__": - import sys - import doctest - - sys.exit(doctest.testmod().failed) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/node/dev/files/runtime/hot-api-esm.js b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/node/dev/files/runtime/hot-api-esm.js deleted file mode 100644 index 58527ccdf48d2a36c05db48e99b4afebc7d41de5..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/node/dev/files/runtime/hot-api-esm.js +++ /dev/null @@ -1,7 +0,0 @@ -import { makeApplyHmr } from '../runtime/index.js' - -export const applyHmr = makeApplyHmr(args => - Object.assign({}, args, { - hot: args.m.hot, - }) -) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/markdown_it/rules_core/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/markdown_it/rules_core/__init__.py deleted file mode 100644 index c9c5368c2b694231000626a03594ebad75fe8c71..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/markdown_it/rules_core/__init__.py +++ /dev/null @@ -1,19 +0,0 @@ -__all__ = ( - "StateCore", - 
"normalize", - "block", - "inline", - "replace", - "smartquotes", - "linkify", - "text_join", -) - -from .block import block -from .inline import inline -from .linkify import linkify -from .normalize import normalize -from .replacements import replace -from .smartquotes import smartquotes -from .state_core import StateCore -from .text_join import text_join diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/core/include/numpy/__ufunc_api.c b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/core/include/numpy/__ufunc_api.c deleted file mode 100644 index d1b4a87bb6a0f18a3fcb14542f597921acc0ab2b..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/core/include/numpy/__ufunc_api.c +++ /dev/null @@ -1,50 +0,0 @@ - -/* These pointers will be stored in the C-object for use in other - extension modules -*/ - -void *PyUFunc_API[] = { - (void *) &PyUFunc_Type, - (void *) PyUFunc_FromFuncAndData, - (void *) PyUFunc_RegisterLoopForType, - (void *) PyUFunc_GenericFunction, - (void *) PyUFunc_f_f_As_d_d, - (void *) PyUFunc_d_d, - (void *) PyUFunc_f_f, - (void *) PyUFunc_g_g, - (void *) PyUFunc_F_F_As_D_D, - (void *) PyUFunc_F_F, - (void *) PyUFunc_D_D, - (void *) PyUFunc_G_G, - (void *) PyUFunc_O_O, - (void *) PyUFunc_ff_f_As_dd_d, - (void *) PyUFunc_ff_f, - (void *) PyUFunc_dd_d, - (void *) PyUFunc_gg_g, - (void *) PyUFunc_FF_F_As_DD_D, - (void *) PyUFunc_DD_D, - (void *) PyUFunc_FF_F, - (void *) PyUFunc_GG_G, - (void *) PyUFunc_OO_O, - (void *) PyUFunc_O_O_method, - (void *) PyUFunc_OO_O_method, - (void *) PyUFunc_On_Om, - (void *) PyUFunc_GetPyValues, - (void *) PyUFunc_checkfperr, - (void *) PyUFunc_clearfperr, - (void *) PyUFunc_getfperr, - (void *) PyUFunc_handlefperr, - (void *) PyUFunc_ReplaceLoopBySignature, - (void *) PyUFunc_FromFuncAndDataAndSignature, - (void *) PyUFunc_SetUsesArraysAsData, - (void *) PyUFunc_e_e, - (void *) PyUFunc_e_e_As_f_f, - (void *) PyUFunc_e_e_As_d_d, - (void *) PyUFunc_ee_e, - (void *) PyUFunc_ee_e_As_ff_f, - (void *) PyUFunc_ee_e_As_dd_d, - (void *) PyUFunc_DefaultTypeResolver, - (void *) PyUFunc_ValidateCasting, - (void *) PyUFunc_RegisterLoopForDescr, - (void *) PyUFunc_FromFuncAndDataAndSignatureAndIdentity -}; diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/_config/localization.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/_config/localization.py deleted file mode 100644 index 5c1a0ff1395334a55baa6c5d77a71635872fe824..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/_config/localization.py +++ /dev/null @@ -1,172 +0,0 @@ -""" -Helpers for configuring locale settings. - -Name `localization` is chosen to avoid overlap with builtin `locale` module. -""" -from __future__ import annotations - -from contextlib import contextmanager -import locale -import platform -import re -import subprocess -from typing import TYPE_CHECKING - -from pandas._config.config import options - -if TYPE_CHECKING: - from collections.abc import Generator - - -@contextmanager -def set_locale( - new_locale: str | tuple[str, str], lc_var: int = locale.LC_ALL -) -> Generator[str | tuple[str, str], None, None]: - """ - Context manager for temporarily setting a locale. - - Parameters - ---------- - new_locale : str or tuple - A string of the form .. For example to set - the current locale to US English with a UTF8 encoding, you would pass - "en_US.UTF-8". 
- lc_var : int, default `locale.LC_ALL` - The category of the locale being set. - - Notes - ----- - This is useful when you want to run a particular block of code under a - particular locale, without globally setting the locale. This probably isn't - thread-safe. - """ - # getlocale is not always compliant with setlocale, use setlocale. GH#46595 - current_locale = locale.setlocale(lc_var) - - try: - locale.setlocale(lc_var, new_locale) - normalized_code, normalized_encoding = locale.getlocale() - if normalized_code is not None and normalized_encoding is not None: - yield f"{normalized_code}.{normalized_encoding}" - else: - yield new_locale - finally: - locale.setlocale(lc_var, current_locale) - - -def can_set_locale(lc: str, lc_var: int = locale.LC_ALL) -> bool: - """ - Check to see if we can set a locale, and subsequently get the locale, - without raising an Exception. - - Parameters - ---------- - lc : str - The locale to attempt to set. - lc_var : int, default `locale.LC_ALL` - The category of the locale being set. - - Returns - ------- - bool - Whether the passed locale can be set - """ - try: - with set_locale(lc, lc_var=lc_var): - pass - except (ValueError, locale.Error): - # horrible name for a Exception subclass - return False - else: - return True - - -def _valid_locales(locales: list[str] | str, normalize: bool) -> list[str]: - """ - Return a list of normalized locales that do not throw an ``Exception`` - when set. - - Parameters - ---------- - locales : str - A string where each locale is separated by a newline. - normalize : bool - Whether to call ``locale.normalize`` on each locale. - - Returns - ------- - valid_locales : list - A list of valid locales. - """ - return [ - loc - for loc in ( - locale.normalize(loc.strip()) if normalize else loc.strip() - for loc in locales - ) - if can_set_locale(loc) - ] - - -def get_locales( - prefix: str | None = None, - normalize: bool = True, -) -> list[str]: - """ - Get all the locales that are available on the system. - - Parameters - ---------- - prefix : str - If not ``None`` then return only those locales with the prefix - provided. For example to get all English language locales (those that - start with ``"en"``), pass ``prefix="en"``. - normalize : bool - Call ``locale.normalize`` on the resulting list of available locales. - If ``True``, only locales that can be set without throwing an - ``Exception`` are returned. - - Returns - ------- - locales : list of strings - A list of locale strings that can be set with ``locale.setlocale()``. - For example:: - - locale.setlocale(locale.LC_ALL, locale_string) - - On error will return an empty list (no locale available, e.g. Windows) - - """ - if platform.system() in ("Linux", "Darwin"): - raw_locales = subprocess.check_output(["locale", "-a"]) - else: - # Other platforms e.g. windows platforms don't define "locale -a" - # Note: is_platform_windows causes circular import here - return [] - - try: - # raw_locales is "\n" separated list of locales - # it may contain non-decodable parts, so split - # extract what we can and then rejoin. - split_raw_locales = raw_locales.split(b"\n") - out_locales = [] - for x in split_raw_locales: - try: - out_locales.append(str(x, encoding=options.display.encoding)) - except UnicodeError: - # 'locale -a' is used to populated 'raw_locales' and on - # Redhat 7 Linux (and maybe others) prints locale names - # using windows-1252 encoding. Bug only triggered by - # a few special characters and when there is an - # extensive list of installed locales. 
- out_locales.append(str(x, encoding="windows-1252")) - - except TypeError: - pass - - if prefix is None: - return _valid_locales(out_locales, normalize) - - pattern = re.compile(f"{prefix}.*") - found = pattern.findall("\n".join(out_locales)) - return _valid_locales(found, normalize) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/computation/engines.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/computation/engines.py deleted file mode 100644 index a3a05a9d75c6ed6b80564a69ff5b6cf5a648c1b3..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/computation/engines.py +++ /dev/null @@ -1,143 +0,0 @@ -""" -Engine classes for :func:`~pandas.eval` -""" -from __future__ import annotations - -import abc -from typing import TYPE_CHECKING - -from pandas.errors import NumExprClobberingError - -from pandas.core.computation.align import ( - align_terms, - reconstruct_object, -) -from pandas.core.computation.ops import ( - MATHOPS, - REDUCTIONS, -) - -from pandas.io.formats import printing - -if TYPE_CHECKING: - from pandas.core.computation.expr import Expr - -_ne_builtins = frozenset(MATHOPS + REDUCTIONS) - - -def _check_ne_builtin_clash(expr: Expr) -> None: - """ - Attempt to prevent foot-shooting in a helpful way. - - Parameters - ---------- - expr : Expr - Terms can contain - """ - names = expr.names - overlap = names & _ne_builtins - - if overlap: - s = ", ".join([repr(x) for x in overlap]) - raise NumExprClobberingError( - f'Variables in expression "{expr}" overlap with builtins: ({s})' - ) - - -class AbstractEngine(metaclass=abc.ABCMeta): - """Object serving as a base class for all engines.""" - - has_neg_frac = False - - def __init__(self, expr) -> None: - self.expr = expr - self.aligned_axes = None - self.result_type = None - - def convert(self) -> str: - """ - Convert an expression for evaluation. - - Defaults to return the expression as a string. - """ - return printing.pprint_thing(self.expr) - - def evaluate(self) -> object: - """ - Run the engine on the expression. - - This method performs alignment which is necessary no matter what engine - is being used, thus its implementation is in the base class. - - Returns - ------- - object - The result of the passed expression. - """ - if not self._is_aligned: - self.result_type, self.aligned_axes = align_terms(self.expr.terms) - - # make sure no names in resolvers and locals/globals clash - res = self._evaluate() - return reconstruct_object( - self.result_type, res, self.aligned_axes, self.expr.terms.return_type - ) - - @property - def _is_aligned(self) -> bool: - return self.aligned_axes is not None and self.result_type is not None - - @abc.abstractmethod - def _evaluate(self): - """ - Return an evaluated expression. - - Parameters - ---------- - env : Scope - The local and global environment in which to evaluate an - expression. - - Notes - ----- - Must be implemented by subclasses. - """ - - -class NumExprEngine(AbstractEngine): - """NumExpr engine class""" - - has_neg_frac = True - - def _evaluate(self): - import numexpr as ne - - # convert the expression to a valid numexpr expression - s = self.convert() - - env = self.expr.env - scope = env.full_scope - _check_ne_builtin_clash(self.expr) - return ne.evaluate(s, local_dict=scope) - - -class PythonEngine(AbstractEngine): - """ - Evaluate an expression in Python space. - - Mostly for testing purposes. 
- """ - - has_neg_frac = False - - def evaluate(self): - return self.expr() - - def _evaluate(self) -> None: - pass - - -ENGINES: dict[str, type[AbstractEngine]] = { - "numexpr": NumExprEngine, - "python": PythonEngine, -} diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/arrays/sparse/test_libsparse.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/arrays/sparse/test_libsparse.py deleted file mode 100644 index 7a77a2064e7e097f924a3901994d131d98164ad6..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/arrays/sparse/test_libsparse.py +++ /dev/null @@ -1,551 +0,0 @@ -import operator - -import numpy as np -import pytest - -import pandas._libs.sparse as splib -import pandas.util._test_decorators as td - -from pandas import Series -import pandas._testing as tm -from pandas.core.arrays.sparse import ( - BlockIndex, - IntIndex, - make_sparse_index, -) - - -@pytest.fixture -def test_length(): - return 20 - - -@pytest.fixture( - params=[ - [ - [0, 7, 15], - [3, 5, 5], - [2, 9, 14], - [2, 3, 5], - [2, 9, 15], - [1, 3, 4], - ], - [ - [0, 5], - [4, 4], - [1], - [4], - [1], - [3], - ], - [ - [0], - [10], - [0, 5], - [3, 7], - [0, 5], - [3, 5], - ], - [ - [10], - [5], - [0, 12], - [5, 3], - [12], - [3], - ], - [ - [0, 10], - [4, 6], - [5, 17], - [4, 2], - [], - [], - ], - [ - [0], - [5], - [], - [], - [], - [], - ], - ], - ids=[ - "plain_case", - "delete_blocks", - "split_blocks", - "skip_block", - "no_intersect", - "one_empty", - ], -) -def cases(request): - return request.param - - -class TestSparseIndexUnion: - @pytest.mark.parametrize( - "xloc, xlen, yloc, ylen, eloc, elen", - [ - [[0], [5], [5], [4], [0], [9]], - [[0, 10], [5, 5], [2, 17], [5, 2], [0, 10, 17], [7, 5, 2]], - [[1], [5], [3], [5], [1], [7]], - [[2, 10], [4, 4], [4], [8], [2], [12]], - [[0, 5], [3, 5], [0], [7], [0], [10]], - [[2, 10], [4, 4], [4, 13], [8, 4], [2], [15]], - [[2], [15], [4, 9, 14], [3, 2, 2], [2], [15]], - [[0, 10], [3, 3], [5, 15], [2, 2], [0, 5, 10, 15], [3, 2, 3, 2]], - ], - ) - def test_index_make_union(self, xloc, xlen, yloc, ylen, eloc, elen, test_length): - # Case 1 - # x: ---- - # y: ---- - # r: -------- - # Case 2 - # x: ----- ----- - # y: ----- -- - # Case 3 - # x: ------ - # y: ------- - # r: ---------- - # Case 4 - # x: ------ ----- - # y: ------- - # r: ------------- - # Case 5 - # x: --- ----- - # y: ------- - # r: ------------- - # Case 6 - # x: ------ ----- - # y: ------- --- - # r: ------------- - # Case 7 - # x: ---------------------- - # y: ---- ---- --- - # r: ---------------------- - # Case 8 - # x: ---- --- - # y: --- --- - xindex = BlockIndex(test_length, xloc, xlen) - yindex = BlockIndex(test_length, yloc, ylen) - bresult = xindex.make_union(yindex) - assert isinstance(bresult, BlockIndex) - tm.assert_numpy_array_equal(bresult.blocs, np.array(eloc, dtype=np.int32)) - tm.assert_numpy_array_equal(bresult.blengths, np.array(elen, dtype=np.int32)) - - ixindex = xindex.to_int_index() - iyindex = yindex.to_int_index() - iresult = ixindex.make_union(iyindex) - assert isinstance(iresult, IntIndex) - tm.assert_numpy_array_equal(iresult.indices, bresult.to_int_index().indices) - - def test_int_index_make_union(self): - a = IntIndex(5, np.array([0, 3, 4], dtype=np.int32)) - b = IntIndex(5, np.array([0, 2], dtype=np.int32)) - res = a.make_union(b) - exp = IntIndex(5, np.array([0, 2, 3, 4], np.int32)) - assert res.equals(exp) - - a = IntIndex(5, np.array([], 
dtype=np.int32)) - b = IntIndex(5, np.array([0, 2], dtype=np.int32)) - res = a.make_union(b) - exp = IntIndex(5, np.array([0, 2], np.int32)) - assert res.equals(exp) - - a = IntIndex(5, np.array([], dtype=np.int32)) - b = IntIndex(5, np.array([], dtype=np.int32)) - res = a.make_union(b) - exp = IntIndex(5, np.array([], np.int32)) - assert res.equals(exp) - - a = IntIndex(5, np.array([0, 1, 2, 3, 4], dtype=np.int32)) - b = IntIndex(5, np.array([0, 1, 2, 3, 4], dtype=np.int32)) - res = a.make_union(b) - exp = IntIndex(5, np.array([0, 1, 2, 3, 4], np.int32)) - assert res.equals(exp) - - a = IntIndex(5, np.array([0, 1], dtype=np.int32)) - b = IntIndex(4, np.array([0, 1], dtype=np.int32)) - - msg = "Indices must reference same underlying length" - with pytest.raises(ValueError, match=msg): - a.make_union(b) - - -class TestSparseIndexIntersect: - @td.skip_if_windows - def test_intersect(self, cases, test_length): - xloc, xlen, yloc, ylen, eloc, elen = cases - xindex = BlockIndex(test_length, xloc, xlen) - yindex = BlockIndex(test_length, yloc, ylen) - expected = BlockIndex(test_length, eloc, elen) - longer_index = BlockIndex(test_length + 1, yloc, ylen) - - result = xindex.intersect(yindex) - assert result.equals(expected) - result = xindex.to_int_index().intersect(yindex.to_int_index()) - assert result.equals(expected.to_int_index()) - - msg = "Indices must reference same underlying length" - with pytest.raises(Exception, match=msg): - xindex.intersect(longer_index) - with pytest.raises(Exception, match=msg): - xindex.to_int_index().intersect(longer_index.to_int_index()) - - def test_intersect_empty(self): - xindex = IntIndex(4, np.array([], dtype=np.int32)) - yindex = IntIndex(4, np.array([2, 3], dtype=np.int32)) - assert xindex.intersect(yindex).equals(xindex) - assert yindex.intersect(xindex).equals(xindex) - - xindex = xindex.to_block_index() - yindex = yindex.to_block_index() - assert xindex.intersect(yindex).equals(xindex) - assert yindex.intersect(xindex).equals(xindex) - - @pytest.mark.parametrize( - "case", - [ - # Argument 2 to "IntIndex" has incompatible type "ndarray[Any, - # dtype[signedinteger[_32Bit]]]"; expected "Sequence[int]" - IntIndex(5, np.array([1, 2], dtype=np.int32)), # type: ignore[arg-type] - IntIndex(5, np.array([0, 2, 4], dtype=np.int32)), # type: ignore[arg-type] - IntIndex(0, np.array([], dtype=np.int32)), # type: ignore[arg-type] - IntIndex(5, np.array([], dtype=np.int32)), # type: ignore[arg-type] - ], - ) - def test_intersect_identical(self, case): - assert case.intersect(case).equals(case) - case = case.to_block_index() - assert case.intersect(case).equals(case) - - -class TestSparseIndexCommon: - def test_int_internal(self): - idx = make_sparse_index(4, np.array([2, 3], dtype=np.int32), kind="integer") - assert isinstance(idx, IntIndex) - assert idx.npoints == 2 - tm.assert_numpy_array_equal(idx.indices, np.array([2, 3], dtype=np.int32)) - - idx = make_sparse_index(4, np.array([], dtype=np.int32), kind="integer") - assert isinstance(idx, IntIndex) - assert idx.npoints == 0 - tm.assert_numpy_array_equal(idx.indices, np.array([], dtype=np.int32)) - - idx = make_sparse_index( - 4, np.array([0, 1, 2, 3], dtype=np.int32), kind="integer" - ) - assert isinstance(idx, IntIndex) - assert idx.npoints == 4 - tm.assert_numpy_array_equal(idx.indices, np.array([0, 1, 2, 3], dtype=np.int32)) - - def test_block_internal(self): - idx = make_sparse_index(4, np.array([2, 3], dtype=np.int32), kind="block") - assert isinstance(idx, BlockIndex) - assert idx.npoints == 2 - 
tm.assert_numpy_array_equal(idx.blocs, np.array([2], dtype=np.int32)) - tm.assert_numpy_array_equal(idx.blengths, np.array([2], dtype=np.int32)) - - idx = make_sparse_index(4, np.array([], dtype=np.int32), kind="block") - assert isinstance(idx, BlockIndex) - assert idx.npoints == 0 - tm.assert_numpy_array_equal(idx.blocs, np.array([], dtype=np.int32)) - tm.assert_numpy_array_equal(idx.blengths, np.array([], dtype=np.int32)) - - idx = make_sparse_index(4, np.array([0, 1, 2, 3], dtype=np.int32), kind="block") - assert isinstance(idx, BlockIndex) - assert idx.npoints == 4 - tm.assert_numpy_array_equal(idx.blocs, np.array([0], dtype=np.int32)) - tm.assert_numpy_array_equal(idx.blengths, np.array([4], dtype=np.int32)) - - idx = make_sparse_index(4, np.array([0, 2, 3], dtype=np.int32), kind="block") - assert isinstance(idx, BlockIndex) - assert idx.npoints == 3 - tm.assert_numpy_array_equal(idx.blocs, np.array([0, 2], dtype=np.int32)) - tm.assert_numpy_array_equal(idx.blengths, np.array([1, 2], dtype=np.int32)) - - @pytest.mark.parametrize("kind", ["integer", "block"]) - def test_lookup(self, kind): - idx = make_sparse_index(4, np.array([2, 3], dtype=np.int32), kind=kind) - assert idx.lookup(-1) == -1 - assert idx.lookup(0) == -1 - assert idx.lookup(1) == -1 - assert idx.lookup(2) == 0 - assert idx.lookup(3) == 1 - assert idx.lookup(4) == -1 - - idx = make_sparse_index(4, np.array([], dtype=np.int32), kind=kind) - - for i in range(-1, 5): - assert idx.lookup(i) == -1 - - idx = make_sparse_index(4, np.array([0, 1, 2, 3], dtype=np.int32), kind=kind) - assert idx.lookup(-1) == -1 - assert idx.lookup(0) == 0 - assert idx.lookup(1) == 1 - assert idx.lookup(2) == 2 - assert idx.lookup(3) == 3 - assert idx.lookup(4) == -1 - - idx = make_sparse_index(4, np.array([0, 2, 3], dtype=np.int32), kind=kind) - assert idx.lookup(-1) == -1 - assert idx.lookup(0) == 0 - assert idx.lookup(1) == -1 - assert idx.lookup(2) == 1 - assert idx.lookup(3) == 2 - assert idx.lookup(4) == -1 - - @pytest.mark.parametrize("kind", ["integer", "block"]) - def test_lookup_array(self, kind): - idx = make_sparse_index(4, np.array([2, 3], dtype=np.int32), kind=kind) - - res = idx.lookup_array(np.array([-1, 0, 2], dtype=np.int32)) - exp = np.array([-1, -1, 0], dtype=np.int32) - tm.assert_numpy_array_equal(res, exp) - - res = idx.lookup_array(np.array([4, 2, 1, 3], dtype=np.int32)) - exp = np.array([-1, 0, -1, 1], dtype=np.int32) - tm.assert_numpy_array_equal(res, exp) - - idx = make_sparse_index(4, np.array([], dtype=np.int32), kind=kind) - res = idx.lookup_array(np.array([-1, 0, 2, 4], dtype=np.int32)) - exp = np.array([-1, -1, -1, -1], dtype=np.int32) - tm.assert_numpy_array_equal(res, exp) - - idx = make_sparse_index(4, np.array([0, 1, 2, 3], dtype=np.int32), kind=kind) - res = idx.lookup_array(np.array([-1, 0, 2], dtype=np.int32)) - exp = np.array([-1, 0, 2], dtype=np.int32) - tm.assert_numpy_array_equal(res, exp) - - res = idx.lookup_array(np.array([4, 2, 1, 3], dtype=np.int32)) - exp = np.array([-1, 2, 1, 3], dtype=np.int32) - tm.assert_numpy_array_equal(res, exp) - - idx = make_sparse_index(4, np.array([0, 2, 3], dtype=np.int32), kind=kind) - res = idx.lookup_array(np.array([2, 1, 3, 0], dtype=np.int32)) - exp = np.array([1, -1, 2, 0], dtype=np.int32) - tm.assert_numpy_array_equal(res, exp) - - res = idx.lookup_array(np.array([1, 4, 2, 5], dtype=np.int32)) - exp = np.array([-1, -1, 1, -1], dtype=np.int32) - tm.assert_numpy_array_equal(res, exp) - - @pytest.mark.parametrize( - "idx, expected", - [ - [0, -1], - [5, 0], - [7, 2], 
- [8, -1], - [9, -1], - [10, -1], - [11, -1], - [12, 3], - [17, 8], - [18, -1], - ], - ) - def test_lookup_basics(self, idx, expected): - bindex = BlockIndex(20, [5, 12], [3, 6]) - assert bindex.lookup(idx) == expected - - iindex = bindex.to_int_index() - assert iindex.lookup(idx) == expected - - -class TestBlockIndex: - def test_block_internal(self): - idx = make_sparse_index(4, np.array([2, 3], dtype=np.int32), kind="block") - assert isinstance(idx, BlockIndex) - assert idx.npoints == 2 - tm.assert_numpy_array_equal(idx.blocs, np.array([2], dtype=np.int32)) - tm.assert_numpy_array_equal(idx.blengths, np.array([2], dtype=np.int32)) - - idx = make_sparse_index(4, np.array([], dtype=np.int32), kind="block") - assert isinstance(idx, BlockIndex) - assert idx.npoints == 0 - tm.assert_numpy_array_equal(idx.blocs, np.array([], dtype=np.int32)) - tm.assert_numpy_array_equal(idx.blengths, np.array([], dtype=np.int32)) - - idx = make_sparse_index(4, np.array([0, 1, 2, 3], dtype=np.int32), kind="block") - assert isinstance(idx, BlockIndex) - assert idx.npoints == 4 - tm.assert_numpy_array_equal(idx.blocs, np.array([0], dtype=np.int32)) - tm.assert_numpy_array_equal(idx.blengths, np.array([4], dtype=np.int32)) - - idx = make_sparse_index(4, np.array([0, 2, 3], dtype=np.int32), kind="block") - assert isinstance(idx, BlockIndex) - assert idx.npoints == 3 - tm.assert_numpy_array_equal(idx.blocs, np.array([0, 2], dtype=np.int32)) - tm.assert_numpy_array_equal(idx.blengths, np.array([1, 2], dtype=np.int32)) - - @pytest.mark.parametrize("i", [5, 10, 100, 101]) - def test_make_block_boundary(self, i): - idx = make_sparse_index(i, np.arange(0, i, 2, dtype=np.int32), kind="block") - - exp = np.arange(0, i, 2, dtype=np.int32) - tm.assert_numpy_array_equal(idx.blocs, exp) - tm.assert_numpy_array_equal(idx.blengths, np.ones(len(exp), dtype=np.int32)) - - def test_equals(self): - index = BlockIndex(10, [0, 4], [2, 5]) - - assert index.equals(index) - assert not index.equals(BlockIndex(10, [0, 4], [2, 6])) - - def test_check_integrity(self): - locs = [] - lengths = [] - - # 0-length OK - BlockIndex(0, locs, lengths) - - # also OK even though empty - BlockIndex(1, locs, lengths) - - msg = "Block 0 extends beyond end" - with pytest.raises(ValueError, match=msg): - BlockIndex(10, [5], [10]) - - msg = "Block 0 overlaps" - with pytest.raises(ValueError, match=msg): - BlockIndex(10, [2, 5], [5, 3]) - - def test_to_int_index(self): - locs = [0, 10] - lengths = [4, 6] - exp_inds = [0, 1, 2, 3, 10, 11, 12, 13, 14, 15] - - block = BlockIndex(20, locs, lengths) - dense = block.to_int_index() - - tm.assert_numpy_array_equal(dense.indices, np.array(exp_inds, dtype=np.int32)) - - def test_to_block_index(self): - index = BlockIndex(10, [0, 5], [4, 5]) - assert index.to_block_index() is index - - -class TestIntIndex: - def test_check_integrity(self): - # Too many indices than specified in self.length - msg = "Too many indices" - - with pytest.raises(ValueError, match=msg): - IntIndex(length=1, indices=[1, 2, 3]) - - # No index can be negative. - msg = "No index can be less than zero" - - with pytest.raises(ValueError, match=msg): - IntIndex(length=5, indices=[1, -2, 3]) - - # No index can be negative. - msg = "No index can be less than zero" - - with pytest.raises(ValueError, match=msg): - IntIndex(length=5, indices=[1, -2, 3]) - - # All indices must be less than the length. 
- msg = "All indices must be less than the length" - - with pytest.raises(ValueError, match=msg): - IntIndex(length=5, indices=[1, 2, 5]) - - with pytest.raises(ValueError, match=msg): - IntIndex(length=5, indices=[1, 2, 6]) - - # Indices must be strictly ascending. - msg = "Indices must be strictly increasing" - - with pytest.raises(ValueError, match=msg): - IntIndex(length=5, indices=[1, 3, 2]) - - with pytest.raises(ValueError, match=msg): - IntIndex(length=5, indices=[1, 3, 3]) - - def test_int_internal(self): - idx = make_sparse_index(4, np.array([2, 3], dtype=np.int32), kind="integer") - assert isinstance(idx, IntIndex) - assert idx.npoints == 2 - tm.assert_numpy_array_equal(idx.indices, np.array([2, 3], dtype=np.int32)) - - idx = make_sparse_index(4, np.array([], dtype=np.int32), kind="integer") - assert isinstance(idx, IntIndex) - assert idx.npoints == 0 - tm.assert_numpy_array_equal(idx.indices, np.array([], dtype=np.int32)) - - idx = make_sparse_index( - 4, np.array([0, 1, 2, 3], dtype=np.int32), kind="integer" - ) - assert isinstance(idx, IntIndex) - assert idx.npoints == 4 - tm.assert_numpy_array_equal(idx.indices, np.array([0, 1, 2, 3], dtype=np.int32)) - - def test_equals(self): - index = IntIndex(10, [0, 1, 2, 3, 4]) - assert index.equals(index) - assert not index.equals(IntIndex(10, [0, 1, 2, 3])) - - def test_to_block_index(self, cases, test_length): - xloc, xlen, yloc, ylen, _, _ = cases - xindex = BlockIndex(test_length, xloc, xlen) - yindex = BlockIndex(test_length, yloc, ylen) - - # see if survive the round trip - xbindex = xindex.to_int_index().to_block_index() - ybindex = yindex.to_int_index().to_block_index() - assert isinstance(xbindex, BlockIndex) - assert xbindex.equals(xindex) - assert ybindex.equals(yindex) - - def test_to_int_index(self): - index = IntIndex(10, [2, 3, 4, 5, 6]) - assert index.to_int_index() is index - - -class TestSparseOperators: - @pytest.mark.parametrize("opname", ["add", "sub", "mul", "truediv", "floordiv"]) - def test_op(self, opname, cases, test_length): - xloc, xlen, yloc, ylen, _, _ = cases - sparse_op = getattr(splib, f"sparse_{opname}_float64") - python_op = getattr(operator, opname) - - xindex = BlockIndex(test_length, xloc, xlen) - yindex = BlockIndex(test_length, yloc, ylen) - - xdindex = xindex.to_int_index() - ydindex = yindex.to_int_index() - - x = np.arange(xindex.npoints) * 10.0 + 1 - y = np.arange(yindex.npoints) * 100.0 + 1 - - xfill = 0 - yfill = 2 - - result_block_vals, rb_index, bfill = sparse_op( - x, xindex, xfill, y, yindex, yfill - ) - result_int_vals, ri_index, ifill = sparse_op( - x, xdindex, xfill, y, ydindex, yfill - ) - - assert rb_index.to_int_index().equals(ri_index) - tm.assert_numpy_array_equal(result_block_vals, result_int_vals) - assert bfill == ifill - - # check versus Series... 
- xseries = Series(x, xdindex.indices) - xseries = xseries.reindex(np.arange(test_length)).fillna(xfill) - - yseries = Series(y, ydindex.indices) - yseries = yseries.reindex(np.arange(test_length)).fillna(yfill) - - series_result = python_op(xseries, yseries) - series_result = series_result.reindex(ri_index.indices) - - tm.assert_numpy_array_equal(result_block_vals, series_result.values) - tm.assert_numpy_array_equal(result_int_vals, series_result.values) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/io/parser/test_converters.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/io/parser/test_converters.py deleted file mode 100644 index 85f3db0398080fe6689b3a09effd39b787394245..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/io/parser/test_converters.py +++ /dev/null @@ -1,203 +0,0 @@ -""" -Tests column conversion functionality during parsing -for all of the parsers defined in parsers.py -""" -from io import StringIO - -from dateutil.parser import parse -import numpy as np -import pytest - -import pandas as pd -from pandas import ( - DataFrame, - Index, -) -import pandas._testing as tm - -pytestmark = pytest.mark.usefixtures("pyarrow_skip") - - -def test_converters_type_must_be_dict(all_parsers): - parser = all_parsers - data = """index,A,B,C,D -foo,2,3,4,5 -""" - - with pytest.raises(TypeError, match="Type converters.+"): - parser.read_csv(StringIO(data), converters=0) - - -@pytest.mark.parametrize("column", [3, "D"]) -@pytest.mark.parametrize( - "converter", [parse, lambda x: int(x.split("/")[2])] # Produce integer. -) -def test_converters(all_parsers, column, converter): - parser = all_parsers - data = """A,B,C,D -a,1,2,01/01/2009 -b,3,4,01/02/2009 -c,4,5,01/03/2009 -""" - result = parser.read_csv(StringIO(data), converters={column: converter}) - - expected = parser.read_csv(StringIO(data)) - expected["D"] = expected["D"].map(converter) - - tm.assert_frame_equal(result, expected) - - -def test_converters_no_implicit_conv(all_parsers): - # see gh-2184 - parser = all_parsers - data = """000102,1.2,A\n001245,2,B""" - - converters = {0: lambda x: x.strip()} - result = parser.read_csv(StringIO(data), header=None, converters=converters) - - # Column 0 should not be casted to numeric and should remain as object. - expected = DataFrame([["000102", 1.2, "A"], ["001245", 2, "B"]]) - tm.assert_frame_equal(result, expected) - - -def test_converters_euro_decimal_format(all_parsers): - # see gh-583 - converters = {} - parser = all_parsers - - data = """Id;Number1;Number2;Text1;Text2;Number3 -1;1521,1541;187101,9543;ABC;poi;4,7387 -2;121,12;14897,76;DEF;uyt;0,3773 -3;878,158;108013,434;GHI;rez;2,7356""" - converters["Number1"] = converters["Number2"] = converters[ - "Number3" - ] = lambda x: float(x.replace(",", ".")) - - result = parser.read_csv(StringIO(data), sep=";", converters=converters) - expected = DataFrame( - [ - [1, 1521.1541, 187101.9543, "ABC", "poi", 4.7387], - [2, 121.12, 14897.76, "DEF", "uyt", 0.3773], - [3, 878.158, 108013.434, "GHI", "rez", 2.7356], - ], - columns=["Id", "Number1", "Number2", "Text1", "Text2", "Number3"], - ) - tm.assert_frame_equal(result, expected) - - -def test_converters_corner_with_nans(all_parsers): - parser = all_parsers - data = """id,score,days -1,2,12 -2,2-5, -3,,14+ -4,6-12,2""" - - # Example converters. 
- def convert_days(x): - x = x.strip() - - if not x: - return np.nan - - is_plus = x.endswith("+") - - if is_plus: - x = int(x[:-1]) + 1 - else: - x = int(x) - - return x - - def convert_days_sentinel(x): - x = x.strip() - - if not x: - return np.nan - - is_plus = x.endswith("+") - - if is_plus: - x = int(x[:-1]) + 1 - else: - x = int(x) - - return x - - def convert_score(x): - x = x.strip() - - if not x: - return np.nan - - if x.find("-") > 0: - val_min, val_max = map(int, x.split("-")) - val = 0.5 * (val_min + val_max) - else: - val = float(x) - - return val - - results = [] - - for day_converter in [convert_days, convert_days_sentinel]: - result = parser.read_csv( - StringIO(data), - converters={"score": convert_score, "days": day_converter}, - na_values=["", None], - ) - assert pd.isna(result["days"][1]) - results.append(result) - - tm.assert_frame_equal(results[0], results[1]) - - -@pytest.mark.parametrize("conv_f", [lambda x: x, str]) -def test_converter_index_col_bug(all_parsers, conv_f): - # see gh-1835 , GH#40589 - parser = all_parsers - data = "A;B\n1;2\n3;4" - - rs = parser.read_csv( - StringIO(data), sep=";", index_col="A", converters={"A": conv_f} - ) - - xp = DataFrame({"B": [2, 4]}, index=Index(["1", "3"], name="A", dtype="object")) - tm.assert_frame_equal(rs, xp) - - -def test_converter_identity_object(all_parsers): - # GH#40589 - parser = all_parsers - data = "A,B\n1,2\n3,4" - - rs = parser.read_csv(StringIO(data), converters={"A": lambda x: x}) - - xp = DataFrame({"A": ["1", "3"], "B": [2, 4]}) - tm.assert_frame_equal(rs, xp) - - -def test_converter_multi_index(all_parsers): - # GH 42446 - parser = all_parsers - data = "A,B,B\nX,Y,Z\n1,2,3" - - result = parser.read_csv( - StringIO(data), - header=list(range(2)), - converters={ - ("A", "X"): np.int32, - ("B", "Y"): np.int32, - ("B", "Z"): np.float32, - }, - ) - - expected = DataFrame( - { - ("A", "X"): np.int32([1]), - ("B", "Y"): np.int32([2]), - ("B", "Z"): np.float32([3]), - } - ) - - tm.assert_frame_equal(result, expected) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/series/indexing/test_delitem.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/series/indexing/test_delitem.py deleted file mode 100644 index af6b3910baec036424ad84e796e4e03952524421..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/series/indexing/test_delitem.py +++ /dev/null @@ -1,73 +0,0 @@ -import pytest - -from pandas import ( - Index, - Series, - date_range, -) -import pandas._testing as tm - - -class TestSeriesDelItem: - def test_delitem(self): - # GH#5542 - # should delete the item inplace - s = Series(range(5)) - del s[0] - - expected = Series(range(1, 5), index=range(1, 5)) - tm.assert_series_equal(s, expected) - - del s[1] - expected = Series(range(2, 5), index=range(2, 5)) - tm.assert_series_equal(s, expected) - - # only 1 left, del, add, del - s = Series(1) - del s[0] - tm.assert_series_equal(s, Series(dtype="int64", index=Index([], dtype="int64"))) - s[0] = 1 - tm.assert_series_equal(s, Series(1)) - del s[0] - tm.assert_series_equal(s, Series(dtype="int64", index=Index([], dtype="int64"))) - - def test_delitem_object_index(self): - # Index(dtype=object) - s = Series(1, index=["a"]) - del s["a"] - tm.assert_series_equal( - s, Series(dtype="int64", index=Index([], dtype="object")) - ) - s["a"] = 1 - tm.assert_series_equal(s, Series(1, index=["a"])) - del s["a"] - tm.assert_series_equal( - s, 
Series(dtype="int64", index=Index([], dtype="object")) - ) - - def test_delitem_missing_key(self): - # empty - s = Series(dtype=object) - - with pytest.raises(KeyError, match=r"^0$"): - del s[0] - - def test_delitem_extension_dtype(self): - # GH#40386 - # DatetimeTZDtype - dti = date_range("2016-01-01", periods=3, tz="US/Pacific") - ser = Series(dti) - - expected = ser[[0, 2]] - del ser[1] - assert ser.dtype == dti.dtype - tm.assert_series_equal(ser, expected) - - # PeriodDtype - pi = dti.tz_localize(None).to_period("D") - ser = Series(pi) - - expected = ser[:2] - del ser[2] - assert ser.dtype == pi.dtype - tm.assert_series_equal(ser, expected) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/_distutils/command/build_clib.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/_distutils/command/build_clib.py deleted file mode 100644 index 3e20ef23cd81e0d4fb54898f2b642882d3d0d5ef..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/_distutils/command/build_clib.py +++ /dev/null @@ -1,209 +0,0 @@ -"""distutils.command.build_clib - -Implements the Distutils 'build_clib' command, to build a C/C++ library -that is included in the module distribution and needed by an extension -module.""" - - -# XXX this module has *lots* of code ripped-off quite transparently from -# build_ext.py -- not surprisingly really, as the work required to build -# a static library from a collection of C source files is not really all -# that different from what's required to build a shared object file from -# a collection of C source files. Nevertheless, I haven't done the -# necessary refactoring to account for the overlap in code between the -# two modules, mainly because a number of subtle details changed in the -# cut 'n paste. Sigh. - -import os -from distutils.core import Command -from distutils.errors import * -from distutils.sysconfig import customize_compiler -from distutils import log - -def show_compilers(): - from distutils.ccompiler import show_compilers - show_compilers() - - -class build_clib(Command): - - description = "build C/C++ libraries used by Python extensions" - - user_options = [ - ('build-clib=', 'b', - "directory to build C/C++ libraries to"), - ('build-temp=', 't', - "directory to put temporary build by-products"), - ('debug', 'g', - "compile with debugging information"), - ('force', 'f', - "forcibly build everything (ignore file timestamps)"), - ('compiler=', 'c', - "specify the compiler type"), - ] - - boolean_options = ['debug', 'force'] - - help_options = [ - ('help-compiler', None, - "list available compilers", show_compilers), - ] - - def initialize_options(self): - self.build_clib = None - self.build_temp = None - - # List of libraries to build - self.libraries = None - - # Compilation options for all libraries - self.include_dirs = None - self.define = None - self.undef = None - self.debug = None - self.force = 0 - self.compiler = None - - - def finalize_options(self): - # This might be confusing: both build-clib and build-temp default - # to build-temp as defined by the "build" command. This is because - # I think that C libraries are really just temporary build - # by-products, at least from the point of view of building Python - # extensions -- but I want to keep my options open. 
- self.set_undefined_options('build', - ('build_temp', 'build_clib'), - ('build_temp', 'build_temp'), - ('compiler', 'compiler'), - ('debug', 'debug'), - ('force', 'force')) - - self.libraries = self.distribution.libraries - if self.libraries: - self.check_library_list(self.libraries) - - if self.include_dirs is None: - self.include_dirs = self.distribution.include_dirs or [] - if isinstance(self.include_dirs, str): - self.include_dirs = self.include_dirs.split(os.pathsep) - - # XXX same as for build_ext -- what about 'self.define' and - # 'self.undef' ? - - - def run(self): - if not self.libraries: - return - - # Yech -- this is cut 'n pasted from build_ext.py! - from distutils.ccompiler import new_compiler - self.compiler = new_compiler(compiler=self.compiler, - dry_run=self.dry_run, - force=self.force) - customize_compiler(self.compiler) - - if self.include_dirs is not None: - self.compiler.set_include_dirs(self.include_dirs) - if self.define is not None: - # 'define' option is a list of (name,value) tuples - for (name,value) in self.define: - self.compiler.define_macro(name, value) - if self.undef is not None: - for macro in self.undef: - self.compiler.undefine_macro(macro) - - self.build_libraries(self.libraries) - - - def check_library_list(self, libraries): - """Ensure that the list of libraries is valid. - - `library` is presumably provided as a command option 'libraries'. - This method checks that it is a list of 2-tuples, where the tuples - are (library_name, build_info_dict). - - Raise DistutilsSetupError if the structure is invalid anywhere; - just returns otherwise. - """ - if not isinstance(libraries, list): - raise DistutilsSetupError( - "'libraries' option must be a list of tuples") - - for lib in libraries: - if not isinstance(lib, tuple) and len(lib) != 2: - raise DistutilsSetupError( - "each element of 'libraries' must a 2-tuple") - - name, build_info = lib - - if not isinstance(name, str): - raise DistutilsSetupError( - "first element of each tuple in 'libraries' " - "must be a string (the library name)") - - if '/' in name or (os.sep != '/' and os.sep in name): - raise DistutilsSetupError("bad library name '%s': " - "may not contain directory separators" % lib[0]) - - if not isinstance(build_info, dict): - raise DistutilsSetupError( - "second element of each tuple in 'libraries' " - "must be a dictionary (build info)") - - - def get_library_names(self): - # Assume the library list is valid -- 'check_library_list()' is - # called from 'finalize_options()', so it should be! 
- if not self.libraries: - return None - - lib_names = [] - for (lib_name, build_info) in self.libraries: - lib_names.append(lib_name) - return lib_names - - - def get_source_files(self): - self.check_library_list(self.libraries) - filenames = [] - for (lib_name, build_info) in self.libraries: - sources = build_info.get('sources') - if sources is None or not isinstance(sources, (list, tuple)): - raise DistutilsSetupError( - "in 'libraries' option (library '%s'), " - "'sources' must be present and must be " - "a list of source filenames" % lib_name) - - filenames.extend(sources) - return filenames - - - def build_libraries(self, libraries): - for (lib_name, build_info) in libraries: - sources = build_info.get('sources') - if sources is None or not isinstance(sources, (list, tuple)): - raise DistutilsSetupError( - "in 'libraries' option (library '%s'), " - "'sources' must be present and must be " - "a list of source filenames" % lib_name) - sources = list(sources) - - log.info("building '%s' library", lib_name) - - # First, compile the source code to object files in the library - # directory. (This should probably change to putting object - # files in a temporary build directory.) - macros = build_info.get('macros') - include_dirs = build_info.get('include_dirs') - objects = self.compiler.compile(sources, - output_dir=self.build_temp, - macros=macros, - include_dirs=include_dirs, - debug=self.debug) - - # Now "link" the object files together into a static library. - # (On Unix at least, this isn't really linking -- it just - # builds an archive. Whatever.) - self.compiler.create_static_lib(objects, lib_name, - output_dir=self.build_clib, - debug=self.debug) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/_vendor/pyparsing.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/_vendor/pyparsing.py deleted file mode 100644 index 1333c00e5c95a75950eb23d5c4a11b5d2b6010ef..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/_vendor/pyparsing.py +++ /dev/null @@ -1,5742 +0,0 @@ -# module pyparsing.py -# -# Copyright (c) 2003-2018 Paul T. McGuire -# -# Permission is hereby granted, free of charge, to any person obtaining -# a copy of this software and associated documentation files (the -# "Software"), to deal in the Software without restriction, including -# without limitation the rights to use, copy, modify, merge, publish, -# distribute, sublicense, and/or sell copies of the Software, and to -# permit persons to whom the Software is furnished to do so, subject to -# the following conditions: -# -# The above copyright notice and this permission notice shall be -# included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. -# IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY -# CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, -# TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE -# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
-# - -__doc__ = \ -""" -pyparsing module - Classes and methods to define and execute parsing grammars -============================================================================= - -The pyparsing module is an alternative approach to creating and executing simple grammars, -vs. the traditional lex/yacc approach, or the use of regular expressions. With pyparsing, you -don't need to learn a new syntax for defining grammars or matching expressions - the parsing module -provides a library of classes that you use to construct the grammar directly in Python. - -Here is a program to parse "Hello, World!" (or any greeting of the form -C{", !"}), built up using L{Word}, L{Literal}, and L{And} elements -(L{'+'} operator gives L{And} expressions, strings are auto-converted to -L{Literal} expressions):: - - from pyparsing import Word, alphas - - # define grammar of a greeting - greet = Word(alphas) + "," + Word(alphas) + "!" - - hello = "Hello, World!" - print (hello, "->", greet.parseString(hello)) - -The program outputs the following:: - - Hello, World! -> ['Hello', ',', 'World', '!'] - -The Python representation of the grammar is quite readable, owing to the self-explanatory -class names, and the use of '+', '|' and '^' operators. - -The L{ParseResults} object returned from L{ParserElement.parseString} can be accessed as a nested list, a dictionary, or an -object with named attributes. - -The pyparsing module handles some of the problems that are typically vexing when writing text parsers: - - extra or missing whitespace (the above program will also handle "Hello,World!", "Hello , World !", etc.) - - quoted strings - - embedded comments - - -Getting Started - ------------------ -Visit the classes L{ParserElement} and L{ParseResults} to see the base classes that most other pyparsing -classes inherit from. 
Use the docstrings for examples of how to: - - construct literal match expressions from L{Literal} and L{CaselessLiteral} classes - - construct character word-group expressions using the L{Word} class - - see how to create repetitive expressions using L{ZeroOrMore} and L{OneOrMore} classes - - use L{'+'}, L{'|'}, L{'^'}, and L{'&'} operators to combine simple expressions into more complex ones - - associate names with your parsed results using L{ParserElement.setResultsName} - - find some helpful expression short-cuts like L{delimitedList} and L{oneOf} - - find more useful common expressions in the L{pyparsing_common} namespace class -""" - -__version__ = "2.2.1" -__versionTime__ = "18 Sep 2018 00:49 UTC" -__author__ = "Paul McGuire " - -import string -from weakref import ref as wkref -import copy -import sys -import warnings -import re -import sre_constants -import collections -import pprint -import traceback -import types -from datetime import datetime - -try: - from _thread import RLock -except ImportError: - from threading import RLock - -try: - # Python 3 - from collections.abc import Iterable - from collections.abc import MutableMapping -except ImportError: - # Python 2.7 - from collections import Iterable - from collections import MutableMapping - -try: - from collections import OrderedDict as _OrderedDict -except ImportError: - try: - from ordereddict import OrderedDict as _OrderedDict - except ImportError: - _OrderedDict = None - -#~ sys.stderr.write( "testing pyparsing module, version %s, %s\n" % (__version__,__versionTime__ ) ) - -__all__ = [ -'And', 'CaselessKeyword', 'CaselessLiteral', 'CharsNotIn', 'Combine', 'Dict', 'Each', 'Empty', -'FollowedBy', 'Forward', 'GoToColumn', 'Group', 'Keyword', 'LineEnd', 'LineStart', 'Literal', -'MatchFirst', 'NoMatch', 'NotAny', 'OneOrMore', 'OnlyOnce', 'Optional', 'Or', -'ParseBaseException', 'ParseElementEnhance', 'ParseException', 'ParseExpression', 'ParseFatalException', -'ParseResults', 'ParseSyntaxException', 'ParserElement', 'QuotedString', 'RecursiveGrammarException', -'Regex', 'SkipTo', 'StringEnd', 'StringStart', 'Suppress', 'Token', 'TokenConverter', -'White', 'Word', 'WordEnd', 'WordStart', 'ZeroOrMore', -'alphanums', 'alphas', 'alphas8bit', 'anyCloseTag', 'anyOpenTag', 'cStyleComment', 'col', -'commaSeparatedList', 'commonHTMLEntity', 'countedArray', 'cppStyleComment', 'dblQuotedString', -'dblSlashComment', 'delimitedList', 'dictOf', 'downcaseTokens', 'empty', 'hexnums', -'htmlComment', 'javaStyleComment', 'line', 'lineEnd', 'lineStart', 'lineno', -'makeHTMLTags', 'makeXMLTags', 'matchOnlyAtCol', 'matchPreviousExpr', 'matchPreviousLiteral', -'nestedExpr', 'nullDebugAction', 'nums', 'oneOf', 'opAssoc', 'operatorPrecedence', 'printables', -'punc8bit', 'pythonStyleComment', 'quotedString', 'removeQuotes', 'replaceHTMLEntity', -'replaceWith', 'restOfLine', 'sglQuotedString', 'srange', 'stringEnd', -'stringStart', 'traceParseAction', 'unicodeString', 'upcaseTokens', 'withAttribute', -'indentedBlock', 'originalTextFor', 'ungroup', 'infixNotation','locatedExpr', 'withClass', -'CloseMatch', 'tokenMap', 'pyparsing_common', -] - -system_version = tuple(sys.version_info)[:3] -PY_3 = system_version[0] == 3 -if PY_3: - _MAX_INT = sys.maxsize - basestring = str - unichr = chr - _ustr = str - - # build list of single arg builtins, that can be used as parse actions - singleArgBuiltins = [sum, len, sorted, reversed, list, tuple, set, any, all, min, max] - -else: - _MAX_INT = sys.maxint - range = xrange - - def _ustr(obj): - """Drop-in replacement 
for str(obj) that tries to be Unicode friendly. It first tries - str(obj). If that fails with a UnicodeEncodeError, then it tries unicode(obj). It - then < returns the unicode object | encodes it with the default encoding | ... >. - """ - if isinstance(obj,unicode): - return obj - - try: - # If this works, then _ustr(obj) has the same behaviour as str(obj), so - # it won't break any existing code. - return str(obj) - - except UnicodeEncodeError: - # Else encode it - ret = unicode(obj).encode(sys.getdefaultencoding(), 'xmlcharrefreplace') - xmlcharref = Regex(r'&#\d+;') - xmlcharref.setParseAction(lambda t: '\\u' + hex(int(t[0][2:-1]))[2:]) - return xmlcharref.transformString(ret) - - # build list of single arg builtins, tolerant of Python version, that can be used as parse actions - singleArgBuiltins = [] - import __builtin__ - for fname in "sum len sorted reversed list tuple set any all min max".split(): - try: - singleArgBuiltins.append(getattr(__builtin__,fname)) - except AttributeError: - continue - -_generatorType = type((y for y in range(1))) - -def _xml_escape(data): - """Escape &, <, >, ", ', etc. in a string of data.""" - - # ampersand must be replaced first - from_symbols = '&><"\'' - to_symbols = ('&'+s+';' for s in "amp gt lt quot apos".split()) - for from_,to_ in zip(from_symbols, to_symbols): - data = data.replace(from_, to_) - return data - -class _Constants(object): - pass - -alphas = string.ascii_uppercase + string.ascii_lowercase -nums = "0123456789" -hexnums = nums + "ABCDEFabcdef" -alphanums = alphas + nums -_bslash = chr(92) -printables = "".join(c for c in string.printable if c not in string.whitespace) - -class ParseBaseException(Exception): - """base exception class for all parsing runtime exceptions""" - # Performance tuning: we construct a *lot* of these, so keep this - # constructor as small and fast as possible - def __init__( self, pstr, loc=0, msg=None, elem=None ): - self.loc = loc - if msg is None: - self.msg = pstr - self.pstr = "" - else: - self.msg = msg - self.pstr = pstr - self.parserElement = elem - self.args = (pstr, loc, msg) - - @classmethod - def _from_exception(cls, pe): - """ - internal factory method to simplify creating one type of ParseException - from another - avoids having __init__ signature conflicts among subclasses - """ - return cls(pe.pstr, pe.loc, pe.msg, pe.parserElement) - - def __getattr__( self, aname ): - """supported attributes by name are: - - lineno - returns the line number of the exception text - - col - returns the column number of the exception text - - line - returns the line containing the exception text - """ - if( aname == "lineno" ): - return lineno( self.loc, self.pstr ) - elif( aname in ("col", "column") ): - return col( self.loc, self.pstr ) - elif( aname == "line" ): - return line( self.loc, self.pstr ) - else: - raise AttributeError(aname) - - def __str__( self ): - return "%s (at char %d), (line:%d, col:%d)" % \ - ( self.msg, self.loc, self.lineno, self.column ) - def __repr__( self ): - return _ustr(self) - def markInputline( self, markerString = ">!<" ): - """Extracts the exception line from the input string, and marks - the location of the exception with a special symbol. 
- """ - line_str = self.line - line_column = self.column - 1 - if markerString: - line_str = "".join((line_str[:line_column], - markerString, line_str[line_column:])) - return line_str.strip() - def __dir__(self): - return "lineno col line".split() + dir(type(self)) - -class ParseException(ParseBaseException): - """ - Exception thrown when parse expressions don't match class; - supported attributes by name are: - - lineno - returns the line number of the exception text - - col - returns the column number of the exception text - - line - returns the line containing the exception text - - Example:: - try: - Word(nums).setName("integer").parseString("ABC") - except ParseException as pe: - print(pe) - print("column: {}".format(pe.col)) - - prints:: - Expected integer (at char 0), (line:1, col:1) - column: 1 - """ - pass - -class ParseFatalException(ParseBaseException): - """user-throwable exception thrown when inconsistent parse content - is found; stops all parsing immediately""" - pass - -class ParseSyntaxException(ParseFatalException): - """just like L{ParseFatalException}, but thrown internally when an - L{ErrorStop} ('-' operator) indicates that parsing is to stop - immediately because an unbacktrackable syntax error has been found""" - pass - -#~ class ReparseException(ParseBaseException): - #~ """Experimental class - parse actions can raise this exception to cause - #~ pyparsing to reparse the input string: - #~ - with a modified input string, and/or - #~ - with a modified start location - #~ Set the values of the ReparseException in the constructor, and raise the - #~ exception in a parse action to cause pyparsing to use the new string/location. - #~ Setting the values as None causes no change to be made. - #~ """ - #~ def __init_( self, newstring, restartLoc ): - #~ self.newParseText = newstring - #~ self.reparseLoc = restartLoc - -class RecursiveGrammarException(Exception): - """exception thrown by L{ParserElement.validate} if the grammar could be improperly recursive""" - def __init__( self, parseElementList ): - self.parseElementTrace = parseElementList - - def __str__( self ): - return "RecursiveGrammarException: %s" % self.parseElementTrace - -class _ParseResultsWithOffset(object): - def __init__(self,p1,p2): - self.tup = (p1,p2) - def __getitem__(self,i): - return self.tup[i] - def __repr__(self): - return repr(self.tup[0]) - def setOffset(self,i): - self.tup = (self.tup[0],i) - -class ParseResults(object): - """ - Structured parse results, to provide multiple means of access to the parsed data: - - as a list (C{len(results)}) - - by list index (C{results[0], results[1]}, etc.) 
- - by attribute (C{results.} - see L{ParserElement.setResultsName}) - - Example:: - integer = Word(nums) - date_str = (integer.setResultsName("year") + '/' - + integer.setResultsName("month") + '/' - + integer.setResultsName("day")) - # equivalent form: - # date_str = integer("year") + '/' + integer("month") + '/' + integer("day") - - # parseString returns a ParseResults object - result = date_str.parseString("1999/12/31") - - def test(s, fn=repr): - print("%s -> %s" % (s, fn(eval(s)))) - test("list(result)") - test("result[0]") - test("result['month']") - test("result.day") - test("'month' in result") - test("'minutes' in result") - test("result.dump()", str) - prints:: - list(result) -> ['1999', '/', '12', '/', '31'] - result[0] -> '1999' - result['month'] -> '12' - result.day -> '31' - 'month' in result -> True - 'minutes' in result -> False - result.dump() -> ['1999', '/', '12', '/', '31'] - - day: 31 - - month: 12 - - year: 1999 - """ - def __new__(cls, toklist=None, name=None, asList=True, modal=True ): - if isinstance(toklist, cls): - return toklist - retobj = object.__new__(cls) - retobj.__doinit = True - return retobj - - # Performance tuning: we construct a *lot* of these, so keep this - # constructor as small and fast as possible - def __init__( self, toklist=None, name=None, asList=True, modal=True, isinstance=isinstance ): - if self.__doinit: - self.__doinit = False - self.__name = None - self.__parent = None - self.__accumNames = {} - self.__asList = asList - self.__modal = modal - if toklist is None: - toklist = [] - if isinstance(toklist, list): - self.__toklist = toklist[:] - elif isinstance(toklist, _generatorType): - self.__toklist = list(toklist) - else: - self.__toklist = [toklist] - self.__tokdict = dict() - - if name is not None and name: - if not modal: - self.__accumNames[name] = 0 - if isinstance(name,int): - name = _ustr(name) # will always return a str, but use _ustr for consistency - self.__name = name - if not (isinstance(toklist, (type(None), basestring, list)) and toklist in (None,'',[])): - if isinstance(toklist,basestring): - toklist = [ toklist ] - if asList: - if isinstance(toklist,ParseResults): - self[name] = _ParseResultsWithOffset(toklist.copy(),0) - else: - self[name] = _ParseResultsWithOffset(ParseResults(toklist[0]),0) - self[name].__name = name - else: - try: - self[name] = toklist[0] - except (KeyError,TypeError,IndexError): - self[name] = toklist - - def __getitem__( self, i ): - if isinstance( i, (int,slice) ): - return self.__toklist[i] - else: - if i not in self.__accumNames: - return self.__tokdict[i][-1][0] - else: - return ParseResults([ v[0] for v in self.__tokdict[i] ]) - - def __setitem__( self, k, v, isinstance=isinstance ): - if isinstance(v,_ParseResultsWithOffset): - self.__tokdict[k] = self.__tokdict.get(k,list()) + [v] - sub = v[0] - elif isinstance(k,(int,slice)): - self.__toklist[k] = v - sub = v - else: - self.__tokdict[k] = self.__tokdict.get(k,list()) + [_ParseResultsWithOffset(v,0)] - sub = v - if isinstance(sub,ParseResults): - sub.__parent = wkref(self) - - def __delitem__( self, i ): - if isinstance(i,(int,slice)): - mylen = len( self.__toklist ) - del self.__toklist[i] - - # convert int to slice - if isinstance(i, int): - if i < 0: - i += mylen - i = slice(i, i+1) - # get removed indices - removed = list(range(*i.indices(mylen))) - removed.reverse() - # fixup indices in token dictionary - for name,occurrences in self.__tokdict.items(): - for j in removed: - for k, (value, position) in enumerate(occurrences): - 
occurrences[k] = _ParseResultsWithOffset(value, position - (position > j)) - else: - del self.__tokdict[i] - - def __contains__( self, k ): - return k in self.__tokdict - - def __len__( self ): return len( self.__toklist ) - def __bool__(self): return ( not not self.__toklist ) - __nonzero__ = __bool__ - def __iter__( self ): return iter( self.__toklist ) - def __reversed__( self ): return iter( self.__toklist[::-1] ) - def _iterkeys( self ): - if hasattr(self.__tokdict, "iterkeys"): - return self.__tokdict.iterkeys() - else: - return iter(self.__tokdict) - - def _itervalues( self ): - return (self[k] for k in self._iterkeys()) - - def _iteritems( self ): - return ((k, self[k]) for k in self._iterkeys()) - - if PY_3: - keys = _iterkeys - """Returns an iterator of all named result keys (Python 3.x only).""" - - values = _itervalues - """Returns an iterator of all named result values (Python 3.x only).""" - - items = _iteritems - """Returns an iterator of all named result key-value tuples (Python 3.x only).""" - - else: - iterkeys = _iterkeys - """Returns an iterator of all named result keys (Python 2.x only).""" - - itervalues = _itervalues - """Returns an iterator of all named result values (Python 2.x only).""" - - iteritems = _iteritems - """Returns an iterator of all named result key-value tuples (Python 2.x only).""" - - def keys( self ): - """Returns all named result keys (as a list in Python 2.x, as an iterator in Python 3.x).""" - return list(self.iterkeys()) - - def values( self ): - """Returns all named result values (as a list in Python 2.x, as an iterator in Python 3.x).""" - return list(self.itervalues()) - - def items( self ): - """Returns all named result key-values (as a list of tuples in Python 2.x, as an iterator in Python 3.x).""" - return list(self.iteritems()) - - def haskeys( self ): - """Since keys() returns an iterator, this method is helpful in bypassing - code that looks for the existence of any defined results names.""" - return bool(self.__tokdict) - - def pop( self, *args, **kwargs): - """ - Removes and returns item at specified index (default=C{last}). - Supports both C{list} and C{dict} semantics for C{pop()}. If passed no - argument or an integer argument, it will use C{list} semantics - and pop tokens from the list of parsed tokens. If passed a - non-integer argument (most likely a string), it will use C{dict} - semantics and pop the corresponding value from any defined - results names. A second default return value argument is - supported, just as in C{dict.pop()}. 
- - Example:: - def remove_first(tokens): - tokens.pop(0) - print(OneOrMore(Word(nums)).parseString("0 123 321")) # -> ['0', '123', '321'] - print(OneOrMore(Word(nums)).addParseAction(remove_first).parseString("0 123 321")) # -> ['123', '321'] - - label = Word(alphas) - patt = label("LABEL") + OneOrMore(Word(nums)) - print(patt.parseString("AAB 123 321").dump()) - - # Use pop() in a parse action to remove named result (note that corresponding value is not - # removed from list form of results) - def remove_LABEL(tokens): - tokens.pop("LABEL") - return tokens - patt.addParseAction(remove_LABEL) - print(patt.parseString("AAB 123 321").dump()) - prints:: - ['AAB', '123', '321'] - - LABEL: AAB - - ['AAB', '123', '321'] - """ - if not args: - args = [-1] - for k,v in kwargs.items(): - if k == 'default': - args = (args[0], v) - else: - raise TypeError("pop() got an unexpected keyword argument '%s'" % k) - if (isinstance(args[0], int) or - len(args) == 1 or - args[0] in self): - index = args[0] - ret = self[index] - del self[index] - return ret - else: - defaultvalue = args[1] - return defaultvalue - - def get(self, key, defaultValue=None): - """ - Returns named result matching the given key, or if there is no - such name, then returns the given C{defaultValue} or C{None} if no - C{defaultValue} is specified. - - Similar to C{dict.get()}. - - Example:: - integer = Word(nums) - date_str = integer("year") + '/' + integer("month") + '/' + integer("day") - - result = date_str.parseString("1999/12/31") - print(result.get("year")) # -> '1999' - print(result.get("hour", "not specified")) # -> 'not specified' - print(result.get("hour")) # -> None - """ - if key in self: - return self[key] - else: - return defaultValue - - def insert( self, index, insStr ): - """ - Inserts new element at location index in the list of parsed tokens. - - Similar to C{list.insert()}. - - Example:: - print(OneOrMore(Word(nums)).parseString("0 123 321")) # -> ['0', '123', '321'] - - # use a parse action to insert the parse location in the front of the parsed results - def insert_locn(locn, tokens): - tokens.insert(0, locn) - print(OneOrMore(Word(nums)).addParseAction(insert_locn).parseString("0 123 321")) # -> [0, '0', '123', '321'] - """ - self.__toklist.insert(index, insStr) - # fixup indices in token dictionary - for name,occurrences in self.__tokdict.items(): - for k, (value, position) in enumerate(occurrences): - occurrences[k] = _ParseResultsWithOffset(value, position + (position > index)) - - def append( self, item ): - """ - Add single element to end of ParseResults list of elements. - - Example:: - print(OneOrMore(Word(nums)).parseString("0 123 321")) # -> ['0', '123', '321'] - - # use a parse action to compute the sum of the parsed integers, and add it to the end - def append_sum(tokens): - tokens.append(sum(map(int, tokens))) - print(OneOrMore(Word(nums)).addParseAction(append_sum).parseString("0 123 321")) # -> ['0', '123', '321', 444] - """ - self.__toklist.append(item) - - def extend( self, itemseq ): - """ - Add sequence of elements to end of ParseResults list of elements. 
- - Example:: - patt = OneOrMore(Word(alphas)) - - # use a parse action to append the reverse of the matched strings, to make a palindrome - def make_palindrome(tokens): - tokens.extend(reversed([t[::-1] for t in tokens])) - return ''.join(tokens) - print(patt.addParseAction(make_palindrome).parseString("lskdj sdlkjf lksd")) # -> 'lskdjsdlkjflksddsklfjkldsjdksl' - """ - if isinstance(itemseq, ParseResults): - self += itemseq - else: - self.__toklist.extend(itemseq) - - def clear( self ): - """ - Clear all elements and results names. - """ - del self.__toklist[:] - self.__tokdict.clear() - - def __getattr__( self, name ): - try: - return self[name] - except KeyError: - return "" - - if name in self.__tokdict: - if name not in self.__accumNames: - return self.__tokdict[name][-1][0] - else: - return ParseResults([ v[0] for v in self.__tokdict[name] ]) - else: - return "" - - def __add__( self, other ): - ret = self.copy() - ret += other - return ret - - def __iadd__( self, other ): - if other.__tokdict: - offset = len(self.__toklist) - addoffset = lambda a: offset if a<0 else a+offset - otheritems = other.__tokdict.items() - otherdictitems = [(k, _ParseResultsWithOffset(v[0],addoffset(v[1])) ) - for (k,vlist) in otheritems for v in vlist] - for k,v in otherdictitems: - self[k] = v - if isinstance(v[0],ParseResults): - v[0].__parent = wkref(self) - - self.__toklist += other.__toklist - self.__accumNames.update( other.__accumNames ) - return self - - def __radd__(self, other): - if isinstance(other,int) and other == 0: - # useful for merging many ParseResults using sum() builtin - return self.copy() - else: - # this may raise a TypeError - so be it - return other + self - - def __repr__( self ): - return "(%s, %s)" % ( repr( self.__toklist ), repr( self.__tokdict ) ) - - def __str__( self ): - return '[' + ', '.join(_ustr(i) if isinstance(i, ParseResults) else repr(i) for i in self.__toklist) + ']' - - def _asStringList( self, sep='' ): - out = [] - for item in self.__toklist: - if out and sep: - out.append(sep) - if isinstance( item, ParseResults ): - out += item._asStringList() - else: - out.append( _ustr(item) ) - return out - - def asList( self ): - """ - Returns the parse results as a nested list of matching tokens, all converted to strings. - - Example:: - patt = OneOrMore(Word(alphas)) - result = patt.parseString("sldkj lsdkj sldkj") - # even though the result prints in string-like form, it is actually a pyparsing ParseResults - print(type(result), result) # -> ['sldkj', 'lsdkj', 'sldkj'] - - # Use asList() to create an actual list - result_list = result.asList() - print(type(result_list), result_list) # -> ['sldkj', 'lsdkj', 'sldkj'] - """ - return [res.asList() if isinstance(res,ParseResults) else res for res in self.__toklist] - - def asDict( self ): - """ - Returns the named parse results as a nested dictionary. - - Example:: - integer = Word(nums) - date_str = integer("year") + '/' + integer("month") + '/' + integer("day") - - result = date_str.parseString('12/31/1999') - print(type(result), repr(result)) # -> (['12', '/', '31', '/', '1999'], {'day': [('1999', 4)], 'year': [('12', 0)], 'month': [('31', 2)]}) - - result_dict = result.asDict() - print(type(result_dict), repr(result_dict)) # -> {'day': '1999', 'year': '12', 'month': '31'} - - # even though a ParseResults supports dict-like access, sometime you just need to have a dict - import json - print(json.dumps(result)) # -> Exception: TypeError: ... 
is not JSON serializable - print(json.dumps(result.asDict())) # -> {"month": "31", "day": "1999", "year": "12"} - """ - if PY_3: - item_fn = self.items - else: - item_fn = self.iteritems - - def toItem(obj): - if isinstance(obj, ParseResults): - if obj.haskeys(): - return obj.asDict() - else: - return [toItem(v) for v in obj] - else: - return obj - - return dict((k,toItem(v)) for k,v in item_fn()) - - def copy( self ): - """ - Returns a new copy of a C{ParseResults} object. - """ - ret = ParseResults( self.__toklist ) - ret.__tokdict = self.__tokdict.copy() - ret.__parent = self.__parent - ret.__accumNames.update( self.__accumNames ) - ret.__name = self.__name - return ret - - def asXML( self, doctag=None, namedItemsOnly=False, indent="", formatted=True ): - """ - (Deprecated) Returns the parse results as XML. Tags are created for tokens and lists that have defined results names. - """ - nl = "\n" - out = [] - namedItems = dict((v[1],k) for (k,vlist) in self.__tokdict.items() - for v in vlist) - nextLevelIndent = indent + " " - - # collapse out indents if formatting is not desired - if not formatted: - indent = "" - nextLevelIndent = "" - nl = "" - - selfTag = None - if doctag is not None: - selfTag = doctag - else: - if self.__name: - selfTag = self.__name - - if not selfTag: - if namedItemsOnly: - return "" - else: - selfTag = "ITEM" - - out += [ nl, indent, "<", selfTag, ">" ] - - for i,res in enumerate(self.__toklist): - if isinstance(res,ParseResults): - if i in namedItems: - out += [ res.asXML(namedItems[i], - namedItemsOnly and doctag is None, - nextLevelIndent, - formatted)] - else: - out += [ res.asXML(None, - namedItemsOnly and doctag is None, - nextLevelIndent, - formatted)] - else: - # individual token, see if there is a name for it - resTag = None - if i in namedItems: - resTag = namedItems[i] - if not resTag: - if namedItemsOnly: - continue - else: - resTag = "ITEM" - xmlBodyText = _xml_escape(_ustr(res)) - out += [ nl, nextLevelIndent, "<", resTag, ">", - xmlBodyText, - "" ] - - out += [ nl, indent, "" ] - return "".join(out) - - def __lookup(self,sub): - for k,vlist in self.__tokdict.items(): - for v,loc in vlist: - if sub is v: - return k - return None - - def getName(self): - r""" - Returns the results name for this token expression. Useful when several - different expressions might match at a particular location. - - Example:: - integer = Word(nums) - ssn_expr = Regex(r"\d\d\d-\d\d-\d\d\d\d") - house_number_expr = Suppress('#') + Word(nums, alphanums) - user_data = (Group(house_number_expr)("house_number") - | Group(ssn_expr)("ssn") - | Group(integer)("age")) - user_info = OneOrMore(user_data) - - result = user_info.parseString("22 111-22-3333 #221B") - for item in result: - print(item.getName(), ':', item[0]) - prints:: - age : 22 - ssn : 111-22-3333 - house_number : 221B - """ - if self.__name: - return self.__name - elif self.__parent: - par = self.__parent() - if par: - return par.__lookup(self) - else: - return None - elif (len(self) == 1 and - len(self.__tokdict) == 1 and - next(iter(self.__tokdict.values()))[0][1] in (0,-1)): - return next(iter(self.__tokdict.keys())) - else: - return None - - def dump(self, indent='', depth=0, full=True): - """ - Diagnostic method for listing out the contents of a C{ParseResults}. - Accepts an optional C{indent} argument so that this string can be embedded - in a nested display of other data. 
- - Example:: - integer = Word(nums) - date_str = integer("year") + '/' + integer("month") + '/' + integer("day") - - result = date_str.parseString('12/31/1999') - print(result.dump()) - prints:: - ['12', '/', '31', '/', '1999'] - - day: 1999 - - month: 31 - - year: 12 - """ - out = [] - NL = '\n' - out.append( indent+_ustr(self.asList()) ) - if full: - if self.haskeys(): - items = sorted((str(k), v) for k,v in self.items()) - for k,v in items: - if out: - out.append(NL) - out.append( "%s%s- %s: " % (indent,(' '*depth), k) ) - if isinstance(v,ParseResults): - if v: - out.append( v.dump(indent,depth+1) ) - else: - out.append(_ustr(v)) - else: - out.append(repr(v)) - elif any(isinstance(vv,ParseResults) for vv in self): - v = self - for i,vv in enumerate(v): - if isinstance(vv,ParseResults): - out.append("\n%s%s[%d]:\n%s%s%s" % (indent,(' '*(depth)),i,indent,(' '*(depth+1)),vv.dump(indent,depth+1) )) - else: - out.append("\n%s%s[%d]:\n%s%s%s" % (indent,(' '*(depth)),i,indent,(' '*(depth+1)),_ustr(vv))) - - return "".join(out) - - def pprint(self, *args, **kwargs): - """ - Pretty-printer for parsed results as a list, using the C{pprint} module. - Accepts additional positional or keyword args as defined for the - C{pprint.pprint} method. (U{http://docs.python.org/3/library/pprint.html#pprint.pprint}) - - Example:: - ident = Word(alphas, alphanums) - num = Word(nums) - func = Forward() - term = ident | num | Group('(' + func + ')') - func <<= ident + Group(Optional(delimitedList(term))) - result = func.parseString("fna a,b,(fnb c,d,200),100") - result.pprint(width=40) - prints:: - ['fna', - ['a', - 'b', - ['(', 'fnb', ['c', 'd', '200'], ')'], - '100']] - """ - pprint.pprint(self.asList(), *args, **kwargs) - - # add support for pickle protocol - def __getstate__(self): - return ( self.__toklist, - ( self.__tokdict.copy(), - self.__parent is not None and self.__parent() or None, - self.__accumNames, - self.__name ) ) - - def __setstate__(self,state): - self.__toklist = state[0] - (self.__tokdict, - par, - inAccumNames, - self.__name) = state[1] - self.__accumNames = {} - self.__accumNames.update(inAccumNames) - if par is not None: - self.__parent = wkref(par) - else: - self.__parent = None - - def __getnewargs__(self): - return self.__toklist, self.__name, self.__asList, self.__modal - - def __dir__(self): - return (dir(type(self)) + list(self.keys())) - -MutableMapping.register(ParseResults) - -def col (loc,strg): - """Returns current column within a string, counting newlines as line separators. - The first column is number 1. - - Note: the default parsing behavior is to expand tabs in the input string - before starting the parsing process. See L{I{ParserElement.parseString}<ParserElement.parseString>} for more information - on parsing strings containing C{<TAB>}s, and suggested methods to maintain a - consistent view of the parsed string, the parse location, and line and column - positions within the parsed string. - """ - s = strg - return 1 if 0<loc<len(s) and s[loc-1] == '\n' else loc - s.rfind("\n", 0, loc) - -def lineno(loc,strg): - """Returns current line number within a string, counting newlines as line separators. - The first line is number 1. - - Note: the default parsing behavior is to expand tabs in the input string - before starting the parsing process. See L{I{ParserElement.parseString}<ParserElement.parseString>} for more information - on parsing strings containing C{<TAB>}s, and suggested methods to maintain a - consistent view of the parsed string, the parse location, and line and column - positions within the parsed string. - """ - return strg.count("\n",0,loc) + 1 - -def line( loc, strg ): - """Returns the line of text containing loc within a string, counting newlines as line separators. 
- """ - lastCR = strg.rfind("\n", 0, loc) - nextCR = strg.find("\n", loc) - if nextCR >= 0: - return strg[lastCR+1:nextCR] - else: - return strg[lastCR+1:] - -def _defaultStartDebugAction( instring, loc, expr ): - print (("Match " + _ustr(expr) + " at loc " + _ustr(loc) + "(%d,%d)" % ( lineno(loc,instring), col(loc,instring) ))) - -def _defaultSuccessDebugAction( instring, startloc, endloc, expr, toks ): - print ("Matched " + _ustr(expr) + " -> " + str(toks.asList())) - -def _defaultExceptionDebugAction( instring, loc, expr, exc ): - print ("Exception raised:" + _ustr(exc)) - -def nullDebugAction(*args): - """'Do-nothing' debug action, to suppress debugging output during parsing.""" - pass - -# Only works on Python 3.x - nonlocal is toxic to Python 2 installs -#~ 'decorator to trim function calls to match the arity of the target' -#~ def _trim_arity(func, maxargs=3): - #~ if func in singleArgBuiltins: - #~ return lambda s,l,t: func(t) - #~ limit = 0 - #~ foundArity = False - #~ def wrapper(*args): - #~ nonlocal limit,foundArity - #~ while 1: - #~ try: - #~ ret = func(*args[limit:]) - #~ foundArity = True - #~ return ret - #~ except TypeError: - #~ if limit == maxargs or foundArity: - #~ raise - #~ limit += 1 - #~ continue - #~ return wrapper - -# this version is Python 2.x-3.x cross-compatible -'decorator to trim function calls to match the arity of the target' -def _trim_arity(func, maxargs=2): - if func in singleArgBuiltins: - return lambda s,l,t: func(t) - limit = [0] - foundArity = [False] - - # traceback return data structure changed in Py3.5 - normalize back to plain tuples - if system_version[:2] >= (3,5): - def extract_stack(limit=0): - # special handling for Python 3.5.0 - extra deep call stack by 1 - offset = -3 if system_version == (3,5,0) else -2 - frame_summary = traceback.extract_stack(limit=-offset+limit-1)[offset] - return [frame_summary[:2]] - def extract_tb(tb, limit=0): - frames = traceback.extract_tb(tb, limit=limit) - frame_summary = frames[-1] - return [frame_summary[:2]] - else: - extract_stack = traceback.extract_stack - extract_tb = traceback.extract_tb - - # synthesize what would be returned by traceback.extract_stack at the call to - # user's parse action 'func', so that we don't incur call penalty at parse time - - LINE_DIFF = 6 - # IF ANY CODE CHANGES, EVEN JUST COMMENTS OR BLANK LINES, BETWEEN THE NEXT LINE AND - # THE CALL TO FUNC INSIDE WRAPPER, LINE_DIFF MUST BE MODIFIED!!!! 
- this_line = extract_stack(limit=2)[-1] - pa_call_line_synth = (this_line[0], this_line[1]+LINE_DIFF) - - def wrapper(*args): - while 1: - try: - ret = func(*args[limit[0]:]) - foundArity[0] = True - return ret - except TypeError: - # re-raise TypeErrors if they did not come from our arity testing - if foundArity[0]: - raise - else: - try: - tb = sys.exc_info()[-1] - if not extract_tb(tb, limit=2)[-1][:2] == pa_call_line_synth: - raise - finally: - del tb - - if limit[0] <= maxargs: - limit[0] += 1 - continue - raise - - # copy func name to wrapper for sensible debug output - func_name = "" - try: - func_name = getattr(func, '__name__', - getattr(func, '__class__').__name__) - except Exception: - func_name = str(func) - wrapper.__name__ = func_name - - return wrapper - -class ParserElement(object): - """Abstract base level parser element class.""" - DEFAULT_WHITE_CHARS = " \n\t\r" - verbose_stacktrace = False - - @staticmethod - def setDefaultWhitespaceChars( chars ): - r""" - Overrides the default whitespace chars - - Example:: - # default whitespace chars are space, and newline - OneOrMore(Word(alphas)).parseString("abc def\nghi jkl") # -> ['abc', 'def', 'ghi', 'jkl'] - - # change to just treat newline as significant - ParserElement.setDefaultWhitespaceChars(" \t") - OneOrMore(Word(alphas)).parseString("abc def\nghi jkl") # -> ['abc', 'def'] - """ - ParserElement.DEFAULT_WHITE_CHARS = chars - - @staticmethod - def inlineLiteralsUsing(cls): - """ - Set class to be used for inclusion of string literals into a parser. - - Example:: - # default literal class used is Literal - integer = Word(nums) - date_str = integer("year") + '/' + integer("month") + '/' + integer("day") - - date_str.parseString("1999/12/31") # -> ['1999', '/', '12', '/', '31'] - - - # change to Suppress - ParserElement.inlineLiteralsUsing(Suppress) - date_str = integer("year") + '/' + integer("month") + '/' + integer("day") - - date_str.parseString("1999/12/31") # -> ['1999', '12', '31'] - """ - ParserElement._literalStringClass = cls - - def __init__( self, savelist=False ): - self.parseAction = list() - self.failAction = None - #~ self.name = "" # don't define self.name, let subclasses try/except upcall - self.strRepr = None - self.resultsName = None - self.saveAsList = savelist - self.skipWhitespace = True - self.whiteChars = ParserElement.DEFAULT_WHITE_CHARS - self.copyDefaultWhiteChars = True - self.mayReturnEmpty = False # used when checking for left-recursion - self.keepTabs = False - self.ignoreExprs = list() - self.debug = False - self.streamlined = False - self.mayIndexError = True # used to optimize exception handling for subclasses that don't advance parse index - self.errmsg = "" - self.modalResults = True # used to mark results names as modal (report only last) or cumulative (list all) - self.debugActions = ( None, None, None ) #custom debug actions - self.re = None - self.callPreparse = True # used to avoid redundant calls to preParse - self.callDuringTry = False - - def copy( self ): - """ - Make a copy of this C{ParserElement}. Useful for defining different parse actions - for the same parsing pattern, using copies of the original parse element. 
- - Example:: - integer = Word(nums).setParseAction(lambda toks: int(toks[0])) - integerK = integer.copy().addParseAction(lambda toks: toks[0]*1024) + Suppress("K") - integerM = integer.copy().addParseAction(lambda toks: toks[0]*1024*1024) + Suppress("M") - - print(OneOrMore(integerK | integerM | integer).parseString("5K 100 640K 256M")) - prints:: - [5120, 100, 655360, 268435456] - Equivalent form of C{expr.copy()} is just C{expr()}:: - integerM = integer().addParseAction(lambda toks: toks[0]*1024*1024) + Suppress("M") - """ - cpy = copy.copy( self ) - cpy.parseAction = self.parseAction[:] - cpy.ignoreExprs = self.ignoreExprs[:] - if self.copyDefaultWhiteChars: - cpy.whiteChars = ParserElement.DEFAULT_WHITE_CHARS - return cpy - - def setName( self, name ): - """ - Define name for this expression, makes debugging and exception messages clearer. - - Example:: - Word(nums).parseString("ABC") # -> Exception: Expected W:(0123...) (at char 0), (line:1, col:1) - Word(nums).setName("integer").parseString("ABC") # -> Exception: Expected integer (at char 0), (line:1, col:1) - """ - self.name = name - self.errmsg = "Expected " + self.name - if hasattr(self,"exception"): - self.exception.msg = self.errmsg - return self - - def setResultsName( self, name, listAllMatches=False ): - """ - Define name for referencing matching tokens as a nested attribute - of the returned parse results. - NOTE: this returns a *copy* of the original C{ParserElement} object; - this is so that the client can define a basic element, such as an - integer, and reference it in multiple places with different names. - - You can also set results names using the abbreviated syntax, - C{expr("name")} in place of C{expr.setResultsName("name")} - - see L{I{__call__}<__call__>}. - - Example:: - date_str = (integer.setResultsName("year") + '/' - + integer.setResultsName("month") + '/' - + integer.setResultsName("day")) - - # equivalent form: - date_str = integer("year") + '/' + integer("month") + '/' + integer("day") - """ - newself = self.copy() - if name.endswith("*"): - name = name[:-1] - listAllMatches=True - newself.resultsName = name - newself.modalResults = not listAllMatches - return newself - - def setBreak(self,breakFlag = True): - """Method to invoke the Python pdb debugger when this element is - about to be parsed. Set C{breakFlag} to True to enable, False to - disable. - """ - if breakFlag: - _parseMethod = self._parse - def breaker(instring, loc, doActions=True, callPreParse=True): - import pdb - pdb.set_trace() - return _parseMethod( instring, loc, doActions, callPreParse ) - breaker._originalParseMethod = _parseMethod - self._parse = breaker - else: - if hasattr(self._parse,"_originalParseMethod"): - self._parse = self._parse._originalParseMethod - return self - - def setParseAction( self, *fns, **kwargs ): - """ - Define one or more actions to perform when successfully matching parse element definition. - Parse action fn is a callable method with 0-3 arguments, called as C{fn(s,loc,toks)}, - C{fn(loc,toks)}, C{fn(toks)}, or just C{fn()}, where: - - s = the original string being parsed (see note below) - - loc = the location of the matching substring - - toks = a list of the matched tokens, packaged as a C{L{ParseResults}} object - If the functions in fns modify the tokens, they can return them as the return - value from fn, and the modified list of tokens will replace the original. - Otherwise, fn does not need to return any value. 
- - Optional keyword arguments: - - callDuringTry = (default=C{False}) indicate if parse action should be run during lookaheads and alternate testing - - Note: the default parsing behavior is to expand tabs in the input string - before starting the parsing process. See L{I{parseString}} for more information - on parsing strings containing C{}s, and suggested methods to maintain a - consistent view of the parsed string, the parse location, and line and column - positions within the parsed string. - - Example:: - integer = Word(nums) - date_str = integer + '/' + integer + '/' + integer - - date_str.parseString("1999/12/31") # -> ['1999', '/', '12', '/', '31'] - - # use parse action to convert to ints at parse time - integer = Word(nums).setParseAction(lambda toks: int(toks[0])) - date_str = integer + '/' + integer + '/' + integer - - # note that integer fields are now ints, not strings - date_str.parseString("1999/12/31") # -> [1999, '/', 12, '/', 31] - """ - self.parseAction = list(map(_trim_arity, list(fns))) - self.callDuringTry = kwargs.get("callDuringTry", False) - return self - - def addParseAction( self, *fns, **kwargs ): - """ - Add one or more parse actions to expression's list of parse actions. See L{I{setParseAction}}. - - See examples in L{I{copy}}. - """ - self.parseAction += list(map(_trim_arity, list(fns))) - self.callDuringTry = self.callDuringTry or kwargs.get("callDuringTry", False) - return self - - def addCondition(self, *fns, **kwargs): - """Add a boolean predicate function to expression's list of parse actions. See - L{I{setParseAction}} for function call signatures. Unlike C{setParseAction}, - functions passed to C{addCondition} need to return boolean success/fail of the condition. - - Optional keyword arguments: - - message = define a custom message to be used in the raised exception - - fatal = if True, will raise ParseFatalException to stop parsing immediately; otherwise will raise ParseException - - Example:: - integer = Word(nums).setParseAction(lambda toks: int(toks[0])) - year_int = integer.copy() - year_int.addCondition(lambda toks: toks[0] >= 2000, message="Only support years 2000 and later") - date_str = year_int + '/' + integer + '/' + integer - - result = date_str.parseString("1999/12/31") # -> Exception: Only support years 2000 and later (at char 0), (line:1, col:1) - """ - msg = kwargs.get("message", "failed user-defined condition") - exc_type = ParseFatalException if kwargs.get("fatal", False) else ParseException - for fn in fns: - def pa(s,l,t): - if not bool(_trim_arity(fn)(s,l,t)): - raise exc_type(s,l,msg) - self.parseAction.append(pa) - self.callDuringTry = self.callDuringTry or kwargs.get("callDuringTry", False) - return self - - def setFailAction( self, fn ): - """Define action to perform if parsing fails at this expression. - Fail acton fn is a callable function that takes the arguments - C{fn(s,loc,expr,err)} where: - - s = string being parsed - - loc = location where expression match was attempted and failed - - expr = the parse expression that failed - - err = the exception thrown - The function returns no value. 
It may throw C{L{ParseFatalException}} - if it is desired to stop parsing immediately.""" - self.failAction = fn - return self - - def _skipIgnorables( self, instring, loc ): - exprsFound = True - while exprsFound: - exprsFound = False - for e in self.ignoreExprs: - try: - while 1: - loc,dummy = e._parse( instring, loc ) - exprsFound = True - except ParseException: - pass - return loc - - def preParse( self, instring, loc ): - if self.ignoreExprs: - loc = self._skipIgnorables( instring, loc ) - - if self.skipWhitespace: - wt = self.whiteChars - instrlen = len(instring) - while loc < instrlen and instring[loc] in wt: - loc += 1 - - return loc - - def parseImpl( self, instring, loc, doActions=True ): - return loc, [] - - def postParse( self, instring, loc, tokenlist ): - return tokenlist - - #~ @profile - def _parseNoCache( self, instring, loc, doActions=True, callPreParse=True ): - debugging = ( self.debug ) #and doActions ) - - if debugging or self.failAction: - #~ print ("Match",self,"at loc",loc,"(%d,%d)" % ( lineno(loc,instring), col(loc,instring) )) - if (self.debugActions[0] ): - self.debugActions[0]( instring, loc, self ) - if callPreParse and self.callPreparse: - preloc = self.preParse( instring, loc ) - else: - preloc = loc - tokensStart = preloc - try: - try: - loc,tokens = self.parseImpl( instring, preloc, doActions ) - except IndexError: - raise ParseException( instring, len(instring), self.errmsg, self ) - except ParseBaseException as err: - #~ print ("Exception raised:", err) - if self.debugActions[2]: - self.debugActions[2]( instring, tokensStart, self, err ) - if self.failAction: - self.failAction( instring, tokensStart, self, err ) - raise - else: - if callPreParse and self.callPreparse: - preloc = self.preParse( instring, loc ) - else: - preloc = loc - tokensStart = preloc - if self.mayIndexError or preloc >= len(instring): - try: - loc,tokens = self.parseImpl( instring, preloc, doActions ) - except IndexError: - raise ParseException( instring, len(instring), self.errmsg, self ) - else: - loc,tokens = self.parseImpl( instring, preloc, doActions ) - - tokens = self.postParse( instring, loc, tokens ) - - retTokens = ParseResults( tokens, self.resultsName, asList=self.saveAsList, modal=self.modalResults ) - if self.parseAction and (doActions or self.callDuringTry): - if debugging: - try: - for fn in self.parseAction: - tokens = fn( instring, tokensStart, retTokens ) - if tokens is not None: - retTokens = ParseResults( tokens, - self.resultsName, - asList=self.saveAsList and isinstance(tokens,(ParseResults,list)), - modal=self.modalResults ) - except ParseBaseException as err: - #~ print "Exception raised in user parse action:", err - if (self.debugActions[2] ): - self.debugActions[2]( instring, tokensStart, self, err ) - raise - else: - for fn in self.parseAction: - tokens = fn( instring, tokensStart, retTokens ) - if tokens is not None: - retTokens = ParseResults( tokens, - self.resultsName, - asList=self.saveAsList and isinstance(tokens,(ParseResults,list)), - modal=self.modalResults ) - if debugging: - #~ print ("Matched",self,"->",retTokens.asList()) - if (self.debugActions[1] ): - self.debugActions[1]( instring, tokensStart, loc, self, retTokens ) - - return loc, retTokens - - def tryParse( self, instring, loc ): - try: - return self._parse( instring, loc, doActions=False )[0] - except ParseFatalException: - raise ParseException( instring, loc, self.errmsg, self) - - def canParseNext(self, instring, loc): - try: - self.tryParse(instring, loc) - except (ParseException, 
IndexError): - return False - else: - return True - - class _UnboundedCache(object): - def __init__(self): - cache = {} - self.not_in_cache = not_in_cache = object() - - def get(self, key): - return cache.get(key, not_in_cache) - - def set(self, key, value): - cache[key] = value - - def clear(self): - cache.clear() - - def cache_len(self): - return len(cache) - - self.get = types.MethodType(get, self) - self.set = types.MethodType(set, self) - self.clear = types.MethodType(clear, self) - self.__len__ = types.MethodType(cache_len, self) - - if _OrderedDict is not None: - class _FifoCache(object): - def __init__(self, size): - self.not_in_cache = not_in_cache = object() - - cache = _OrderedDict() - - def get(self, key): - return cache.get(key, not_in_cache) - - def set(self, key, value): - cache[key] = value - while len(cache) > size: - try: - cache.popitem(False) - except KeyError: - pass - - def clear(self): - cache.clear() - - def cache_len(self): - return len(cache) - - self.get = types.MethodType(get, self) - self.set = types.MethodType(set, self) - self.clear = types.MethodType(clear, self) - self.__len__ = types.MethodType(cache_len, self) - - else: - class _FifoCache(object): - def __init__(self, size): - self.not_in_cache = not_in_cache = object() - - cache = {} - key_fifo = collections.deque([], size) - - def get(self, key): - return cache.get(key, not_in_cache) - - def set(self, key, value): - cache[key] = value - while len(key_fifo) > size: - cache.pop(key_fifo.popleft(), None) - key_fifo.append(key) - - def clear(self): - cache.clear() - key_fifo.clear() - - def cache_len(self): - return len(cache) - - self.get = types.MethodType(get, self) - self.set = types.MethodType(set, self) - self.clear = types.MethodType(clear, self) - self.__len__ = types.MethodType(cache_len, self) - - # argument cache for optimizing repeated calls when backtracking through recursive expressions - packrat_cache = {} # this is set later by enabledPackrat(); this is here so that resetCache() doesn't fail - packrat_cache_lock = RLock() - packrat_cache_stats = [0, 0] - - # this method gets repeatedly called during backtracking with the same arguments - - # we can cache these arguments and save ourselves the trouble of re-parsing the contained expression - def _parseCache( self, instring, loc, doActions=True, callPreParse=True ): - HIT, MISS = 0, 1 - lookup = (self, instring, loc, callPreParse, doActions) - with ParserElement.packrat_cache_lock: - cache = ParserElement.packrat_cache - value = cache.get(lookup) - if value is cache.not_in_cache: - ParserElement.packrat_cache_stats[MISS] += 1 - try: - value = self._parseNoCache(instring, loc, doActions, callPreParse) - except ParseBaseException as pe: - # cache a copy of the exception, without the traceback - cache.set(lookup, pe.__class__(*pe.args)) - raise - else: - cache.set(lookup, (value[0], value[1].copy())) - return value - else: - ParserElement.packrat_cache_stats[HIT] += 1 - if isinstance(value, Exception): - raise value - return (value[0], value[1].copy()) - - _parse = _parseNoCache - - @staticmethod - def resetCache(): - ParserElement.packrat_cache.clear() - ParserElement.packrat_cache_stats[:] = [0] * len(ParserElement.packrat_cache_stats) - - _packratEnabled = False - @staticmethod - def enablePackrat(cache_size_limit=128): - """Enables "packrat" parsing, which adds memoizing to the parsing logic. 
- Repeated parse attempts at the same string location (which happens - often in many complex grammars) can immediately return a cached value, - instead of re-executing parsing/validating code. Memoizing is done of - both valid results and parsing exceptions. - - Parameters: - - cache_size_limit - (default=C{128}) - if an integer value is provided - will limit the size of the packrat cache; if None is passed, then - the cache size will be unbounded; if 0 is passed, the cache will - be effectively disabled. - - This speedup may break existing programs that use parse actions that - have side-effects. For this reason, packrat parsing is disabled when - you first import pyparsing. To activate the packrat feature, your - program must call the class method C{ParserElement.enablePackrat()}. If - your program uses C{psyco} to "compile as you go", you must call - C{enablePackrat} before calling C{psyco.full()}. If you do not do this, - Python will crash. For best results, call C{enablePackrat()} immediately - after importing pyparsing. - - Example:: - import pyparsing - pyparsing.ParserElement.enablePackrat() - """ - if not ParserElement._packratEnabled: - ParserElement._packratEnabled = True - if cache_size_limit is None: - ParserElement.packrat_cache = ParserElement._UnboundedCache() - else: - ParserElement.packrat_cache = ParserElement._FifoCache(cache_size_limit) - ParserElement._parse = ParserElement._parseCache - - def parseString( self, instring, parseAll=False ): - """ - Execute the parse expression with the given string. - This is the main interface to the client code, once the complete - expression has been built. - - If you want the grammar to require that the entire input string be - successfully parsed, then set C{parseAll} to True (equivalent to ending - the grammar with C{L{StringEnd()}}). - - Note: C{parseString} implicitly calls C{expandtabs()} on the input string, - in order to report proper column numbers in parse actions. - If the input string contains tabs and - the grammar uses parse actions that use the C{loc} argument to index into the - string being parsed, you can ensure you have a consistent view of the input - string by: - - calling C{parseWithTabs} on your grammar before calling C{parseString} - (see L{I{parseWithTabs}}) - - define your parse action using the full C{(s,loc,toks)} signature, and - reference the input string using the parse action's C{s} argument - - explicitly expand the tabs in your input string before calling - C{parseString} - - Example:: - Word('a').parseString('aaaaabaaa') # -> ['aaaaa'] - Word('a').parseString('aaaaabaaa', parseAll=True) # -> Exception: Expected end of text - """ - ParserElement.resetCache() - if not self.streamlined: - self.streamline() - #~ self.saveAsList = True - for e in self.ignoreExprs: - e.streamline() - if not self.keepTabs: - instring = instring.expandtabs() - try: - loc, tokens = self._parse( instring, 0 ) - if parseAll: - loc = self.preParse( instring, loc ) - se = Empty() + StringEnd() - se._parse( instring, loc ) - except ParseBaseException as exc: - if ParserElement.verbose_stacktrace: - raise - else: - # catch and re-raise exception from here, clears out pyparsing internal stack trace - raise exc - else: - return tokens - - def scanString( self, instring, maxMatches=_MAX_INT, overlap=False ): - """ - Scan the input string for expression matches. Each match will return the - matching tokens, start location, and end location. 
May be called with optional - C{maxMatches} argument, to clip scanning after 'n' matches are found. If - C{overlap} is specified, then overlapping matches will be reported. - - Note that the start and end locations are reported relative to the string - being parsed. See L{I{parseString}} for more information on parsing - strings with embedded tabs. - - Example:: - source = "sldjf123lsdjjkf345sldkjf879lkjsfd987" - print(source) - for tokens,start,end in Word(alphas).scanString(source): - print(' '*start + '^'*(end-start)) - print(' '*start + tokens[0]) - - prints:: - - sldjf123lsdjjkf345sldkjf879lkjsfd987 - ^^^^^ - sldjf - ^^^^^^^ - lsdjjkf - ^^^^^^ - sldkjf - ^^^^^^ - lkjsfd - """ - if not self.streamlined: - self.streamline() - for e in self.ignoreExprs: - e.streamline() - - if not self.keepTabs: - instring = _ustr(instring).expandtabs() - instrlen = len(instring) - loc = 0 - preparseFn = self.preParse - parseFn = self._parse - ParserElement.resetCache() - matches = 0 - try: - while loc <= instrlen and matches < maxMatches: - try: - preloc = preparseFn( instring, loc ) - nextLoc,tokens = parseFn( instring, preloc, callPreParse=False ) - except ParseException: - loc = preloc+1 - else: - if nextLoc > loc: - matches += 1 - yield tokens, preloc, nextLoc - if overlap: - nextloc = preparseFn( instring, loc ) - if nextloc > loc: - loc = nextLoc - else: - loc += 1 - else: - loc = nextLoc - else: - loc = preloc+1 - except ParseBaseException as exc: - if ParserElement.verbose_stacktrace: - raise - else: - # catch and re-raise exception from here, clears out pyparsing internal stack trace - raise exc - - def transformString( self, instring ): - """ - Extension to C{L{scanString}}, to modify matching text with modified tokens that may - be returned from a parse action. To use C{transformString}, define a grammar and - attach a parse action to it that modifies the returned token list. - Invoking C{transformString()} on a target string will then scan for matches, - and replace the matched text patterns according to the logic in the parse - action. C{transformString()} returns the resulting transformed string. - - Example:: - wd = Word(alphas) - wd.setParseAction(lambda toks: toks[0].title()) - - print(wd.transformString("now is the winter of our discontent made glorious summer by this sun of york.")) - Prints:: - Now Is The Winter Of Our Discontent Made Glorious Summer By This Sun Of York. - """ - out = [] - lastE = 0 - # force preservation of s, to minimize unwanted transformation of string, and to - # keep string locs straight between transformString and scanString - self.keepTabs = True - try: - for t,s,e in self.scanString( instring ): - out.append( instring[lastE:s] ) - if t: - if isinstance(t,ParseResults): - out += t.asList() - elif isinstance(t,list): - out += t - else: - out.append(t) - lastE = e - out.append(instring[lastE:]) - out = [o for o in out if o] - return "".join(map(_ustr,_flatten(out))) - except ParseBaseException as exc: - if ParserElement.verbose_stacktrace: - raise - else: - # catch and re-raise exception from here, clears out pyparsing internal stack trace - raise exc - - def searchString( self, instring, maxMatches=_MAX_INT ): - """ - Another extension to C{L{scanString}}, simplifying the access to the tokens found - to match the given parse expression. May be called with optional - C{maxMatches} argument, to clip searching after 'n' matches are found. 
- - Example:: - # a capitalized word starts with an uppercase letter, followed by zero or more lowercase letters - cap_word = Word(alphas.upper(), alphas.lower()) - - print(cap_word.searchString("More than Iron, more than Lead, more than Gold I need Electricity")) - - # the sum() builtin can be used to merge results into a single ParseResults object - print(sum(cap_word.searchString("More than Iron, more than Lead, more than Gold I need Electricity"))) - prints:: - [['More'], ['Iron'], ['Lead'], ['Gold'], ['I'], ['Electricity']] - ['More', 'Iron', 'Lead', 'Gold', 'I', 'Electricity'] - """ - try: - return ParseResults([ t for t,s,e in self.scanString( instring, maxMatches ) ]) - except ParseBaseException as exc: - if ParserElement.verbose_stacktrace: - raise - else: - # catch and re-raise exception from here, clears out pyparsing internal stack trace - raise exc - - def split(self, instring, maxsplit=_MAX_INT, includeSeparators=False): - """ - Generator method to split a string using the given expression as a separator. - May be called with optional C{maxsplit} argument, to limit the number of splits; - and the optional C{includeSeparators} argument (default=C{False}), if the separating - matching text should be included in the split results. - - Example:: - punc = oneOf(list(".,;:/-!?")) - print(list(punc.split("This, this?, this sentence, is badly punctuated!"))) - prints:: - ['This', ' this', '', ' this sentence', ' is badly punctuated', ''] - """ - splits = 0 - last = 0 - for t,s,e in self.scanString(instring, maxMatches=maxsplit): - yield instring[last:s] - if includeSeparators: - yield t[0] - last = e - yield instring[last:] - - def __add__(self, other ): - """ - Implementation of + operator - returns C{L{And}}. Adding strings to a ParserElement - converts them to L{Literal}s by default. - - Example:: - greet = Word(alphas) + "," + Word(alphas) + "!" - hello = "Hello, World!" - print (hello, "->", greet.parseString(hello)) - Prints:: - Hello, World! 
-> ['Hello', ',', 'World', '!'] - """ - if isinstance( other, basestring ): - other = ParserElement._literalStringClass( other ) - if not isinstance( other, ParserElement ): - warnings.warn("Cannot combine element of type %s with ParserElement" % type(other), - SyntaxWarning, stacklevel=2) - return None - return And( [ self, other ] ) - - def __radd__(self, other ): - """ - Implementation of + operator when left operand is not a C{L{ParserElement}} - """ - if isinstance( other, basestring ): - other = ParserElement._literalStringClass( other ) - if not isinstance( other, ParserElement ): - warnings.warn("Cannot combine element of type %s with ParserElement" % type(other), - SyntaxWarning, stacklevel=2) - return None - return other + self - - def __sub__(self, other): - """ - Implementation of - operator, returns C{L{And}} with error stop - """ - if isinstance( other, basestring ): - other = ParserElement._literalStringClass( other ) - if not isinstance( other, ParserElement ): - warnings.warn("Cannot combine element of type %s with ParserElement" % type(other), - SyntaxWarning, stacklevel=2) - return None - return self + And._ErrorStop() + other - - def __rsub__(self, other ): - """ - Implementation of - operator when left operand is not a C{L{ParserElement}} - """ - if isinstance( other, basestring ): - other = ParserElement._literalStringClass( other ) - if not isinstance( other, ParserElement ): - warnings.warn("Cannot combine element of type %s with ParserElement" % type(other), - SyntaxWarning, stacklevel=2) - return None - return other - self - - def __mul__(self,other): - """ - Implementation of * operator, allows use of C{expr * 3} in place of - C{expr + expr + expr}. Expressions may also me multiplied by a 2-integer - tuple, similar to C{{min,max}} multipliers in regular expressions. Tuples - may also include C{None} as in: - - C{expr*(n,None)} or C{expr*(n,)} is equivalent - to C{expr*n + L{ZeroOrMore}(expr)} - (read as "at least n instances of C{expr}") - - C{expr*(None,n)} is equivalent to C{expr*(0,n)} - (read as "0 to n instances of C{expr}") - - C{expr*(None,None)} is equivalent to C{L{ZeroOrMore}(expr)} - - C{expr*(1,None)} is equivalent to C{L{OneOrMore}(expr)} - - Note that C{expr*(None,n)} does not raise an exception if - more than n exprs exist in the input stream; that is, - C{expr*(None,n)} does not enforce a maximum number of expr - occurrences. 
If this behavior is desired, then write - C{expr*(None,n) + ~expr} - """ - if isinstance(other,int): - minElements, optElements = other,0 - elif isinstance(other,tuple): - other = (other + (None, None))[:2] - if other[0] is None: - other = (0, other[1]) - if isinstance(other[0],int) and other[1] is None: - if other[0] == 0: - return ZeroOrMore(self) - if other[0] == 1: - return OneOrMore(self) - else: - return self*other[0] + ZeroOrMore(self) - elif isinstance(other[0],int) and isinstance(other[1],int): - minElements, optElements = other - optElements -= minElements - else: - raise TypeError("cannot multiply 'ParserElement' and ('%s','%s') objects", type(other[0]),type(other[1])) - else: - raise TypeError("cannot multiply 'ParserElement' and '%s' objects", type(other)) - - if minElements < 0: - raise ValueError("cannot multiply ParserElement by negative value") - if optElements < 0: - raise ValueError("second tuple value must be greater or equal to first tuple value") - if minElements == optElements == 0: - raise ValueError("cannot multiply ParserElement by 0 or (0,0)") - - if (optElements): - def makeOptionalList(n): - if n>1: - return Optional(self + makeOptionalList(n-1)) - else: - return Optional(self) - if minElements: - if minElements == 1: - ret = self + makeOptionalList(optElements) - else: - ret = And([self]*minElements) + makeOptionalList(optElements) - else: - ret = makeOptionalList(optElements) - else: - if minElements == 1: - ret = self - else: - ret = And([self]*minElements) - return ret - - def __rmul__(self, other): - return self.__mul__(other) - - def __or__(self, other ): - """ - Implementation of | operator - returns C{L{MatchFirst}} - """ - if isinstance( other, basestring ): - other = ParserElement._literalStringClass( other ) - if not isinstance( other, ParserElement ): - warnings.warn("Cannot combine element of type %s with ParserElement" % type(other), - SyntaxWarning, stacklevel=2) - return None - return MatchFirst( [ self, other ] ) - - def __ror__(self, other ): - """ - Implementation of | operator when left operand is not a C{L{ParserElement}} - """ - if isinstance( other, basestring ): - other = ParserElement._literalStringClass( other ) - if not isinstance( other, ParserElement ): - warnings.warn("Cannot combine element of type %s with ParserElement" % type(other), - SyntaxWarning, stacklevel=2) - return None - return other | self - - def __xor__(self, other ): - """ - Implementation of ^ operator - returns C{L{Or}} - """ - if isinstance( other, basestring ): - other = ParserElement._literalStringClass( other ) - if not isinstance( other, ParserElement ): - warnings.warn("Cannot combine element of type %s with ParserElement" % type(other), - SyntaxWarning, stacklevel=2) - return None - return Or( [ self, other ] ) - - def __rxor__(self, other ): - """ - Implementation of ^ operator when left operand is not a C{L{ParserElement}} - """ - if isinstance( other, basestring ): - other = ParserElement._literalStringClass( other ) - if not isinstance( other, ParserElement ): - warnings.warn("Cannot combine element of type %s with ParserElement" % type(other), - SyntaxWarning, stacklevel=2) - return None - return other ^ self - - def __and__(self, other ): - """ - Implementation of & operator - returns C{L{Each}} - """ - if isinstance( other, basestring ): - other = ParserElement._literalStringClass( other ) - if not isinstance( other, ParserElement ): - warnings.warn("Cannot combine element of type %s with ParserElement" % type(other), - SyntaxWarning, stacklevel=2) - 
return None - return Each( [ self, other ] ) - - def __rand__(self, other ): - """ - Implementation of & operator when left operand is not a C{L{ParserElement}} - """ - if isinstance( other, basestring ): - other = ParserElement._literalStringClass( other ) - if not isinstance( other, ParserElement ): - warnings.warn("Cannot combine element of type %s with ParserElement" % type(other), - SyntaxWarning, stacklevel=2) - return None - return other & self - - def __invert__( self ): - """ - Implementation of ~ operator - returns C{L{NotAny}} - """ - return NotAny( self ) - - def __call__(self, name=None): - """ - Shortcut for C{L{setResultsName}}, with C{listAllMatches=False}. - - If C{name} is given with a trailing C{'*'} character, then C{listAllMatches} will be - passed as C{True}. - - If C{name} is omitted, same as calling C{L{copy}}. - - Example:: - # these are equivalent - userdata = Word(alphas).setResultsName("name") + Word(nums+"-").setResultsName("socsecno") - userdata = Word(alphas)("name") + Word(nums+"-")("socsecno") - """ - if name is not None: - return self.setResultsName(name) - else: - return self.copy() - - def suppress( self ): - """ - Suppresses the output of this C{ParserElement}; useful to keep punctuation from - cluttering up returned output. - """ - return Suppress( self ) - - def leaveWhitespace( self ): - """ - Disables the skipping of whitespace before matching the characters in the - C{ParserElement}'s defined pattern. This is normally only used internally by - the pyparsing module, but may be needed in some whitespace-sensitive grammars. - """ - self.skipWhitespace = False - return self - - def setWhitespaceChars( self, chars ): - """ - Overrides the default whitespace chars - """ - self.skipWhitespace = True - self.whiteChars = chars - self.copyDefaultWhiteChars = False - return self - - def parseWithTabs( self ): - """ - Overrides default behavior to expand C{}s to spaces before parsing the input string. - Must be called before C{parseString} when the input grammar contains elements that - match C{} characters. - """ - self.keepTabs = True - return self - - def ignore( self, other ): - """ - Define expression to be ignored (e.g., comments) while doing pattern - matching; may be called repeatedly, to define multiple comment or other - ignorable patterns. - - Example:: - patt = OneOrMore(Word(alphas)) - patt.parseString('ablaj /* comment */ lskjd') # -> ['ablaj'] - - patt.ignore(cStyleComment) - patt.parseString('ablaj /* comment */ lskjd') # -> ['ablaj', 'lskjd'] - """ - if isinstance(other, basestring): - other = Suppress(other) - - if isinstance( other, Suppress ): - if other not in self.ignoreExprs: - self.ignoreExprs.append(other) - else: - self.ignoreExprs.append( Suppress( other.copy() ) ) - return self - - def setDebugActions( self, startAction, successAction, exceptionAction ): - """ - Enable display of debugging messages while doing pattern matching. - """ - self.debugActions = (startAction or _defaultStartDebugAction, - successAction or _defaultSuccessDebugAction, - exceptionAction or _defaultExceptionDebugAction) - self.debug = True - return self - - def setDebug( self, flag=True ): - """ - Enable display of debugging messages while doing pattern matching. - Set C{flag} to True to enable, False to disable. 
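A compact sketch (editorial illustration) of the operator overloads and C{suppress()} described above; all names used are this module's public API::

    from pyparsing import Word, alphas, nums, Literal

    integer = Word(nums)
    word    = Word(alphas)

    # '+' -> And, '|' -> MatchFirst, '~' -> NotAny, '*' -> repetition
    key_value = word + Literal(":").suppress() + (integer | word)
    print(key_value.parseString("size: 20"))      # -> ['size', '20']

    three_ints = integer * 3
    print(three_ints.parseString("1 2 3"))        # -> ['1', '2', '3']

    not_a_number = ~integer + word
    print(not_a_number.parseString("abc"))        # -> ['abc']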
- - Example:: - wd = Word(alphas).setName("alphaword") - integer = Word(nums).setName("numword") - term = wd | integer - - # turn on debugging for wd - wd.setDebug() - - OneOrMore(term).parseString("abc 123 xyz 890") - - prints:: - Match alphaword at loc 0(1,1) - Matched alphaword -> ['abc'] - Match alphaword at loc 3(1,4) - Exception raised:Expected alphaword (at char 4), (line:1, col:5) - Match alphaword at loc 7(1,8) - Matched alphaword -> ['xyz'] - Match alphaword at loc 11(1,12) - Exception raised:Expected alphaword (at char 12), (line:1, col:13) - Match alphaword at loc 15(1,16) - Exception raised:Expected alphaword (at char 15), (line:1, col:16) - - The output shown is that produced by the default debug actions - custom debug actions can be - specified using L{setDebugActions}. Prior to attempting - to match the C{wd} expression, the debugging message C{"Match at loc (,)"} - is shown. Then if the parse succeeds, a C{"Matched"} message is shown, or an C{"Exception raised"} - message is shown. Also note the use of L{setName} to assign a human-readable name to the expression, - which makes debugging and exception messages easier to understand - for instance, the default - name created for the C{Word} expression without calling C{setName} is C{"W:(ABCD...)"}. - """ - if flag: - self.setDebugActions( _defaultStartDebugAction, _defaultSuccessDebugAction, _defaultExceptionDebugAction ) - else: - self.debug = False - return self - - def __str__( self ): - return self.name - - def __repr__( self ): - return _ustr(self) - - def streamline( self ): - self.streamlined = True - self.strRepr = None - return self - - def checkRecursion( self, parseElementList ): - pass - - def validate( self, validateTrace=[] ): - """ - Check defined expressions for valid structure, check for infinite recursive definitions. - """ - self.checkRecursion( [] ) - - def parseFile( self, file_or_filename, parseAll=False ): - """ - Execute the parse expression on the given file or filename. - If a filename is specified (instead of a file object), - the entire file is opened, read, and closed before parsing. - """ - try: - file_contents = file_or_filename.read() - except AttributeError: - with open(file_or_filename, "r") as f: - file_contents = f.read() - try: - return self.parseString(file_contents, parseAll) - except ParseBaseException as exc: - if ParserElement.verbose_stacktrace: - raise - else: - # catch and re-raise exception from here, clears out pyparsing internal stack trace - raise exc - - def __eq__(self,other): - if isinstance(other, ParserElement): - return self is other or vars(self) == vars(other) - elif isinstance(other, basestring): - return self.matches(other) - else: - return super(ParserElement,self)==other - - def __ne__(self,other): - return not (self == other) - - def __hash__(self): - return hash(id(self)) - - def __req__(self,other): - return self == other - - def __rne__(self,other): - return not (self == other) - - def matches(self, testString, parseAll=True): - """ - Method for quick testing of a parser against a test string. Good for simple - inline microtests of sub expressions while building up larger parser. 
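A further micro-example (editorial sketch) of C{matches()}, highlighting the effect of the C{parseAll} flag::

    from pyparsing import Word, nums

    integer = Word(nums)
    print(integer.matches("100"))                      # True  - whole string parses
    print(integer.matches("100 200"))                  # False - parseAll defaults to True
    print(integer.matches("100 200", parseAll=False))  # True  - a leading match suffices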
- - Parameters: - - testString - to test against this expression for a match - - parseAll - (default=C{True}) - flag to pass to C{L{parseString}} when running tests - - Example:: - expr = Word(nums) - assert expr.matches("100") - """ - try: - self.parseString(_ustr(testString), parseAll=parseAll) - return True - except ParseBaseException: - return False - - def runTests(self, tests, parseAll=True, comment='#', fullDump=True, printResults=True, failureTests=False): - """ - Execute the parse expression on a series of test strings, showing each - test, the parsed results or where the parse failed. Quick and easy way to - run a parse expression against a list of sample strings. - - Parameters: - - tests - a list of separate test strings, or a multiline string of test strings - - parseAll - (default=C{True}) - flag to pass to C{L{parseString}} when running tests - - comment - (default=C{'#'}) - expression for indicating embedded comments in the test - string; pass None to disable comment filtering - - fullDump - (default=C{True}) - dump results as list followed by results names in nested outline; - if False, only dump nested list - - printResults - (default=C{True}) prints test output to stdout - - failureTests - (default=C{False}) indicates if these tests are expected to fail parsing - - Returns: a (success, results) tuple, where success indicates that all tests succeeded - (or failed if C{failureTests} is True), and the results contain a list of lines of each - test's output - - Example:: - number_expr = pyparsing_common.number.copy() - - result = number_expr.runTests(''' - # unsigned integer - 100 - # negative integer - -100 - # float with scientific notation - 6.02e23 - # integer with scientific notation - 1e-12 - ''') - print("Success" if result[0] else "Failed!") - - result = number_expr.runTests(''' - # stray character - 100Z - # missing leading digit before '.' - -.100 - # too many '.' - 3.14.159 - ''', failureTests=True) - print("Success" if result[0] else "Failed!") - prints:: - # unsigned integer - 100 - [100] - - # negative integer - -100 - [-100] - - # float with scientific notation - 6.02e23 - [6.02e+23] - - # integer with scientific notation - 1e-12 - [1e-12] - - Success - - # stray character - 100Z - ^ - FAIL: Expected end of text (at char 3), (line:1, col:4) - - # missing leading digit before '.' - -.100 - ^ - FAIL: Expected {real number with scientific notation | real number | signed integer} (at char 0), (line:1, col:1) - - # too many '.' - 3.14.159 - ^ - FAIL: Expected end of text (at char 4), (line:1, col:5) - - Success - - Each test string must be on a single line. If you want to test a string that spans multiple - lines, create a test like this:: - - expr.runTest(r"this is a test\\n of strings that spans \\n 3 lines") - - (Note that this is a raw string literal, you must include the leading 'r'.) 
- """ - if isinstance(tests, basestring): - tests = list(map(str.strip, tests.rstrip().splitlines())) - if isinstance(comment, basestring): - comment = Literal(comment) - allResults = [] - comments = [] - success = True - for t in tests: - if comment is not None and comment.matches(t, False) or comments and not t: - comments.append(t) - continue - if not t: - continue - out = ['\n'.join(comments), t] - comments = [] - try: - t = t.replace(r'\n','\n') - result = self.parseString(t, parseAll=parseAll) - out.append(result.dump(full=fullDump)) - success = success and not failureTests - except ParseBaseException as pe: - fatal = "(FATAL)" if isinstance(pe, ParseFatalException) else "" - if '\n' in t: - out.append(line(pe.loc, t)) - out.append(' '*(col(pe.loc,t)-1) + '^' + fatal) - else: - out.append(' '*pe.loc + '^' + fatal) - out.append("FAIL: " + str(pe)) - success = success and failureTests - result = pe - except Exception as exc: - out.append("FAIL-EXCEPTION: " + str(exc)) - success = success and failureTests - result = exc - - if printResults: - if fullDump: - out.append('') - print('\n'.join(out)) - - allResults.append((t, result)) - - return success, allResults - - -class Token(ParserElement): - """ - Abstract C{ParserElement} subclass, for defining atomic matching patterns. - """ - def __init__( self ): - super(Token,self).__init__( savelist=False ) - - -class Empty(Token): - """ - An empty token, will always match. - """ - def __init__( self ): - super(Empty,self).__init__() - self.name = "Empty" - self.mayReturnEmpty = True - self.mayIndexError = False - - -class NoMatch(Token): - """ - A token that will never match. - """ - def __init__( self ): - super(NoMatch,self).__init__() - self.name = "NoMatch" - self.mayReturnEmpty = True - self.mayIndexError = False - self.errmsg = "Unmatchable token" - - def parseImpl( self, instring, loc, doActions=True ): - raise ParseException(instring, loc, self.errmsg, self) - - -class Literal(Token): - """ - Token to exactly match a specified string. - - Example:: - Literal('blah').parseString('blah') # -> ['blah'] - Literal('blah').parseString('blahfooblah') # -> ['blah'] - Literal('blah').parseString('bla') # -> Exception: Expected "blah" - - For case-insensitive matching, use L{CaselessLiteral}. - - For keyword matching (force word break before and after the matched string), - use L{Keyword} or L{CaselessKeyword}. 
- """ - def __init__( self, matchString ): - super(Literal,self).__init__() - self.match = matchString - self.matchLen = len(matchString) - try: - self.firstMatchChar = matchString[0] - except IndexError: - warnings.warn("null string passed to Literal; use Empty() instead", - SyntaxWarning, stacklevel=2) - self.__class__ = Empty - self.name = '"%s"' % _ustr(self.match) - self.errmsg = "Expected " + self.name - self.mayReturnEmpty = False - self.mayIndexError = False - - # Performance tuning: this routine gets called a *lot* - # if this is a single character match string and the first character matches, - # short-circuit as quickly as possible, and avoid calling startswith - #~ @profile - def parseImpl( self, instring, loc, doActions=True ): - if (instring[loc] == self.firstMatchChar and - (self.matchLen==1 or instring.startswith(self.match,loc)) ): - return loc+self.matchLen, self.match - raise ParseException(instring, loc, self.errmsg, self) -_L = Literal -ParserElement._literalStringClass = Literal - -class Keyword(Token): - """ - Token to exactly match a specified string as a keyword, that is, it must be - immediately followed by a non-keyword character. Compare with C{L{Literal}}: - - C{Literal("if")} will match the leading C{'if'} in C{'ifAndOnlyIf'}. - - C{Keyword("if")} will not; it will only match the leading C{'if'} in C{'if x=1'}, or C{'if(y==2)'} - Accepts two optional constructor arguments in addition to the keyword string: - - C{identChars} is a string of characters that would be valid identifier characters, - defaulting to all alphanumerics + "_" and "$" - - C{caseless} allows case-insensitive matching, default is C{False}. - - Example:: - Keyword("start").parseString("start") # -> ['start'] - Keyword("start").parseString("starting") # -> Exception - - For case-insensitive matching, use L{CaselessKeyword}. 
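A runnable contrast (editorial sketch) of C{Literal} versus C{Keyword}, using C{matches} with C{parseAll=False} so that only the word-boundary behavior differs::

    from pyparsing import Literal, Keyword

    print(Literal("if").matches("ifAndOnlyIf", parseAll=False))   # True  - a prefix match is enough
    print(Keyword("if").matches("ifAndOnlyIf", parseAll=False))   # False - 'A' continues the identifier
    print(Keyword("if").matches("if (x == 1)", parseAll=False))   # True  - '(' ends the keyword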
- """ - DEFAULT_KEYWORD_CHARS = alphanums+"_$" - - def __init__( self, matchString, identChars=None, caseless=False ): - super(Keyword,self).__init__() - if identChars is None: - identChars = Keyword.DEFAULT_KEYWORD_CHARS - self.match = matchString - self.matchLen = len(matchString) - try: - self.firstMatchChar = matchString[0] - except IndexError: - warnings.warn("null string passed to Keyword; use Empty() instead", - SyntaxWarning, stacklevel=2) - self.name = '"%s"' % self.match - self.errmsg = "Expected " + self.name - self.mayReturnEmpty = False - self.mayIndexError = False - self.caseless = caseless - if caseless: - self.caselessmatch = matchString.upper() - identChars = identChars.upper() - self.identChars = set(identChars) - - def parseImpl( self, instring, loc, doActions=True ): - if self.caseless: - if ( (instring[ loc:loc+self.matchLen ].upper() == self.caselessmatch) and - (loc >= len(instring)-self.matchLen or instring[loc+self.matchLen].upper() not in self.identChars) and - (loc == 0 or instring[loc-1].upper() not in self.identChars) ): - return loc+self.matchLen, self.match - else: - if (instring[loc] == self.firstMatchChar and - (self.matchLen==1 or instring.startswith(self.match,loc)) and - (loc >= len(instring)-self.matchLen or instring[loc+self.matchLen] not in self.identChars) and - (loc == 0 or instring[loc-1] not in self.identChars) ): - return loc+self.matchLen, self.match - raise ParseException(instring, loc, self.errmsg, self) - - def copy(self): - c = super(Keyword,self).copy() - c.identChars = Keyword.DEFAULT_KEYWORD_CHARS - return c - - @staticmethod - def setDefaultKeywordChars( chars ): - """Overrides the default Keyword chars - """ - Keyword.DEFAULT_KEYWORD_CHARS = chars - -class CaselessLiteral(Literal): - """ - Token to match a specified string, ignoring case of letters. - Note: the matched results will always be in the case of the given - match string, NOT the case of the input text. - - Example:: - OneOrMore(CaselessLiteral("CMD")).parseString("cmd CMD Cmd10") # -> ['CMD', 'CMD', 'CMD'] - - (Contrast with example for L{CaselessKeyword}.) - """ - def __init__( self, matchString ): - super(CaselessLiteral,self).__init__( matchString.upper() ) - # Preserve the defining literal. - self.returnString = matchString - self.name = "'%s'" % self.returnString - self.errmsg = "Expected " + self.name - - def parseImpl( self, instring, loc, doActions=True ): - if instring[ loc:loc+self.matchLen ].upper() == self.match: - return loc+self.matchLen, self.returnString - raise ParseException(instring, loc, self.errmsg, self) - -class CaselessKeyword(Keyword): - """ - Caseless version of L{Keyword}. - - Example:: - OneOrMore(CaselessKeyword("CMD")).parseString("cmd CMD Cmd10") # -> ['CMD', 'CMD'] - - (Contrast with example for L{CaselessLiteral}.) - """ - def __init__( self, matchString, identChars=None ): - super(CaselessKeyword,self).__init__( matchString, identChars, caseless=True ) - - def parseImpl( self, instring, loc, doActions=True ): - if ( (instring[ loc:loc+self.matchLen ].upper() == self.caselessmatch) and - (loc >= len(instring)-self.matchLen or instring[loc+self.matchLen].upper() not in self.identChars) ): - return loc+self.matchLen, self.match - raise ParseException(instring, loc, self.errmsg, self) - -class CloseMatch(Token): - """ - A variation on L{Literal} which matches "close" matches, that is, - strings with at most 'n' mismatching characters. 
C{CloseMatch} takes parameters: - - C{match_string} - string to be matched - - C{maxMismatches} - (C{default=1}) maximum number of mismatches allowed to count as a match - - The results from a successful parse will contain the matched text from the input string and the following named results: - - C{mismatches} - a list of the positions within the match_string where mismatches were found - - C{original} - the original match_string used to compare against the input string - - If C{mismatches} is an empty list, then the match was an exact match. - - Example:: - patt = CloseMatch("ATCATCGAATGGA") - patt.parseString("ATCATCGAAXGGA") # -> (['ATCATCGAAXGGA'], {'mismatches': [[9]], 'original': ['ATCATCGAATGGA']}) - patt.parseString("ATCAXCGAAXGGA") # -> Exception: Expected 'ATCATCGAATGGA' (with up to 1 mismatches) (at char 0), (line:1, col:1) - - # exact match - patt.parseString("ATCATCGAATGGA") # -> (['ATCATCGAATGGA'], {'mismatches': [[]], 'original': ['ATCATCGAATGGA']}) - - # close match allowing up to 2 mismatches - patt = CloseMatch("ATCATCGAATGGA", maxMismatches=2) - patt.parseString("ATCAXCGAAXGGA") # -> (['ATCAXCGAAXGGA'], {'mismatches': [[4, 9]], 'original': ['ATCATCGAATGGA']}) - """ - def __init__(self, match_string, maxMismatches=1): - super(CloseMatch,self).__init__() - self.name = match_string - self.match_string = match_string - self.maxMismatches = maxMismatches - self.errmsg = "Expected %r (with up to %d mismatches)" % (self.match_string, self.maxMismatches) - self.mayIndexError = False - self.mayReturnEmpty = False - - def parseImpl( self, instring, loc, doActions=True ): - start = loc - instrlen = len(instring) - maxloc = start + len(self.match_string) - - if maxloc <= instrlen: - match_string = self.match_string - match_stringloc = 0 - mismatches = [] - maxMismatches = self.maxMismatches - - for match_stringloc,s_m in enumerate(zip(instring[loc:maxloc], self.match_string)): - src,mat = s_m - if src != mat: - mismatches.append(match_stringloc) - if len(mismatches) > maxMismatches: - break - else: - loc = match_stringloc + 1 - results = ParseResults([instring[start:loc]]) - results['original'] = self.match_string - results['mismatches'] = mismatches - return loc, results - - raise ParseException(instring, loc, self.errmsg, self) - - -class Word(Token): - """ - Token for matching words composed of allowed character sets. - Defined with string containing all allowed initial characters, - an optional string containing allowed body characters (if omitted, - defaults to the initial character set), and an optional minimum, - maximum, and/or exact length. The default value for C{min} is 1 (a - minimum value < 1 is not valid); the default values for C{max} and C{exact} - are 0, meaning no maximum or exact length restriction. An optional - C{excludeChars} parameter can list characters that might be found in - the input C{bodyChars} string; useful to define a word of all printables - except for one or two characters, for instance. - - L{srange} is useful for defining custom character set strings for defining - C{Word} expressions, using range notation from regular expression character sets. - - A common mistake is to use C{Word} to match a specific literal string, as in - C{Word("Address")}. Remember that C{Word} uses the string argument to define - I{sets} of matchable characters. This expression would match "Add", "AAA", - "dAred", or any other word made up of the characters 'A', 'd', 'r', 'e', and 's'. - To match an exact literal string, use L{Literal} or L{Keyword}. 
- - pyparsing includes helper strings for building Words: - - L{alphas} - - L{nums} - - L{alphanums} - - L{hexnums} - - L{alphas8bit} (alphabetic characters in ASCII range 128-255 - accented, tilded, umlauted, etc.) - - L{punc8bit} (non-alphabetic characters in ASCII range 128-255 - currency, symbols, superscripts, diacriticals, etc.) - - L{printables} (any non-whitespace character) - - Example:: - # a word composed of digits - integer = Word(nums) # equivalent to Word("0123456789") or Word(srange("0-9")) - - # a word with a leading capital, and zero or more lowercase - capital_word = Word(alphas.upper(), alphas.lower()) - - # hostnames are alphanumeric, with leading alpha, and '-' - hostname = Word(alphas, alphanums+'-') - - # roman numeral (not a strict parser, accepts invalid mix of characters) - roman = Word("IVXLCDM") - - # any string of non-whitespace characters, except for ',' - csv_value = Word(printables, excludeChars=",") - """ - def __init__( self, initChars, bodyChars=None, min=1, max=0, exact=0, asKeyword=False, excludeChars=None ): - super(Word,self).__init__() - if excludeChars: - initChars = ''.join(c for c in initChars if c not in excludeChars) - if bodyChars: - bodyChars = ''.join(c for c in bodyChars if c not in excludeChars) - self.initCharsOrig = initChars - self.initChars = set(initChars) - if bodyChars : - self.bodyCharsOrig = bodyChars - self.bodyChars = set(bodyChars) - else: - self.bodyCharsOrig = initChars - self.bodyChars = set(initChars) - - self.maxSpecified = max > 0 - - if min < 1: - raise ValueError("cannot specify a minimum length < 1; use Optional(Word()) if zero-length word is permitted") - - self.minLen = min - - if max > 0: - self.maxLen = max - else: - self.maxLen = _MAX_INT - - if exact > 0: - self.maxLen = exact - self.minLen = exact - - self.name = _ustr(self) - self.errmsg = "Expected " + self.name - self.mayIndexError = False - self.asKeyword = asKeyword - - if ' ' not in self.initCharsOrig+self.bodyCharsOrig and (min==1 and max==0 and exact==0): - if self.bodyCharsOrig == self.initCharsOrig: - self.reString = "[%s]+" % _escapeRegexRangeChars(self.initCharsOrig) - elif len(self.initCharsOrig) == 1: - self.reString = "%s[%s]*" % \ - (re.escape(self.initCharsOrig), - _escapeRegexRangeChars(self.bodyCharsOrig),) - else: - self.reString = "[%s][%s]*" % \ - (_escapeRegexRangeChars(self.initCharsOrig), - _escapeRegexRangeChars(self.bodyCharsOrig),) - if self.asKeyword: - self.reString = r"\b"+self.reString+r"\b" - try: - self.re = re.compile( self.reString ) - except Exception: - self.re = None - - def parseImpl( self, instring, loc, doActions=True ): - if self.re: - result = self.re.match(instring,loc) - if not result: - raise ParseException(instring, loc, self.errmsg, self) - - loc = result.end() - return loc, result.group() - - if not(instring[ loc ] in self.initChars): - raise ParseException(instring, loc, self.errmsg, self) - - start = loc - loc += 1 - instrlen = len(instring) - bodychars = self.bodyChars - maxloc = start + self.maxLen - maxloc = min( maxloc, instrlen ) - while loc < maxloc and instring[loc] in bodychars: - loc += 1 - - throwException = False - if loc - start < self.minLen: - throwException = True - if self.maxSpecified and loc < instrlen and instring[loc] in bodychars: - throwException = True - if self.asKeyword: - if (start>0 and instring[start-1] in bodychars) or (loc4: - return s[:4]+"..." 
- else: - return s - - if ( self.initCharsOrig != self.bodyCharsOrig ): - self.strRepr = "W:(%s,%s)" % ( charsAsStr(self.initCharsOrig), charsAsStr(self.bodyCharsOrig) ) - else: - self.strRepr = "W:(%s)" % charsAsStr(self.initCharsOrig) - - return self.strRepr - - -class Regex(Token): - r""" - Token for matching strings that match a given regular expression. - Defined with string specifying the regular expression in a form recognized by the inbuilt Python re module. - If the given regex contains named groups (defined using C{(?P...)}), these will be preserved as - named parse results. - - Example:: - realnum = Regex(r"[+-]?\d+\.\d*") - date = Regex(r'(?P\d{4})-(?P\d\d?)-(?P\d\d?)') - # ref: http://stackoverflow.com/questions/267399/how-do-you-match-only-valid-roman-numerals-with-a-regular-expression - roman = Regex(r"M{0,4}(CM|CD|D?C{0,3})(XC|XL|L?X{0,3})(IX|IV|V?I{0,3})") - """ - compiledREtype = type(re.compile("[A-Z]")) - def __init__( self, pattern, flags=0): - """The parameters C{pattern} and C{flags} are passed to the C{re.compile()} function as-is. See the Python C{re} module for an explanation of the acceptable patterns and flags.""" - super(Regex,self).__init__() - - if isinstance(pattern, basestring): - if not pattern: - warnings.warn("null string passed to Regex; use Empty() instead", - SyntaxWarning, stacklevel=2) - - self.pattern = pattern - self.flags = flags - - try: - self.re = re.compile(self.pattern, self.flags) - self.reString = self.pattern - except sre_constants.error: - warnings.warn("invalid pattern (%s) passed to Regex" % pattern, - SyntaxWarning, stacklevel=2) - raise - - elif isinstance(pattern, Regex.compiledREtype): - self.re = pattern - self.pattern = \ - self.reString = str(pattern) - self.flags = flags - - else: - raise ValueError("Regex may only be constructed with a string or a compiled RE object") - - self.name = _ustr(self) - self.errmsg = "Expected " + self.name - self.mayIndexError = False - self.mayReturnEmpty = True - - def parseImpl( self, instring, loc, doActions=True ): - result = self.re.match(instring,loc) - if not result: - raise ParseException(instring, loc, self.errmsg, self) - - loc = result.end() - d = result.groupdict() - ret = ParseResults(result.group()) - if d: - for k in d: - ret[k] = d[k] - return loc,ret - - def __str__( self ): - try: - return super(Regex,self).__str__() - except Exception: - pass - - if self.strRepr is None: - self.strRepr = "Re:(%s)" % repr(self.pattern) - - return self.strRepr - - -class QuotedString(Token): - r""" - Token for matching strings that are delimited by quoting characters. - - Defined with the following parameters: - - quoteChar - string of one or more characters defining the quote delimiting string - - escChar - character to escape quotes, typically backslash (default=C{None}) - - escQuote - special quote sequence to escape an embedded quote string (such as SQL's "" to escape an embedded ") (default=C{None}) - - multiline - boolean indicating whether quotes can span multiple lines (default=C{False}) - - unquoteResults - boolean indicating whether the matched text should be unquoted (default=C{True}) - - endQuoteChar - string of one or more characters defining the end of the quote delimited string (default=C{None} => same as quoteChar) - - convertWhitespaceEscapes - convert escaped whitespace (C{'\t'}, C{'\n'}, etc.) 
to actual whitespace (default=C{True}) - - Example:: - qs = QuotedString('"') - print(qs.searchString('lsjdf "This is the quote" sldjf')) - complex_qs = QuotedString('{{', endQuoteChar='}}') - print(complex_qs.searchString('lsjdf {{This is the "quote"}} sldjf')) - sql_qs = QuotedString('"', escQuote='""') - print(sql_qs.searchString('lsjdf "This is the quote with ""embedded"" quotes" sldjf')) - prints:: - [['This is the quote']] - [['This is the "quote"']] - [['This is the quote with "embedded" quotes']] - """ - def __init__( self, quoteChar, escChar=None, escQuote=None, multiline=False, unquoteResults=True, endQuoteChar=None, convertWhitespaceEscapes=True): - super(QuotedString,self).__init__() - - # remove white space from quote chars - wont work anyway - quoteChar = quoteChar.strip() - if not quoteChar: - warnings.warn("quoteChar cannot be the empty string",SyntaxWarning,stacklevel=2) - raise SyntaxError() - - if endQuoteChar is None: - endQuoteChar = quoteChar - else: - endQuoteChar = endQuoteChar.strip() - if not endQuoteChar: - warnings.warn("endQuoteChar cannot be the empty string",SyntaxWarning,stacklevel=2) - raise SyntaxError() - - self.quoteChar = quoteChar - self.quoteCharLen = len(quoteChar) - self.firstQuoteChar = quoteChar[0] - self.endQuoteChar = endQuoteChar - self.endQuoteCharLen = len(endQuoteChar) - self.escChar = escChar - self.escQuote = escQuote - self.unquoteResults = unquoteResults - self.convertWhitespaceEscapes = convertWhitespaceEscapes - - if multiline: - self.flags = re.MULTILINE | re.DOTALL - self.pattern = r'%s(?:[^%s%s]' % \ - ( re.escape(self.quoteChar), - _escapeRegexRangeChars(self.endQuoteChar[0]), - (escChar is not None and _escapeRegexRangeChars(escChar) or '') ) - else: - self.flags = 0 - self.pattern = r'%s(?:[^%s\n\r%s]' % \ - ( re.escape(self.quoteChar), - _escapeRegexRangeChars(self.endQuoteChar[0]), - (escChar is not None and _escapeRegexRangeChars(escChar) or '') ) - if len(self.endQuoteChar) > 1: - self.pattern += ( - '|(?:' + ')|(?:'.join("%s[^%s]" % (re.escape(self.endQuoteChar[:i]), - _escapeRegexRangeChars(self.endQuoteChar[i])) - for i in range(len(self.endQuoteChar)-1,0,-1)) + ')' - ) - if escQuote: - self.pattern += (r'|(?:%s)' % re.escape(escQuote)) - if escChar: - self.pattern += (r'|(?:%s.)' % re.escape(escChar)) - self.escCharReplacePattern = re.escape(self.escChar)+"(.)" - self.pattern += (r')*%s' % re.escape(self.endQuoteChar)) - - try: - self.re = re.compile(self.pattern, self.flags) - self.reString = self.pattern - except sre_constants.error: - warnings.warn("invalid pattern (%s) passed to Regex" % self.pattern, - SyntaxWarning, stacklevel=2) - raise - - self.name = _ustr(self) - self.errmsg = "Expected " + self.name - self.mayIndexError = False - self.mayReturnEmpty = True - - def parseImpl( self, instring, loc, doActions=True ): - result = instring[loc] == self.firstQuoteChar and self.re.match(instring,loc) or None - if not result: - raise ParseException(instring, loc, self.errmsg, self) - - loc = result.end() - ret = result.group() - - if self.unquoteResults: - - # strip off quotes - ret = ret[self.quoteCharLen:-self.endQuoteCharLen] - - if isinstance(ret,basestring): - # replace escaped whitespace - if '\\' in ret and self.convertWhitespaceEscapes: - ws_map = { - r'\t' : '\t', - r'\n' : '\n', - r'\f' : '\f', - r'\r' : '\r', - } - for wslit,wschar in ws_map.items(): - ret = ret.replace(wslit, wschar) - - # replace escaped characters - if self.escChar: - ret = re.sub(self.escCharReplacePattern, r"\g<1>", ret) - - # replace 
escaped quotes - if self.escQuote: - ret = ret.replace(self.escQuote, self.endQuoteChar) - - return loc, ret - - def __str__( self ): - try: - return super(QuotedString,self).__str__() - except Exception: - pass - - if self.strRepr is None: - self.strRepr = "quoted string, starting with %s ending with %s" % (self.quoteChar, self.endQuoteChar) - - return self.strRepr - - -class CharsNotIn(Token): - """ - Token for matching words composed of characters I{not} in a given set (will - include whitespace in matched characters if not listed in the provided exclusion set - see example). - Defined with string containing all disallowed characters, and an optional - minimum, maximum, and/or exact length. The default value for C{min} is 1 (a - minimum value < 1 is not valid); the default values for C{max} and C{exact} - are 0, meaning no maximum or exact length restriction. - - Example:: - # define a comma-separated-value as anything that is not a ',' - csv_value = CharsNotIn(',') - print(delimitedList(csv_value).parseString("dkls,lsdkjf,s12 34,@!#,213")) - prints:: - ['dkls', 'lsdkjf', 's12 34', '@!#', '213'] - """ - def __init__( self, notChars, min=1, max=0, exact=0 ): - super(CharsNotIn,self).__init__() - self.skipWhitespace = False - self.notChars = notChars - - if min < 1: - raise ValueError("cannot specify a minimum length < 1; use Optional(CharsNotIn()) if zero-length char group is permitted") - - self.minLen = min - - if max > 0: - self.maxLen = max - else: - self.maxLen = _MAX_INT - - if exact > 0: - self.maxLen = exact - self.minLen = exact - - self.name = _ustr(self) - self.errmsg = "Expected " + self.name - self.mayReturnEmpty = ( self.minLen == 0 ) - self.mayIndexError = False - - def parseImpl( self, instring, loc, doActions=True ): - if instring[loc] in self.notChars: - raise ParseException(instring, loc, self.errmsg, self) - - start = loc - loc += 1 - notchars = self.notChars - maxlen = min( start+self.maxLen, len(instring) ) - while loc < maxlen and \ - (instring[loc] not in notchars): - loc += 1 - - if loc - start < self.minLen: - raise ParseException(instring, loc, self.errmsg, self) - - return loc, instring[start:loc] - - def __str__( self ): - try: - return super(CharsNotIn, self).__str__() - except Exception: - pass - - if self.strRepr is None: - if len(self.notChars) > 4: - self.strRepr = "!W:(%s...)" % self.notChars[:4] - else: - self.strRepr = "!W:(%s)" % self.notChars - - return self.strRepr - -class White(Token): - """ - Special matching class for matching whitespace. Normally, whitespace is ignored - by pyparsing grammars. This class is included when some whitespace structures - are significant. Define with a string containing the whitespace characters to be - matched; default is C{" \\t\\r\\n"}. Also takes optional C{min}, C{max}, and C{exact} arguments, - as defined for the C{L{Word}} class. 
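An illustrative sketch (editorial addition) of matching significant whitespace with C{White}; calling C{leaveWhitespace()} keeps the enclosing expression from silently skipping the spaces before C{White} gets to see them::

    from pyparsing import Word, alphas, White

    indented = (White(" ", exact=4) + Word(alphas)).leaveWhitespace()
    print(indented.parseString("    name"))    # -> ['    ', 'name']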
- """ - whiteStrs = { - " " : "", - "\t": "", - "\n": "", - "\r": "", - "\f": "", - } - def __init__(self, ws=" \t\r\n", min=1, max=0, exact=0): - super(White,self).__init__() - self.matchWhite = ws - self.setWhitespaceChars( "".join(c for c in self.whiteChars if c not in self.matchWhite) ) - #~ self.leaveWhitespace() - self.name = ("".join(White.whiteStrs[c] for c in self.matchWhite)) - self.mayReturnEmpty = True - self.errmsg = "Expected " + self.name - - self.minLen = min - - if max > 0: - self.maxLen = max - else: - self.maxLen = _MAX_INT - - if exact > 0: - self.maxLen = exact - self.minLen = exact - - def parseImpl( self, instring, loc, doActions=True ): - if not(instring[ loc ] in self.matchWhite): - raise ParseException(instring, loc, self.errmsg, self) - start = loc - loc += 1 - maxloc = start + self.maxLen - maxloc = min( maxloc, len(instring) ) - while loc < maxloc and instring[loc] in self.matchWhite: - loc += 1 - - if loc - start < self.minLen: - raise ParseException(instring, loc, self.errmsg, self) - - return loc, instring[start:loc] - - -class _PositionToken(Token): - def __init__( self ): - super(_PositionToken,self).__init__() - self.name=self.__class__.__name__ - self.mayReturnEmpty = True - self.mayIndexError = False - -class GoToColumn(_PositionToken): - """ - Token to advance to a specific column of input text; useful for tabular report scraping. - """ - def __init__( self, colno ): - super(GoToColumn,self).__init__() - self.col = colno - - def preParse( self, instring, loc ): - if col(loc,instring) != self.col: - instrlen = len(instring) - if self.ignoreExprs: - loc = self._skipIgnorables( instring, loc ) - while loc < instrlen and instring[loc].isspace() and col( loc, instring ) != self.col : - loc += 1 - return loc - - def parseImpl( self, instring, loc, doActions=True ): - thiscol = col( loc, instring ) - if thiscol > self.col: - raise ParseException( instring, loc, "Text not in expected column", self ) - newloc = loc + self.col - thiscol - ret = instring[ loc: newloc ] - return newloc, ret - - -class LineStart(_PositionToken): - """ - Matches if current position is at the beginning of a line within the parse string - - Example:: - - test = '''\ - AAA this line - AAA and this line - AAA but not this one - B AAA and definitely not this one - ''' - - for t in (LineStart() + 'AAA' + restOfLine).searchString(test): - print(t) - - Prints:: - ['AAA', ' this line'] - ['AAA', ' and this line'] - - """ - def __init__( self ): - super(LineStart,self).__init__() - self.errmsg = "Expected start of line" - - def parseImpl( self, instring, loc, doActions=True ): - if col(loc, instring) == 1: - return loc, [] - raise ParseException(instring, loc, self.errmsg, self) - -class LineEnd(_PositionToken): - """ - Matches if current position is at the end of a line within the parse string - """ - def __init__( self ): - super(LineEnd,self).__init__() - self.setWhitespaceChars( ParserElement.DEFAULT_WHITE_CHARS.replace("\n","") ) - self.errmsg = "Expected end of line" - - def parseImpl( self, instring, loc, doActions=True ): - if loc len(instring): - return loc, [] - else: - raise ParseException(instring, loc, self.errmsg, self) - -class WordStart(_PositionToken): - """ - Matches if the current position is at the beginning of a Word, and - is not preceded by any character in a given set of C{wordChars} - (default=C{printables}). To emulate the C{\b} behavior of regular expressions, - use C{WordStart(alphanums)}. 
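A small whole-word matching sketch (editorial illustration; C{WordEnd}, defined just below, is the mirror-image class)::

    from pyparsing import Literal, WordStart, WordEnd, alphanums

    whole_cat = WordStart(alphanums) + Literal("cat") + WordEnd(alphanums)
    print(whole_cat.searchString("the cat sat on a concatenated bobcat"))
    # -> [['cat']]   ('concatenated' and 'bobcat' fail the word-boundary checks)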
C{WordStart} will also match at the beginning of - the string being parsed, or at the beginning of a line. - """ - def __init__(self, wordChars = printables): - super(WordStart,self).__init__() - self.wordChars = set(wordChars) - self.errmsg = "Not at the start of a word" - - def parseImpl(self, instring, loc, doActions=True ): - if loc != 0: - if (instring[loc-1] in self.wordChars or - instring[loc] not in self.wordChars): - raise ParseException(instring, loc, self.errmsg, self) - return loc, [] - -class WordEnd(_PositionToken): - """ - Matches if the current position is at the end of a Word, and - is not followed by any character in a given set of C{wordChars} - (default=C{printables}). To emulate the C{\b} behavior of regular expressions, - use C{WordEnd(alphanums)}. C{WordEnd} will also match at the end of - the string being parsed, or at the end of a line. - """ - def __init__(self, wordChars = printables): - super(WordEnd,self).__init__() - self.wordChars = set(wordChars) - self.skipWhitespace = False - self.errmsg = "Not at the end of a word" - - def parseImpl(self, instring, loc, doActions=True ): - instrlen = len(instring) - if instrlen>0 and loc maxExcLoc: - maxException = err - maxExcLoc = err.loc - except IndexError: - if len(instring) > maxExcLoc: - maxException = ParseException(instring,len(instring),e.errmsg,self) - maxExcLoc = len(instring) - else: - # save match among all matches, to retry longest to shortest - matches.append((loc2, e)) - - if matches: - matches.sort(key=lambda x: -x[0]) - for _,e in matches: - try: - return e._parse( instring, loc, doActions ) - except ParseException as err: - err.__traceback__ = None - if err.loc > maxExcLoc: - maxException = err - maxExcLoc = err.loc - - if maxException is not None: - maxException.msg = self.errmsg - raise maxException - else: - raise ParseException(instring, loc, "no defined alternatives to match", self) - - - def __ixor__(self, other ): - if isinstance( other, basestring ): - other = ParserElement._literalStringClass( other ) - return self.append( other ) #Or( [ self, other ] ) - - def __str__( self ): - if hasattr(self,"name"): - return self.name - - if self.strRepr is None: - self.strRepr = "{" + " ^ ".join(_ustr(e) for e in self.exprs) + "}" - - return self.strRepr - - def checkRecursion( self, parseElementList ): - subRecCheckList = parseElementList[:] + [ self ] - for e in self.exprs: - e.checkRecursion( subRecCheckList ) - - -class MatchFirst(ParseExpression): - """ - Requires that at least one C{ParseExpression} is found. - If two expressions match, the first one listed is the one that will match. - May be constructed using the C{'|'} operator. - - Example:: - # construct MatchFirst using '|' operator - - # watch the order of expressions to match - number = Word(nums) | Combine(Word(nums) + '.' + Word(nums)) - print(number.searchString("123 3.1416 789")) # Fail! -> [['123'], ['3'], ['1416'], ['789']] - - # put more selective expression first - number = Combine(Word(nums) + '.' 
+ Word(nums)) | Word(nums) - print(number.searchString("123 3.1416 789")) # Better -> [['123'], ['3.1416'], ['789']] - """ - def __init__( self, exprs, savelist = False ): - super(MatchFirst,self).__init__(exprs, savelist) - if self.exprs: - self.mayReturnEmpty = any(e.mayReturnEmpty for e in self.exprs) - else: - self.mayReturnEmpty = True - - def parseImpl( self, instring, loc, doActions=True ): - maxExcLoc = -1 - maxException = None - for e in self.exprs: - try: - ret = e._parse( instring, loc, doActions ) - return ret - except ParseException as err: - if err.loc > maxExcLoc: - maxException = err - maxExcLoc = err.loc - except IndexError: - if len(instring) > maxExcLoc: - maxException = ParseException(instring,len(instring),e.errmsg,self) - maxExcLoc = len(instring) - - # only got here if no expression matched, raise exception for match that made it the furthest - else: - if maxException is not None: - maxException.msg = self.errmsg - raise maxException - else: - raise ParseException(instring, loc, "no defined alternatives to match", self) - - def __ior__(self, other ): - if isinstance( other, basestring ): - other = ParserElement._literalStringClass( other ) - return self.append( other ) #MatchFirst( [ self, other ] ) - - def __str__( self ): - if hasattr(self,"name"): - return self.name - - if self.strRepr is None: - self.strRepr = "{" + " | ".join(_ustr(e) for e in self.exprs) + "}" - - return self.strRepr - - def checkRecursion( self, parseElementList ): - subRecCheckList = parseElementList[:] + [ self ] - for e in self.exprs: - e.checkRecursion( subRecCheckList ) - - -class Each(ParseExpression): - """ - Requires all given C{ParseExpression}s to be found, but in any order. - Expressions may be separated by whitespace. - May be constructed using the C{'&'} operator. 
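A side-by-side sketch (editorial illustration) of the two alternation forms introduced above: C{'|'} (C{MatchFirst}, first match wins) versus C{'^'} (C{Or}, longest match wins)::

    from pyparsing import Word, nums, Combine

    integer = Word(nums)
    real    = Combine(Word(nums) + '.' + Word(nums))

    first   = integer | real      # MatchFirst
    longest = integer ^ real      # Or

    print(first.parseString("3.1416"))     # -> ['3']      (integer matches first and stops)
    print(longest.parseString("3.1416"))   # -> ['3.1416'] (Or keeps the longer real-number match)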
- - Example:: - color = oneOf("RED ORANGE YELLOW GREEN BLUE PURPLE BLACK WHITE BROWN") - shape_type = oneOf("SQUARE CIRCLE TRIANGLE STAR HEXAGON OCTAGON") - integer = Word(nums) - shape_attr = "shape:" + shape_type("shape") - posn_attr = "posn:" + Group(integer("x") + ',' + integer("y"))("posn") - color_attr = "color:" + color("color") - size_attr = "size:" + integer("size") - - # use Each (using operator '&') to accept attributes in any order - # (shape and posn are required, color and size are optional) - shape_spec = shape_attr & posn_attr & Optional(color_attr) & Optional(size_attr) - - shape_spec.runTests(''' - shape: SQUARE color: BLACK posn: 100, 120 - shape: CIRCLE size: 50 color: BLUE posn: 50,80 - color:GREEN size:20 shape:TRIANGLE posn:20,40 - ''' - ) - prints:: - shape: SQUARE color: BLACK posn: 100, 120 - ['shape:', 'SQUARE', 'color:', 'BLACK', 'posn:', ['100', ',', '120']] - - color: BLACK - - posn: ['100', ',', '120'] - - x: 100 - - y: 120 - - shape: SQUARE - - - shape: CIRCLE size: 50 color: BLUE posn: 50,80 - ['shape:', 'CIRCLE', 'size:', '50', 'color:', 'BLUE', 'posn:', ['50', ',', '80']] - - color: BLUE - - posn: ['50', ',', '80'] - - x: 50 - - y: 80 - - shape: CIRCLE - - size: 50 - - - color: GREEN size: 20 shape: TRIANGLE posn: 20,40 - ['color:', 'GREEN', 'size:', '20', 'shape:', 'TRIANGLE', 'posn:', ['20', ',', '40']] - - color: GREEN - - posn: ['20', ',', '40'] - - x: 20 - - y: 40 - - shape: TRIANGLE - - size: 20 - """ - def __init__( self, exprs, savelist = True ): - super(Each,self).__init__(exprs, savelist) - self.mayReturnEmpty = all(e.mayReturnEmpty for e in self.exprs) - self.skipWhitespace = True - self.initExprGroups = True - - def parseImpl( self, instring, loc, doActions=True ): - if self.initExprGroups: - self.opt1map = dict((id(e.expr),e) for e in self.exprs if isinstance(e,Optional)) - opt1 = [ e.expr for e in self.exprs if isinstance(e,Optional) ] - opt2 = [ e for e in self.exprs if e.mayReturnEmpty and not isinstance(e,Optional)] - self.optionals = opt1 + opt2 - self.multioptionals = [ e.expr for e in self.exprs if isinstance(e,ZeroOrMore) ] - self.multirequired = [ e.expr for e in self.exprs if isinstance(e,OneOrMore) ] - self.required = [ e for e in self.exprs if not isinstance(e,(Optional,ZeroOrMore,OneOrMore)) ] - self.required += self.multirequired - self.initExprGroups = False - tmpLoc = loc - tmpReqd = self.required[:] - tmpOpt = self.optionals[:] - matchOrder = [] - - keepMatching = True - while keepMatching: - tmpExprs = tmpReqd + tmpOpt + self.multioptionals + self.multirequired - failed = [] - for e in tmpExprs: - try: - tmpLoc = e.tryParse( instring, tmpLoc ) - except ParseException: - failed.append(e) - else: - matchOrder.append(self.opt1map.get(id(e),e)) - if e in tmpReqd: - tmpReqd.remove(e) - elif e in tmpOpt: - tmpOpt.remove(e) - if len(failed) == len(tmpExprs): - keepMatching = False - - if tmpReqd: - missing = ", ".join(_ustr(e) for e in tmpReqd) - raise ParseException(instring,loc,"Missing one or more required elements (%s)" % missing ) - - # add any unmatched Optionals, in case they have default values defined - matchOrder += [e for e in self.exprs if isinstance(e,Optional) and e.expr in tmpOpt] - - resultlist = [] - for e in matchOrder: - loc,results = e._parse(instring,loc,doActions) - resultlist.append(results) - - finalResults = sum(resultlist, ParseResults([])) - return loc, finalResults - - def __str__( self ): - if hasattr(self,"name"): - return self.name - - if self.strRepr is None: - self.strRepr = "{" + " & 
".join(_ustr(e) for e in self.exprs) + "}" - - return self.strRepr - - def checkRecursion( self, parseElementList ): - subRecCheckList = parseElementList[:] + [ self ] - for e in self.exprs: - e.checkRecursion( subRecCheckList ) - - -class ParseElementEnhance(ParserElement): - """ - Abstract subclass of C{ParserElement}, for combining and post-processing parsed tokens. - """ - def __init__( self, expr, savelist=False ): - super(ParseElementEnhance,self).__init__(savelist) - if isinstance( expr, basestring ): - if issubclass(ParserElement._literalStringClass, Token): - expr = ParserElement._literalStringClass(expr) - else: - expr = ParserElement._literalStringClass(Literal(expr)) - self.expr = expr - self.strRepr = None - if expr is not None: - self.mayIndexError = expr.mayIndexError - self.mayReturnEmpty = expr.mayReturnEmpty - self.setWhitespaceChars( expr.whiteChars ) - self.skipWhitespace = expr.skipWhitespace - self.saveAsList = expr.saveAsList - self.callPreparse = expr.callPreparse - self.ignoreExprs.extend(expr.ignoreExprs) - - def parseImpl( self, instring, loc, doActions=True ): - if self.expr is not None: - return self.expr._parse( instring, loc, doActions, callPreParse=False ) - else: - raise ParseException("",loc,self.errmsg,self) - - def leaveWhitespace( self ): - self.skipWhitespace = False - self.expr = self.expr.copy() - if self.expr is not None: - self.expr.leaveWhitespace() - return self - - def ignore( self, other ): - if isinstance( other, Suppress ): - if other not in self.ignoreExprs: - super( ParseElementEnhance, self).ignore( other ) - if self.expr is not None: - self.expr.ignore( self.ignoreExprs[-1] ) - else: - super( ParseElementEnhance, self).ignore( other ) - if self.expr is not None: - self.expr.ignore( self.ignoreExprs[-1] ) - return self - - def streamline( self ): - super(ParseElementEnhance,self).streamline() - if self.expr is not None: - self.expr.streamline() - return self - - def checkRecursion( self, parseElementList ): - if self in parseElementList: - raise RecursiveGrammarException( parseElementList+[self] ) - subRecCheckList = parseElementList[:] + [ self ] - if self.expr is not None: - self.expr.checkRecursion( subRecCheckList ) - - def validate( self, validateTrace=[] ): - tmp = validateTrace[:]+[self] - if self.expr is not None: - self.expr.validate(tmp) - self.checkRecursion( [] ) - - def __str__( self ): - try: - return super(ParseElementEnhance,self).__str__() - except Exception: - pass - - if self.strRepr is None and self.expr is not None: - self.strRepr = "%s:(%s)" % ( self.__class__.__name__, _ustr(self.expr) ) - return self.strRepr - - -class FollowedBy(ParseElementEnhance): - """ - Lookahead matching of the given parse expression. C{FollowedBy} - does I{not} advance the parsing position within the input string, it only - verifies that the specified parse expression matches at the current - position. C{FollowedBy} always returns a null token list. 
- - Example:: - # use FollowedBy to match a label only if it is followed by a ':' - data_word = Word(alphas) - label = data_word + FollowedBy(':') - attr_expr = Group(label + Suppress(':') + OneOrMore(data_word, stopOn=label).setParseAction(' '.join)) - - OneOrMore(attr_expr).parseString("shape: SQUARE color: BLACK posn: upper left").pprint() - prints:: - [['shape', 'SQUARE'], ['color', 'BLACK'], ['posn', 'upper left']] - """ - def __init__( self, expr ): - super(FollowedBy,self).__init__(expr) - self.mayReturnEmpty = True - - def parseImpl( self, instring, loc, doActions=True ): - self.expr.tryParse( instring, loc ) - return loc, [] - - -class NotAny(ParseElementEnhance): - """ - Lookahead to disallow matching with the given parse expression. C{NotAny} - does I{not} advance the parsing position within the input string, it only - verifies that the specified parse expression does I{not} match at the current - position. Also, C{NotAny} does I{not} skip over leading whitespace. C{NotAny} - always returns a null token list. May be constructed using the '~' operator. - - Example:: - - """ - def __init__( self, expr ): - super(NotAny,self).__init__(expr) - #~ self.leaveWhitespace() - self.skipWhitespace = False # do NOT use self.leaveWhitespace(), don't want to propagate to exprs - self.mayReturnEmpty = True - self.errmsg = "Found unwanted token, "+_ustr(self.expr) - - def parseImpl( self, instring, loc, doActions=True ): - if self.expr.canParseNext(instring, loc): - raise ParseException(instring, loc, self.errmsg, self) - return loc, [] - - def __str__( self ): - if hasattr(self,"name"): - return self.name - - if self.strRepr is None: - self.strRepr = "~{" + _ustr(self.expr) + "}" - - return self.strRepr - -class _MultipleMatch(ParseElementEnhance): - def __init__( self, expr, stopOn=None): - super(_MultipleMatch, self).__init__(expr) - self.saveAsList = True - ender = stopOn - if isinstance(ender, basestring): - ender = ParserElement._literalStringClass(ender) - self.not_ender = ~ender if ender is not None else None - - def parseImpl( self, instring, loc, doActions=True ): - self_expr_parse = self.expr._parse - self_skip_ignorables = self._skipIgnorables - check_ender = self.not_ender is not None - if check_ender: - try_not_ender = self.not_ender.tryParse - - # must be at least one (but first see if we are the stopOn sentinel; - # if so, fail) - if check_ender: - try_not_ender(instring, loc) - loc, tokens = self_expr_parse( instring, loc, doActions, callPreParse=False ) - try: - hasIgnoreExprs = (not not self.ignoreExprs) - while 1: - if check_ender: - try_not_ender(instring, loc) - if hasIgnoreExprs: - preloc = self_skip_ignorables( instring, loc ) - else: - preloc = loc - loc, tmptokens = self_expr_parse( instring, preloc, doActions ) - if tmptokens or tmptokens.haskeys(): - tokens += tmptokens - except (ParseException,IndexError): - pass - - return loc, tokens - -class OneOrMore(_MultipleMatch): - """ - Repetition of one or more of the given expression. - - Parameters: - - expr - expression that must match one or more times - - stopOn - (default=C{None}) - expression for a terminating sentinel - (only required if the sentinel would ordinarily match the repetition - expression) - - Example:: - data_word = Word(alphas) - label = data_word + FollowedBy(':') - attr_expr = Group(label + Suppress(':') + OneOrMore(data_word).setParseAction(' '.join)) - - text = "shape: SQUARE posn: upper left color: BLACK" - OneOrMore(attr_expr).parseString(text).pprint() # Fail! 
read 'color' as data instead of next label -> [['shape', 'SQUARE color']] - - # use stopOn attribute for OneOrMore to avoid reading label string as part of the data - attr_expr = Group(label + Suppress(':') + OneOrMore(data_word, stopOn=label).setParseAction(' '.join)) - OneOrMore(attr_expr).parseString(text).pprint() # Better -> [['shape', 'SQUARE'], ['posn', 'upper left'], ['color', 'BLACK']] - - # could also be written as - (attr_expr * (1,)).parseString(text).pprint() - """ - - def __str__( self ): - if hasattr(self,"name"): - return self.name - - if self.strRepr is None: - self.strRepr = "{" + _ustr(self.expr) + "}..." - - return self.strRepr - -class ZeroOrMore(_MultipleMatch): - """ - Optional repetition of zero or more of the given expression. - - Parameters: - - expr - expression that must match zero or more times - - stopOn - (default=C{None}) - expression for a terminating sentinel - (only required if the sentinel would ordinarily match the repetition - expression) - - Example: similar to L{OneOrMore} - """ - def __init__( self, expr, stopOn=None): - super(ZeroOrMore,self).__init__(expr, stopOn=stopOn) - self.mayReturnEmpty = True - - def parseImpl( self, instring, loc, doActions=True ): - try: - return super(ZeroOrMore, self).parseImpl(instring, loc, doActions) - except (ParseException,IndexError): - return loc, [] - - def __str__( self ): - if hasattr(self,"name"): - return self.name - - if self.strRepr is None: - self.strRepr = "[" + _ustr(self.expr) + "]..." - - return self.strRepr - -class _NullToken(object): - def __bool__(self): - return False - __nonzero__ = __bool__ - def __str__(self): - return "" - -_optionalNotMatched = _NullToken() -class Optional(ParseElementEnhance): - """ - Optional matching of the given expression. - - Parameters: - - expr - expression that must match zero or more times - - default (optional) - value to be returned if the optional expression is not found. - - Example:: - # US postal code can be a 5-digit zip, plus optional 4-digit qualifier - zip = Combine(Word(nums, exact=5) + Optional('-' + Word(nums, exact=4))) - zip.runTests(''' - # traditional ZIP code - 12345 - - # ZIP+4 form - 12101-0001 - - # invalid ZIP - 98765- - ''') - prints:: - # traditional ZIP code - 12345 - ['12345'] - - # ZIP+4 form - 12101-0001 - ['12101-0001'] - - # invalid ZIP - 98765- - ^ - FAIL: Expected end of text (at char 5), (line:1, col:6) - """ - def __init__( self, expr, default=_optionalNotMatched ): - super(Optional,self).__init__( expr, savelist=False ) - self.saveAsList = self.expr.saveAsList - self.defaultValue = default - self.mayReturnEmpty = True - - def parseImpl( self, instring, loc, doActions=True ): - try: - loc, tokens = self.expr._parse( instring, loc, doActions, callPreParse=False ) - except (ParseException,IndexError): - if self.defaultValue is not _optionalNotMatched: - if self.expr.resultsName: - tokens = ParseResults([ self.defaultValue ]) - tokens[self.expr.resultsName] = self.defaultValue - else: - tokens = [ self.defaultValue ] - else: - tokens = [] - return loc, tokens - - def __str__( self ): - if hasattr(self,"name"): - return self.name - - if self.strRepr is None: - self.strRepr = "[" + _ustr(self.expr) + "]" - - return self.strRepr - -class SkipTo(ParseElementEnhance): - """ - Token for skipping over all undefined text until the matched expression is found. 
- - Parameters: - - expr - target expression marking the end of the data to be skipped - - include - (default=C{False}) if True, the target expression is also parsed - (the skipped text and target expression are returned as a 2-element list). - - ignore - (default=C{None}) used to define grammars (typically quoted strings and - comments) that might contain false matches to the target expression - - failOn - (default=C{None}) define expressions that are not allowed to be - included in the skipped test; if found before the target expression is found, - the SkipTo is not a match - - Example:: - report = ''' - Outstanding Issues Report - 1 Jan 2000 - - # | Severity | Description | Days Open - -----+----------+-------------------------------------------+----------- - 101 | Critical | Intermittent system crash | 6 - 94 | Cosmetic | Spelling error on Login ('log|n') | 14 - 79 | Minor | System slow when running too many reports | 47 - ''' - integer = Word(nums) - SEP = Suppress('|') - # use SkipTo to simply match everything up until the next SEP - # - ignore quoted strings, so that a '|' character inside a quoted string does not match - # - parse action will call token.strip() for each matched token, i.e., the description body - string_data = SkipTo(SEP, ignore=quotedString) - string_data.setParseAction(tokenMap(str.strip)) - ticket_expr = (integer("issue_num") + SEP - + string_data("sev") + SEP - + string_data("desc") + SEP - + integer("days_open")) - - for tkt in ticket_expr.searchString(report): - print tkt.dump() - prints:: - ['101', 'Critical', 'Intermittent system crash', '6'] - - days_open: 6 - - desc: Intermittent system crash - - issue_num: 101 - - sev: Critical - ['94', 'Cosmetic', "Spelling error on Login ('log|n')", '14'] - - days_open: 14 - - desc: Spelling error on Login ('log|n') - - issue_num: 94 - - sev: Cosmetic - ['79', 'Minor', 'System slow when running too many reports', '47'] - - days_open: 47 - - desc: System slow when running too many reports - - issue_num: 79 - - sev: Minor - """ - def __init__( self, other, include=False, ignore=None, failOn=None ): - super( SkipTo, self ).__init__( other ) - self.ignoreExpr = ignore - self.mayReturnEmpty = True - self.mayIndexError = False - self.includeMatch = include - self.asList = False - if isinstance(failOn, basestring): - self.failOn = ParserElement._literalStringClass(failOn) - else: - self.failOn = failOn - self.errmsg = "No match found for "+_ustr(self.expr) - - def parseImpl( self, instring, loc, doActions=True ): - startloc = loc - instrlen = len(instring) - expr = self.expr - expr_parse = self.expr._parse - self_failOn_canParseNext = self.failOn.canParseNext if self.failOn is not None else None - self_ignoreExpr_tryParse = self.ignoreExpr.tryParse if self.ignoreExpr is not None else None - - tmploc = loc - while tmploc <= instrlen: - if self_failOn_canParseNext is not None: - # break if failOn expression matches - if self_failOn_canParseNext(instring, tmploc): - break - - if self_ignoreExpr_tryParse is not None: - # advance past ignore expressions - while 1: - try: - tmploc = self_ignoreExpr_tryParse(instring, tmploc) - except ParseBaseException: - break - - try: - expr_parse(instring, tmploc, doActions=False, callPreParse=False) - except (ParseException, IndexError): - # no match, advance loc in string - tmploc += 1 - else: - # matched skipto expr, done - break - - else: - # ran off the end of the input string without matching skipto expr, fail - raise ParseException(instring, loc, self.errmsg, self) - - # build up return 
values - loc = tmploc - skiptext = instring[startloc:loc] - skipresult = ParseResults(skiptext) - - if self.includeMatch: - loc, mat = expr_parse(instring,loc,doActions,callPreParse=False) - skipresult += mat - - return loc, skipresult - -class Forward(ParseElementEnhance): - """ - Forward declaration of an expression to be defined later - - used for recursive grammars, such as algebraic infix notation. - When the expression is known, it is assigned to the C{Forward} variable using the '<<' operator. - - Note: take care when assigning to C{Forward} not to overlook precedence of operators. - Specifically, '|' has a lower precedence than '<<', so that:: - fwdExpr << a | b | c - will actually be evaluated as:: - (fwdExpr << a) | b | c - thereby leaving b and c out as parseable alternatives. It is recommended that you - explicitly group the values inserted into the C{Forward}:: - fwdExpr << (a | b | c) - Converting to use the '<<=' operator instead will avoid this problem. - - See L{ParseResults.pprint} for an example of a recursive parser created using - C{Forward}. - """ - def __init__( self, other=None ): - super(Forward,self).__init__( other, savelist=False ) - - def __lshift__( self, other ): - if isinstance( other, basestring ): - other = ParserElement._literalStringClass(other) - self.expr = other - self.strRepr = None - self.mayIndexError = self.expr.mayIndexError - self.mayReturnEmpty = self.expr.mayReturnEmpty - self.setWhitespaceChars( self.expr.whiteChars ) - self.skipWhitespace = self.expr.skipWhitespace - self.saveAsList = self.expr.saveAsList - self.ignoreExprs.extend(self.expr.ignoreExprs) - return self - - def __ilshift__(self, other): - return self << other - - def leaveWhitespace( self ): - self.skipWhitespace = False - return self - - def streamline( self ): - if not self.streamlined: - self.streamlined = True - if self.expr is not None: - self.expr.streamline() - return self - - def validate( self, validateTrace=[] ): - if self not in validateTrace: - tmp = validateTrace[:]+[self] - if self.expr is not None: - self.expr.validate(tmp) - self.checkRecursion([]) - - def __str__( self ): - if hasattr(self,"name"): - return self.name - return self.__class__.__name__ + ": ..." - - # stubbed out for now - creates awful memory and perf issues - self._revertClass = self.__class__ - self.__class__ = _ForwardNoRecurse - try: - if self.expr is not None: - retString = _ustr(self.expr) - else: - retString = "None" - finally: - self.__class__ = self._revertClass - return self.__class__.__name__ + ": " + retString - - def copy(self): - if self.expr is not None: - return super(Forward,self).copy() - else: - ret = Forward() - ret <<= self - return ret - -class _ForwardNoRecurse(Forward): - def __str__( self ): - return "..." - -class TokenConverter(ParseElementEnhance): - """ - Abstract subclass of C{ParseExpression}, for converting parsed results. - """ - def __init__( self, expr, savelist=False ): - super(TokenConverter,self).__init__( expr )#, savelist ) - self.saveAsList = False - -class Combine(TokenConverter): - """ - Converter to concatenate all matching tokens to a single string. - By default, the matching patterns must also be contiguous in the input string; - this can be disabled by specifying C{'adjacent=False'} in the constructor. - - Example:: - real = Word(nums) + '.' + Word(nums) - print(real.parseString('3.1416')) # -> ['3', '.', '1416'] - # will also erroneously match the following - print(real.parseString('3. 
1416')) # -> ['3', '.', '1416'] - - real = Combine(Word(nums) + '.' + Word(nums)) - print(real.parseString('3.1416')) # -> ['3.1416'] - # no match when there are internal spaces - print(real.parseString('3. 1416')) # -> Exception: Expected W:(0123...) - """ - def __init__( self, expr, joinString="", adjacent=True ): - super(Combine,self).__init__( expr ) - # suppress whitespace-stripping in contained parse expressions, but re-enable it on the Combine itself - if adjacent: - self.leaveWhitespace() - self.adjacent = adjacent - self.skipWhitespace = True - self.joinString = joinString - self.callPreparse = True - - def ignore( self, other ): - if self.adjacent: - ParserElement.ignore(self, other) - else: - super( Combine, self).ignore( other ) - return self - - def postParse( self, instring, loc, tokenlist ): - retToks = tokenlist.copy() - del retToks[:] - retToks += ParseResults([ "".join(tokenlist._asStringList(self.joinString)) ], modal=self.modalResults) - - if self.resultsName and retToks.haskeys(): - return [ retToks ] - else: - return retToks - -class Group(TokenConverter): - """ - Converter to return the matched tokens as a list - useful for returning tokens of C{L{ZeroOrMore}} and C{L{OneOrMore}} expressions. - - Example:: - ident = Word(alphas) - num = Word(nums) - term = ident | num - func = ident + Optional(delimitedList(term)) - print(func.parseString("fn a,b,100")) # -> ['fn', 'a', 'b', '100'] - - func = ident + Group(Optional(delimitedList(term))) - print(func.parseString("fn a,b,100")) # -> ['fn', ['a', 'b', '100']] - """ - def __init__( self, expr ): - super(Group,self).__init__( expr ) - self.saveAsList = True - - def postParse( self, instring, loc, tokenlist ): - return [ tokenlist ] - -class Dict(TokenConverter): - """ - Converter to return a repetitive expression as a list, but also as a dictionary. - Each element can also be referenced using the first token in the expression as its key. - Useful for tabular report scraping when the first column can be used as a item key. - - Example:: - data_word = Word(alphas) - label = data_word + FollowedBy(':') - attr_expr = Group(label + Suppress(':') + OneOrMore(data_word).setParseAction(' '.join)) - - text = "shape: SQUARE posn: upper left color: light blue texture: burlap" - attr_expr = (label + Suppress(':') + OneOrMore(data_word, stopOn=label).setParseAction(' '.join)) - - # print attributes as plain groups - print(OneOrMore(attr_expr).parseString(text).dump()) - - # instead of OneOrMore(expr), parse using Dict(OneOrMore(Group(expr))) - Dict will auto-assign names - result = Dict(OneOrMore(Group(attr_expr))).parseString(text) - print(result.dump()) - - # access named fields as dict entries, or output as dict - print(result['shape']) - print(result.asDict()) - prints:: - ['shape', 'SQUARE', 'posn', 'upper left', 'color', 'light blue', 'texture', 'burlap'] - - [['shape', 'SQUARE'], ['posn', 'upper left'], ['color', 'light blue'], ['texture', 'burlap']] - - color: light blue - - posn: upper left - - shape: SQUARE - - texture: burlap - SQUARE - {'color': 'light blue', 'posn': 'upper left', 'texture': 'burlap', 'shape': 'SQUARE'} - See more examples at L{ParseResults} of accessing fields by results name. 
- """ - def __init__( self, expr ): - super(Dict,self).__init__( expr ) - self.saveAsList = True - - def postParse( self, instring, loc, tokenlist ): - for i,tok in enumerate(tokenlist): - if len(tok) == 0: - continue - ikey = tok[0] - if isinstance(ikey,int): - ikey = _ustr(tok[0]).strip() - if len(tok)==1: - tokenlist[ikey] = _ParseResultsWithOffset("",i) - elif len(tok)==2 and not isinstance(tok[1],ParseResults): - tokenlist[ikey] = _ParseResultsWithOffset(tok[1],i) - else: - dictvalue = tok.copy() #ParseResults(i) - del dictvalue[0] - if len(dictvalue)!= 1 or (isinstance(dictvalue,ParseResults) and dictvalue.haskeys()): - tokenlist[ikey] = _ParseResultsWithOffset(dictvalue,i) - else: - tokenlist[ikey] = _ParseResultsWithOffset(dictvalue[0],i) - - if self.resultsName: - return [ tokenlist ] - else: - return tokenlist - - -class Suppress(TokenConverter): - """ - Converter for ignoring the results of a parsed expression. - - Example:: - source = "a, b, c,d" - wd = Word(alphas) - wd_list1 = wd + ZeroOrMore(',' + wd) - print(wd_list1.parseString(source)) - - # often, delimiters that are useful during parsing are just in the - # way afterward - use Suppress to keep them out of the parsed output - wd_list2 = wd + ZeroOrMore(Suppress(',') + wd) - print(wd_list2.parseString(source)) - prints:: - ['a', ',', 'b', ',', 'c', ',', 'd'] - ['a', 'b', 'c', 'd'] - (See also L{delimitedList}.) - """ - def postParse( self, instring, loc, tokenlist ): - return [] - - def suppress( self ): - return self - - -class OnlyOnce(object): - """ - Wrapper for parse actions, to ensure they are only called once. - """ - def __init__(self, methodCall): - self.callable = _trim_arity(methodCall) - self.called = False - def __call__(self,s,l,t): - if not self.called: - results = self.callable(s,l,t) - self.called = True - return results - raise ParseException(s,l,"") - def reset(self): - self.called = False - -def traceParseAction(f): - """ - Decorator for debugging parse actions. - - When the parse action is called, this decorator will print C{">> entering I{method-name}(line:I{current_source_line}, I{parse_location}, I{matched_tokens})".} - When the parse action completes, the decorator will print C{"<<"} followed by the returned value, or any exception that the parse action raised. - - Example:: - wd = Word(alphas) - - @traceParseAction - def remove_duplicate_chars(tokens): - return ''.join(sorted(set(''.join(tokens)))) - - wds = OneOrMore(wd).setParseAction(remove_duplicate_chars) - print(wds.parseString("slkdjs sld sldd sdlf sdljf")) - prints:: - >>entering remove_duplicate_chars(line: 'slkdjs sld sldd sdlf sdljf', 0, (['slkdjs', 'sld', 'sldd', 'sdlf', 'sdljf'], {})) - <3: - thisFunc = paArgs[0].__class__.__name__ + '.' + thisFunc - sys.stderr.write( ">>entering %s(line: '%s', %d, %r)\n" % (thisFunc,line(l,s),l,t) ) - try: - ret = f(*paArgs) - except Exception as exc: - sys.stderr.write( "< ['aa', 'bb', 'cc'] - delimitedList(Word(hexnums), delim=':', combine=True).parseString("AA:BB:CC:DD:EE") # -> ['AA:BB:CC:DD:EE'] - """ - dlName = _ustr(expr)+" ["+_ustr(delim)+" "+_ustr(expr)+"]..." - if combine: - return Combine( expr + ZeroOrMore( delim + expr ) ).setName(dlName) - else: - return ( expr + ZeroOrMore( Suppress( delim ) + expr ) ).setName(dlName) - -def countedArray( expr, intExpr=None ): - """ - Helper to define a counted list of expressions. - This helper defines a pattern of the form:: - integer expr expr expr... - where the leading integer tells how many expr expressions follow. 
- The matched tokens returns the array of expr tokens as a list - the leading count token is suppressed. - - If C{intExpr} is specified, it should be a pyparsing expression that produces an integer value. - - Example:: - countedArray(Word(alphas)).parseString('2 ab cd ef') # -> ['ab', 'cd'] - - # in this parser, the leading integer value is given in binary, - # '10' indicating that 2 values are in the array - binaryConstant = Word('01').setParseAction(lambda t: int(t[0], 2)) - countedArray(Word(alphas), intExpr=binaryConstant).parseString('10 ab cd ef') # -> ['ab', 'cd'] - """ - arrayExpr = Forward() - def countFieldParseAction(s,l,t): - n = t[0] - arrayExpr << (n and Group(And([expr]*n)) or Group(empty)) - return [] - if intExpr is None: - intExpr = Word(nums).setParseAction(lambda t:int(t[0])) - else: - intExpr = intExpr.copy() - intExpr.setName("arrayLen") - intExpr.addParseAction(countFieldParseAction, callDuringTry=True) - return ( intExpr + arrayExpr ).setName('(len) ' + _ustr(expr) + '...') - -def _flatten(L): - ret = [] - for i in L: - if isinstance(i,list): - ret.extend(_flatten(i)) - else: - ret.append(i) - return ret - -def matchPreviousLiteral(expr): - """ - Helper to define an expression that is indirectly defined from - the tokens matched in a previous expression, that is, it looks - for a 'repeat' of a previous expression. For example:: - first = Word(nums) - second = matchPreviousLiteral(first) - matchExpr = first + ":" + second - will match C{"1:1"}, but not C{"1:2"}. Because this matches a - previous literal, will also match the leading C{"1:1"} in C{"1:10"}. - If this is not desired, use C{matchPreviousExpr}. - Do I{not} use with packrat parsing enabled. - """ - rep = Forward() - def copyTokenToRepeater(s,l,t): - if t: - if len(t) == 1: - rep << t[0] - else: - # flatten t tokens - tflat = _flatten(t.asList()) - rep << And(Literal(tt) for tt in tflat) - else: - rep << Empty() - expr.addParseAction(copyTokenToRepeater, callDuringTry=True) - rep.setName('(prev) ' + _ustr(expr)) - return rep - -def matchPreviousExpr(expr): - """ - Helper to define an expression that is indirectly defined from - the tokens matched in a previous expression, that is, it looks - for a 'repeat' of a previous expression. For example:: - first = Word(nums) - second = matchPreviousExpr(first) - matchExpr = first + ":" + second - will match C{"1:1"}, but not C{"1:2"}. Because this matches by - expressions, will I{not} match the leading C{"1:1"} in C{"1:10"}; - the expressions are evaluated first, and then compared, so - C{"1"} is compared with C{"10"}. - Do I{not} use with packrat parsing enabled. - """ - rep = Forward() - e2 = expr.copy() - rep <<= e2 - def copyTokenToRepeater(s,l,t): - matchTokens = _flatten(t.asList()) - def mustMatchTheseTokens(s,l,t): - theseTokens = _flatten(t.asList()) - if theseTokens != matchTokens: - raise ParseException("",0,"") - rep.setParseAction( mustMatchTheseTokens, callDuringTry=True ) - expr.addParseAction(copyTokenToRepeater, callDuringTry=True) - rep.setName('(prev) ' + _ustr(expr)) - return rep - -def _escapeRegexRangeChars(s): - #~ escape these chars: ^-] - for c in r"\^-]": - s = s.replace(c,_bslash+c) - s = s.replace("\n",r"\n") - s = s.replace("\t",r"\t") - return _ustr(s) - -def oneOf( strs, caseless=False, useRegex=True ): - """ - Helper to quickly define a set of alternative Literals, and makes sure to do - longest-first testing when there is a conflict, regardless of the input order, - but returns a C{L{MatchFirst}} for best performance. 
- - Parameters: - - strs - a string of space-delimited literals, or a collection of string literals - - caseless - (default=C{False}) - treat all literals as caseless - - useRegex - (default=C{True}) - as an optimization, will generate a Regex - object; otherwise, will generate a C{MatchFirst} object (if C{caseless=True}, or - if creating a C{Regex} raises an exception) - - Example:: - comp_oper = oneOf("< = > <= >= !=") - var = Word(alphas) - number = Word(nums) - term = var | number - comparison_expr = term + comp_oper + term - print(comparison_expr.searchString("B = 12 AA=23 B<=AA AA>12")) - prints:: - [['B', '=', '12'], ['AA', '=', '23'], ['B', '<=', 'AA'], ['AA', '>', '12']] - """ - if caseless: - isequal = ( lambda a,b: a.upper() == b.upper() ) - masks = ( lambda a,b: b.upper().startswith(a.upper()) ) - parseElementClass = CaselessLiteral - else: - isequal = ( lambda a,b: a == b ) - masks = ( lambda a,b: b.startswith(a) ) - parseElementClass = Literal - - symbols = [] - if isinstance(strs,basestring): - symbols = strs.split() - elif isinstance(strs, Iterable): - symbols = list(strs) - else: - warnings.warn("Invalid argument to oneOf, expected string or iterable", - SyntaxWarning, stacklevel=2) - if not symbols: - return NoMatch() - - i = 0 - while i < len(symbols)-1: - cur = symbols[i] - for j,other in enumerate(symbols[i+1:]): - if ( isequal(other, cur) ): - del symbols[i+j+1] - break - elif ( masks(cur, other) ): - del symbols[i+j+1] - symbols.insert(i,other) - cur = other - break - else: - i += 1 - - if not caseless and useRegex: - #~ print (strs,"->", "|".join( [ _escapeRegexChars(sym) for sym in symbols] )) - try: - if len(symbols)==len("".join(symbols)): - return Regex( "[%s]" % "".join(_escapeRegexRangeChars(sym) for sym in symbols) ).setName(' | '.join(symbols)) - else: - return Regex( "|".join(re.escape(sym) for sym in symbols) ).setName(' | '.join(symbols)) - except Exception: - warnings.warn("Exception creating Regex for oneOf, building MatchFirst", - SyntaxWarning, stacklevel=2) - - - # last resort, just use MatchFirst - return MatchFirst(parseElementClass(sym) for sym in symbols).setName(' | '.join(symbols)) - -def dictOf( key, value ): - """ - Helper to easily and clearly define a dictionary by specifying the respective patterns - for the key and value. Takes care of defining the C{L{Dict}}, C{L{ZeroOrMore}}, and C{L{Group}} tokens - in the proper order. The key pattern can include delimiting markers or punctuation, - as long as they are suppressed, thereby leaving the significant key text. The value - pattern can include named results, so that the C{Dict} results can include named token - fields. 
- - Example:: - text = "shape: SQUARE posn: upper left color: light blue texture: burlap" - attr_expr = (label + Suppress(':') + OneOrMore(data_word, stopOn=label).setParseAction(' '.join)) - print(OneOrMore(attr_expr).parseString(text).dump()) - - attr_label = label - attr_value = Suppress(':') + OneOrMore(data_word, stopOn=label).setParseAction(' '.join) - - # similar to Dict, but simpler call format - result = dictOf(attr_label, attr_value).parseString(text) - print(result.dump()) - print(result['shape']) - print(result.shape) # object attribute access works too - print(result.asDict()) - prints:: - [['shape', 'SQUARE'], ['posn', 'upper left'], ['color', 'light blue'], ['texture', 'burlap']] - - color: light blue - - posn: upper left - - shape: SQUARE - - texture: burlap - SQUARE - SQUARE - {'color': 'light blue', 'shape': 'SQUARE', 'posn': 'upper left', 'texture': 'burlap'} - """ - return Dict( ZeroOrMore( Group ( key + value ) ) ) - -def originalTextFor(expr, asString=True): - """ - Helper to return the original, untokenized text for a given expression. Useful to - restore the parsed fields of an HTML start tag into the raw tag text itself, or to - revert separate tokens with intervening whitespace back to the original matching - input text. By default, returns astring containing the original parsed text. - - If the optional C{asString} argument is passed as C{False}, then the return value is a - C{L{ParseResults}} containing any results names that were originally matched, and a - single token containing the original matched text from the input string. So if - the expression passed to C{L{originalTextFor}} contains expressions with defined - results names, you must set C{asString} to C{False} if you want to preserve those - results name values. - - Example:: - src = "this is test bold text normal text " - for tag in ("b","i"): - opener,closer = makeHTMLTags(tag) - patt = originalTextFor(opener + SkipTo(closer) + closer) - print(patt.searchString(src)[0]) - prints:: - [' bold text '] - ['text'] - """ - locMarker = Empty().setParseAction(lambda s,loc,t: loc) - endlocMarker = locMarker.copy() - endlocMarker.callPreparse = False - matchExpr = locMarker("_original_start") + expr + endlocMarker("_original_end") - if asString: - extractText = lambda s,l,t: s[t._original_start:t._original_end] - else: - def extractText(s,l,t): - t[:] = [s[t.pop('_original_start'):t.pop('_original_end')]] - matchExpr.setParseAction(extractText) - matchExpr.ignoreExprs = expr.ignoreExprs - return matchExpr - -def ungroup(expr): - """ - Helper to undo pyparsing's default grouping of And expressions, even - if all but one are non-empty. - """ - return TokenConverter(expr).setParseAction(lambda t:t[0]) - -def locatedExpr(expr): - """ - Helper to decorate a returned token with its starting and ending locations in the input string. 
- This helper adds the following results names: - - locn_start = location where matched expression begins - - locn_end = location where matched expression ends - - value = the actual parsed results - - Be careful if the input text contains C{} characters, you may want to call - C{L{ParserElement.parseWithTabs}} - - Example:: - wd = Word(alphas) - for match in locatedExpr(wd).searchString("ljsdf123lksdjjf123lkkjj1222"): - print(match) - prints:: - [[0, 'ljsdf', 5]] - [[8, 'lksdjjf', 15]] - [[18, 'lkkjj', 23]] - """ - locator = Empty().setParseAction(lambda s,l,t: l) - return Group(locator("locn_start") + expr("value") + locator.copy().leaveWhitespace()("locn_end")) - - -# convenience constants for positional expressions -empty = Empty().setName("empty") -lineStart = LineStart().setName("lineStart") -lineEnd = LineEnd().setName("lineEnd") -stringStart = StringStart().setName("stringStart") -stringEnd = StringEnd().setName("stringEnd") - -_escapedPunc = Word( _bslash, r"\[]-*.$+^?()~ ", exact=2 ).setParseAction(lambda s,l,t:t[0][1]) -_escapedHexChar = Regex(r"\\0?[xX][0-9a-fA-F]+").setParseAction(lambda s,l,t:unichr(int(t[0].lstrip(r'\0x'),16))) -_escapedOctChar = Regex(r"\\0[0-7]+").setParseAction(lambda s,l,t:unichr(int(t[0][1:],8))) -_singleChar = _escapedPunc | _escapedHexChar | _escapedOctChar | CharsNotIn(r'\]', exact=1) -_charRange = Group(_singleChar + Suppress("-") + _singleChar) -_reBracketExpr = Literal("[") + Optional("^").setResultsName("negate") + Group( OneOrMore( _charRange | _singleChar ) ).setResultsName("body") + "]" - -def srange(s): - r""" - Helper to easily define string ranges for use in Word construction. Borrows - syntax from regexp '[]' string range definitions:: - srange("[0-9]") -> "0123456789" - srange("[a-z]") -> "abcdefghijklmnopqrstuvwxyz" - srange("[a-z$_]") -> "abcdefghijklmnopqrstuvwxyz$_" - The input string must be enclosed in []'s, and the returned string is the expanded - character set joined into a single string. - The values enclosed in the []'s may be: - - a single character - - an escaped character with a leading backslash (such as C{\-} or C{\]}) - - an escaped hex character with a leading C{'\x'} (C{\x21}, which is a C{'!'} character) - (C{\0x##} is also supported for backwards compatibility) - - an escaped octal character with a leading C{'\0'} (C{\041}, which is a C{'!'} character) - - a range of any of the above, separated by a dash (C{'a-z'}, etc.) - - any combination of the above (C{'aeiouy'}, C{'a-zA-Z0-9_$'}, etc.) - """ - _expanded = lambda p: p if not isinstance(p,ParseResults) else ''.join(unichr(c) for c in range(ord(p[0]),ord(p[1])+1)) - try: - return "".join(_expanded(part) for part in _reBracketExpr.parseString(s).body) - except Exception: - return "" - -def matchOnlyAtCol(n): - """ - Helper method for defining parse actions that require matching at a specific - column in the input text. - """ - def verifyCol(strg,locn,toks): - if col(locn,strg) != n: - raise ParseException(strg,locn,"matched token not at column %d" % n) - return verifyCol - -def replaceWith(replStr): - """ - Helper method for common parse actions that simply return a literal value. Especially - useful when used with C{L{transformString}()}. 
- - Example:: - num = Word(nums).setParseAction(lambda toks: int(toks[0])) - na = oneOf("N/A NA").setParseAction(replaceWith(math.nan)) - term = na | num - - OneOrMore(term).parseString("324 234 N/A 234") # -> [324, 234, nan, 234] - """ - return lambda s,l,t: [replStr] - -def removeQuotes(s,l,t): - """ - Helper parse action for removing quotation marks from parsed quoted strings. - - Example:: - # by default, quotation marks are included in parsed results - quotedString.parseString("'Now is the Winter of our Discontent'") # -> ["'Now is the Winter of our Discontent'"] - - # use removeQuotes to strip quotation marks from parsed results - quotedString.setParseAction(removeQuotes) - quotedString.parseString("'Now is the Winter of our Discontent'") # -> ["Now is the Winter of our Discontent"] - """ - return t[0][1:-1] - -def tokenMap(func, *args): - """ - Helper to define a parse action by mapping a function to all elements of a ParseResults list.If any additional - args are passed, they are forwarded to the given function as additional arguments after - the token, as in C{hex_integer = Word(hexnums).setParseAction(tokenMap(int, 16))}, which will convert the - parsed data to an integer using base 16. - - Example (compare the last to example in L{ParserElement.transformString}:: - hex_ints = OneOrMore(Word(hexnums)).setParseAction(tokenMap(int, 16)) - hex_ints.runTests(''' - 00 11 22 aa FF 0a 0d 1a - ''') - - upperword = Word(alphas).setParseAction(tokenMap(str.upper)) - OneOrMore(upperword).runTests(''' - my kingdom for a horse - ''') - - wd = Word(alphas).setParseAction(tokenMap(str.title)) - OneOrMore(wd).setParseAction(' '.join).runTests(''' - now is the winter of our discontent made glorious summer by this sun of york - ''') - prints:: - 00 11 22 aa FF 0a 0d 1a - [0, 17, 34, 170, 255, 10, 13, 26] - - my kingdom for a horse - ['MY', 'KINGDOM', 'FOR', 'A', 'HORSE'] - - now is the winter of our discontent made glorious summer by this sun of york - ['Now Is The Winter Of Our Discontent Made Glorious Summer By This Sun Of York'] - """ - def pa(s,l,t): - return [func(tokn, *args) for tokn in t] - - try: - func_name = getattr(func, '__name__', - getattr(func, '__class__').__name__) - except Exception: - func_name = str(func) - pa.__name__ = func_name - - return pa - -upcaseTokens = tokenMap(lambda t: _ustr(t).upper()) -"""(Deprecated) Helper parse action to convert tokens to upper case. Deprecated in favor of L{pyparsing_common.upcaseTokens}""" - -downcaseTokens = tokenMap(lambda t: _ustr(t).lower()) -"""(Deprecated) Helper parse action to convert tokens to lower case. 
Deprecated in favor of L{pyparsing_common.downcaseTokens}""" - -def _makeTags(tagStr, xml): - """Internal helper to construct opening and closing tag expressions, given a tag name""" - if isinstance(tagStr,basestring): - resname = tagStr - tagStr = Keyword(tagStr, caseless=not xml) - else: - resname = tagStr.name - - tagAttrName = Word(alphas,alphanums+"_-:") - if (xml): - tagAttrValue = dblQuotedString.copy().setParseAction( removeQuotes ) - openTag = Suppress("<") + tagStr("tag") + \ - Dict(ZeroOrMore(Group( tagAttrName + Suppress("=") + tagAttrValue ))) + \ - Optional("/",default=[False]).setResultsName("empty").setParseAction(lambda s,l,t:t[0]=='/') + Suppress(">") - else: - printablesLessRAbrack = "".join(c for c in printables if c not in ">") - tagAttrValue = quotedString.copy().setParseAction( removeQuotes ) | Word(printablesLessRAbrack) - openTag = Suppress("<") + tagStr("tag") + \ - Dict(ZeroOrMore(Group( tagAttrName.setParseAction(downcaseTokens) + \ - Optional( Suppress("=") + tagAttrValue ) ))) + \ - Optional("/",default=[False]).setResultsName("empty").setParseAction(lambda s,l,t:t[0]=='/') + Suppress(">") - closeTag = Combine(_L("") - - openTag = openTag.setResultsName("start"+"".join(resname.replace(":"," ").title().split())).setName("<%s>" % resname) - closeTag = closeTag.setResultsName("end"+"".join(resname.replace(":"," ").title().split())).setName("" % resname) - openTag.tag = resname - closeTag.tag = resname - return openTag, closeTag - -def makeHTMLTags(tagStr): - """ - Helper to construct opening and closing tag expressions for HTML, given a tag name. Matches - tags in either upper or lower case, attributes with namespaces and with quoted or unquoted values. - - Example:: - text = 'More info at the pyparsing wiki page' - # makeHTMLTags returns pyparsing expressions for the opening and closing tags as a 2-tuple - a,a_end = makeHTMLTags("A") - link_expr = a + SkipTo(a_end)("link_text") + a_end - - for link in link_expr.searchString(text): - # attributes in the tag (like "href" shown here) are also accessible as named results - print(link.link_text, '->', link.href) - prints:: - pyparsing -> http://pyparsing.wikispaces.com - """ - return _makeTags( tagStr, False ) - -def makeXMLTags(tagStr): - """ - Helper to construct opening and closing tag expressions for XML, given a tag name. Matches - tags only in the given upper/lower case. - - Example: similar to L{makeHTMLTags} - """ - return _makeTags( tagStr, True ) - -def withAttribute(*args,**attrDict): - """ - Helper to create a validating parse action to be used with start tags created - with C{L{makeXMLTags}} or C{L{makeHTMLTags}}. Use C{withAttribute} to qualify a starting tag - with a required attribute value, to avoid false matches on common tags such as - C{} or C{
    }. - - Call C{withAttribute} with a series of attribute names and values. Specify the list - of filter attributes names and values as: - - keyword arguments, as in C{(align="right")}, or - - as an explicit dict with C{**} operator, when an attribute name is also a Python - reserved word, as in C{**{"class":"Customer", "align":"right"}} - - a list of name-value tuples, as in ( ("ns1:class", "Customer"), ("ns2:align","right") ) - For attribute names with a namespace prefix, you must use the second form. Attribute - names are matched insensitive to upper/lower case. - - If just testing for C{class} (with or without a namespace), use C{L{withClass}}. - - To verify that the attribute exists, but without specifying a value, pass - C{withAttribute.ANY_VALUE} as the value. - - Example:: - html = ''' -
    - <div>
    - Some text
    - <div type="grid">1 4 0 1 0 </div>
    - <div type="graph">1,3 2,3 1,1 </div>
    - <div>this has no type</div>
    - </div>
    - - ''' - div,div_end = makeHTMLTags("div") - - # only match div tag having a type attribute with value "grid" - div_grid = div().setParseAction(withAttribute(type="grid")) - grid_expr = div_grid + SkipTo(div | div_end)("body") - for grid_header in grid_expr.searchString(html): - print(grid_header.body) - - # construct a match with any div tag having a type attribute, regardless of the value - div_any_type = div().setParseAction(withAttribute(type=withAttribute.ANY_VALUE)) - div_expr = div_any_type + SkipTo(div | div_end)("body") - for div_header in div_expr.searchString(html): - print(div_header.body) - prints:: - 1 4 0 1 0 - - 1 4 0 1 0 - 1,3 2,3 1,1 - """ - if args: - attrs = args[:] - else: - attrs = attrDict.items() - attrs = [(k,v) for k,v in attrs] - def pa(s,l,tokens): - for attrName,attrValue in attrs: - if attrName not in tokens: - raise ParseException(s,l,"no matching attribute " + attrName) - if attrValue != withAttribute.ANY_VALUE and tokens[attrName] != attrValue: - raise ParseException(s,l,"attribute '%s' has value '%s', must be '%s'" % - (attrName, tokens[attrName], attrValue)) - return pa -withAttribute.ANY_VALUE = object() - -def withClass(classname, namespace=''): - """ - Simplified version of C{L{withAttribute}} when matching on a div class - made - difficult because C{class} is a reserved word in Python. - - Example:: - html = ''' -
    - <div>
    - Some text
    - <div class="grid">1 4 0 1 0 </div>
    - <div class="graph">1,3 2,3 1,1 </div>
    - <div>this &lt;div&gt; has no class</div>
    - </div>
    - - ''' - div,div_end = makeHTMLTags("div") - div_grid = div().setParseAction(withClass("grid")) - - grid_expr = div_grid + SkipTo(div | div_end)("body") - for grid_header in grid_expr.searchString(html): - print(grid_header.body) - - div_any_type = div().setParseAction(withClass(withAttribute.ANY_VALUE)) - div_expr = div_any_type + SkipTo(div | div_end)("body") - for div_header in div_expr.searchString(html): - print(div_header.body) - prints:: - 1 4 0 1 0 - - 1 4 0 1 0 - 1,3 2,3 1,1 - """ - classattr = "%s:class" % namespace if namespace else "class" - return withAttribute(**{classattr : classname}) - -opAssoc = _Constants() -opAssoc.LEFT = object() -opAssoc.RIGHT = object() - -def infixNotation( baseExpr, opList, lpar=Suppress('('), rpar=Suppress(')') ): - """ - Helper method for constructing grammars of expressions made up of - operators working in a precedence hierarchy. Operators may be unary or - binary, left- or right-associative. Parse actions can also be attached - to operator expressions. The generated parser will also recognize the use - of parentheses to override operator precedences (see example below). - - Note: if you define a deep operator list, you may see performance issues - when using infixNotation. See L{ParserElement.enablePackrat} for a - mechanism to potentially improve your parser performance. - - Parameters: - - baseExpr - expression representing the most basic element for the nested - - opList - list of tuples, one for each operator precedence level in the - expression grammar; each tuple is of the form - (opExpr, numTerms, rightLeftAssoc, parseAction), where: - - opExpr is the pyparsing expression for the operator; - may also be a string, which will be converted to a Literal; - if numTerms is 3, opExpr is a tuple of two expressions, for the - two operators separating the 3 terms - - numTerms is the number of terms for this operator (must - be 1, 2, or 3) - - rightLeftAssoc is the indicator whether the operator is - right or left associative, using the pyparsing-defined - constants C{opAssoc.RIGHT} and C{opAssoc.LEFT}. 
- - parseAction is the parse action to be associated with - expressions matching this operator expression (the - parse action tuple member may be omitted); if the parse action - is passed a tuple or list of functions, this is equivalent to - calling C{setParseAction(*fn)} (L{ParserElement.setParseAction}) - - lpar - expression for matching left-parentheses (default=C{Suppress('(')}) - - rpar - expression for matching right-parentheses (default=C{Suppress(')')}) - - Example:: - # simple example of four-function arithmetic with ints and variable names - integer = pyparsing_common.signed_integer - varname = pyparsing_common.identifier - - arith_expr = infixNotation(integer | varname, - [ - ('-', 1, opAssoc.RIGHT), - (oneOf('* /'), 2, opAssoc.LEFT), - (oneOf('+ -'), 2, opAssoc.LEFT), - ]) - - arith_expr.runTests(''' - 5+3*6 - (5+3)*6 - -2--11 - ''', fullDump=False) - prints:: - 5+3*6 - [[5, '+', [3, '*', 6]]] - - (5+3)*6 - [[[5, '+', 3], '*', 6]] - - -2--11 - [[['-', 2], '-', ['-', 11]]] - """ - ret = Forward() - lastExpr = baseExpr | ( lpar + ret + rpar ) - for i,operDef in enumerate(opList): - opExpr,arity,rightLeftAssoc,pa = (operDef + (None,))[:4] - termName = "%s term" % opExpr if arity < 3 else "%s%s term" % opExpr - if arity == 3: - if opExpr is None or len(opExpr) != 2: - raise ValueError("if numterms=3, opExpr must be a tuple or list of two expressions") - opExpr1, opExpr2 = opExpr - thisExpr = Forward().setName(termName) - if rightLeftAssoc == opAssoc.LEFT: - if arity == 1: - matchExpr = FollowedBy(lastExpr + opExpr) + Group( lastExpr + OneOrMore( opExpr ) ) - elif arity == 2: - if opExpr is not None: - matchExpr = FollowedBy(lastExpr + opExpr + lastExpr) + Group( lastExpr + OneOrMore( opExpr + lastExpr ) ) - else: - matchExpr = FollowedBy(lastExpr+lastExpr) + Group( lastExpr + OneOrMore(lastExpr) ) - elif arity == 3: - matchExpr = FollowedBy(lastExpr + opExpr1 + lastExpr + opExpr2 + lastExpr) + \ - Group( lastExpr + opExpr1 + lastExpr + opExpr2 + lastExpr ) - else: - raise ValueError("operator must be unary (1), binary (2), or ternary (3)") - elif rightLeftAssoc == opAssoc.RIGHT: - if arity == 1: - # try to avoid LR with this extra test - if not isinstance(opExpr, Optional): - opExpr = Optional(opExpr) - matchExpr = FollowedBy(opExpr.expr + thisExpr) + Group( opExpr + thisExpr ) - elif arity == 2: - if opExpr is not None: - matchExpr = FollowedBy(lastExpr + opExpr + thisExpr) + Group( lastExpr + OneOrMore( opExpr + thisExpr ) ) - else: - matchExpr = FollowedBy(lastExpr + thisExpr) + Group( lastExpr + OneOrMore( thisExpr ) ) - elif arity == 3: - matchExpr = FollowedBy(lastExpr + opExpr1 + thisExpr + opExpr2 + thisExpr) + \ - Group( lastExpr + opExpr1 + thisExpr + opExpr2 + thisExpr ) - else: - raise ValueError("operator must be unary (1), binary (2), or ternary (3)") - else: - raise ValueError("operator must indicate right or left associativity") - if pa: - if isinstance(pa, (tuple, list)): - matchExpr.setParseAction(*pa) - else: - matchExpr.setParseAction(pa) - thisExpr <<= ( matchExpr.setName(termName) | lastExpr ) - lastExpr = thisExpr - ret <<= lastExpr - return ret - -operatorPrecedence = infixNotation -"""(Deprecated) Former name of C{L{infixNotation}}, will be dropped in a future release.""" - -dblQuotedString = Combine(Regex(r'"(?:[^"\n\r\\]|(?:"")|(?:\\(?:[^x]|x[0-9a-fA-F]+)))*')+'"').setName("string enclosed in double quotes") -sglQuotedString = Combine(Regex(r"'(?:[^'\n\r\\]|(?:'')|(?:\\(?:[^x]|x[0-9a-fA-F]+)))*")+"'").setName("string enclosed in single quotes") 
-quotedString = Combine(Regex(r'"(?:[^"\n\r\\]|(?:"")|(?:\\(?:[^x]|x[0-9a-fA-F]+)))*')+'"'| - Regex(r"'(?:[^'\n\r\\]|(?:'')|(?:\\(?:[^x]|x[0-9a-fA-F]+)))*")+"'").setName("quotedString using single or double quotes") -unicodeString = Combine(_L('u') + quotedString.copy()).setName("unicode string literal") - -def nestedExpr(opener="(", closer=")", content=None, ignoreExpr=quotedString.copy()): - """ - Helper method for defining nested lists enclosed in opening and closing - delimiters ("(" and ")" are the default). - - Parameters: - - opener - opening character for a nested list (default=C{"("}); can also be a pyparsing expression - - closer - closing character for a nested list (default=C{")"}); can also be a pyparsing expression - - content - expression for items within the nested lists (default=C{None}) - - ignoreExpr - expression for ignoring opening and closing delimiters (default=C{quotedString}) - - If an expression is not provided for the content argument, the nested - expression will capture all whitespace-delimited content between delimiters - as a list of separate values. - - Use the C{ignoreExpr} argument to define expressions that may contain - opening or closing characters that should not be treated as opening - or closing characters for nesting, such as quotedString or a comment - expression. Specify multiple expressions using an C{L{Or}} or C{L{MatchFirst}}. - The default is L{quotedString}, but if no expressions are to be ignored, - then pass C{None} for this argument. - - Example:: - data_type = oneOf("void int short long char float double") - decl_data_type = Combine(data_type + Optional(Word('*'))) - ident = Word(alphas+'_', alphanums+'_') - number = pyparsing_common.number - arg = Group(decl_data_type + ident) - LPAR,RPAR = map(Suppress, "()") - - code_body = nestedExpr('{', '}', ignoreExpr=(quotedString | cStyleComment)) - - c_function = (decl_data_type("type") - + ident("name") - + LPAR + Optional(delimitedList(arg), [])("args") + RPAR - + code_body("body")) - c_function.ignore(cStyleComment) - - source_code = ''' - int is_odd(int x) { - return (x%2); - } - - int dec_to_hex(char hchar) { - if (hchar >= '0' && hchar <= '9') { - return (ord(hchar)-ord('0')); - } else { - return (10+ord(hchar)-ord('A')); - } - } - ''' - for func in c_function.searchString(source_code): - print("%(name)s (%(type)s) args: %(args)s" % func) - - prints:: - is_odd (int) args: [['int', 'x']] - dec_to_hex (int) args: [['char', 'hchar']] - """ - if opener == closer: - raise ValueError("opening and closing strings cannot be the same") - if content is None: - if isinstance(opener,basestring) and isinstance(closer,basestring): - if len(opener) == 1 and len(closer)==1: - if ignoreExpr is not None: - content = (Combine(OneOrMore(~ignoreExpr + - CharsNotIn(opener+closer+ParserElement.DEFAULT_WHITE_CHARS,exact=1)) - ).setParseAction(lambda t:t[0].strip())) - else: - content = (empty.copy()+CharsNotIn(opener+closer+ParserElement.DEFAULT_WHITE_CHARS - ).setParseAction(lambda t:t[0].strip())) - else: - if ignoreExpr is not None: - content = (Combine(OneOrMore(~ignoreExpr + - ~Literal(opener) + ~Literal(closer) + - CharsNotIn(ParserElement.DEFAULT_WHITE_CHARS,exact=1)) - ).setParseAction(lambda t:t[0].strip())) - else: - content = (Combine(OneOrMore(~Literal(opener) + ~Literal(closer) + - CharsNotIn(ParserElement.DEFAULT_WHITE_CHARS,exact=1)) - ).setParseAction(lambda t:t[0].strip())) - else: - raise ValueError("opening and closing arguments must be strings if no content expression is given") - ret = 
Forward() - if ignoreExpr is not None: - ret <<= Group( Suppress(opener) + ZeroOrMore( ignoreExpr | ret | content ) + Suppress(closer) ) - else: - ret <<= Group( Suppress(opener) + ZeroOrMore( ret | content ) + Suppress(closer) ) - ret.setName('nested %s%s expression' % (opener,closer)) - return ret - -def indentedBlock(blockStatementExpr, indentStack, indent=True): - """ - Helper method for defining space-delimited indentation blocks, such as - those used to define block statements in Python source code. - - Parameters: - - blockStatementExpr - expression defining syntax of statement that - is repeated within the indented block - - indentStack - list created by caller to manage indentation stack - (multiple statementWithIndentedBlock expressions within a single grammar - should share a common indentStack) - - indent - boolean indicating whether block must be indented beyond the - the current level; set to False for block of left-most statements - (default=C{True}) - - A valid block must contain at least one C{blockStatement}. - - Example:: - data = ''' - def A(z): - A1 - B = 100 - G = A2 - A2 - A3 - B - def BB(a,b,c): - BB1 - def BBA(): - bba1 - bba2 - bba3 - C - D - def spam(x,y): - def eggs(z): - pass - ''' - - - indentStack = [1] - stmt = Forward() - - identifier = Word(alphas, alphanums) - funcDecl = ("def" + identifier + Group( "(" + Optional( delimitedList(identifier) ) + ")" ) + ":") - func_body = indentedBlock(stmt, indentStack) - funcDef = Group( funcDecl + func_body ) - - rvalue = Forward() - funcCall = Group(identifier + "(" + Optional(delimitedList(rvalue)) + ")") - rvalue << (funcCall | identifier | Word(nums)) - assignment = Group(identifier + "=" + rvalue) - stmt << ( funcDef | assignment | identifier ) - - module_body = OneOrMore(stmt) - - parseTree = module_body.parseString(data) - parseTree.pprint() - prints:: - [['def', - 'A', - ['(', 'z', ')'], - ':', - [['A1'], [['B', '=', '100']], [['G', '=', 'A2']], ['A2'], ['A3']]], - 'B', - ['def', - 'BB', - ['(', 'a', 'b', 'c', ')'], - ':', - [['BB1'], [['def', 'BBA', ['(', ')'], ':', [['bba1'], ['bba2'], ['bba3']]]]]], - 'C', - 'D', - ['def', - 'spam', - ['(', 'x', 'y', ')'], - ':', - [[['def', 'eggs', ['(', 'z', ')'], ':', [['pass']]]]]]] - """ - def checkPeerIndent(s,l,t): - if l >= len(s): return - curCol = col(l,s) - if curCol != indentStack[-1]: - if curCol > indentStack[-1]: - raise ParseFatalException(s,l,"illegal nesting") - raise ParseException(s,l,"not a peer entry") - - def checkSubIndent(s,l,t): - curCol = col(l,s) - if curCol > indentStack[-1]: - indentStack.append( curCol ) - else: - raise ParseException(s,l,"not a subentry") - - def checkUnindent(s,l,t): - if l >= len(s): return - curCol = col(l,s) - if not(indentStack and curCol < indentStack[-1] and curCol <= indentStack[-2]): - raise ParseException(s,l,"not an unindent") - indentStack.pop() - - NL = OneOrMore(LineEnd().setWhitespaceChars("\t ").suppress()) - INDENT = (Empty() + Empty().setParseAction(checkSubIndent)).setName('INDENT') - PEER = Empty().setParseAction(checkPeerIndent).setName('') - UNDENT = Empty().setParseAction(checkUnindent).setName('UNINDENT') - if indent: - smExpr = Group( Optional(NL) + - #~ FollowedBy(blockStatementExpr) + - INDENT + (OneOrMore( PEER + Group(blockStatementExpr) + Optional(NL) )) + UNDENT) - else: - smExpr = Group( Optional(NL) + - (OneOrMore( PEER + Group(blockStatementExpr) + Optional(NL) )) ) - blockStatementExpr.ignore(_bslash + LineEnd()) - return smExpr.setName('indented block') - -alphas8bit = 
srange(r"[\0xc0-\0xd6\0xd8-\0xf6\0xf8-\0xff]") -punc8bit = srange(r"[\0xa1-\0xbf\0xd7\0xf7]") - -anyOpenTag,anyCloseTag = makeHTMLTags(Word(alphas,alphanums+"_:").setName('any tag')) -_htmlEntityMap = dict(zip("gt lt amp nbsp quot apos".split(),'><& "\'')) -commonHTMLEntity = Regex('&(?P' + '|'.join(_htmlEntityMap.keys()) +");").setName("common HTML entity") -def replaceHTMLEntity(t): - """Helper parser action to replace common HTML entities with their special characters""" - return _htmlEntityMap.get(t.entity) - -# it's easy to get these comment structures wrong - they're very common, so may as well make them available -cStyleComment = Combine(Regex(r"/\*(?:[^*]|\*(?!/))*") + '*/').setName("C style comment") -"Comment of the form C{/* ... */}" - -htmlComment = Regex(r"").setName("HTML comment") -"Comment of the form C{}" - -restOfLine = Regex(r".*").leaveWhitespace().setName("rest of line") -dblSlashComment = Regex(r"//(?:\\\n|[^\n])*").setName("// comment") -"Comment of the form C{// ... (to end of line)}" - -cppStyleComment = Combine(Regex(r"/\*(?:[^*]|\*(?!/))*") + '*/'| dblSlashComment).setName("C++ style comment") -"Comment of either form C{L{cStyleComment}} or C{L{dblSlashComment}}" - -javaStyleComment = cppStyleComment -"Same as C{L{cppStyleComment}}" - -pythonStyleComment = Regex(r"#.*").setName("Python style comment") -"Comment of the form C{# ... (to end of line)}" - -_commasepitem = Combine(OneOrMore(Word(printables, excludeChars=',') + - Optional( Word(" \t") + - ~Literal(",") + ~LineEnd() ) ) ).streamline().setName("commaItem") -commaSeparatedList = delimitedList( Optional( quotedString.copy() | _commasepitem, default="") ).setName("commaSeparatedList") -"""(Deprecated) Predefined expression of 1 or more printable words or quoted strings, separated by commas. 
- This expression is deprecated in favor of L{pyparsing_common.comma_separated_list}.""" - -# some other useful expressions - using lower-case class name since we are really using this as a namespace -class pyparsing_common: - """ - Here are some common low-level expressions that may be useful in jump-starting parser development: - - numeric forms (L{integers}, L{reals}, L{scientific notation}) - - common L{programming identifiers} - - network addresses (L{MAC}, L{IPv4}, L{IPv6}) - - ISO8601 L{dates} and L{datetime} - - L{UUID} - - L{comma-separated list} - Parse actions: - - C{L{convertToInteger}} - - C{L{convertToFloat}} - - C{L{convertToDate}} - - C{L{convertToDatetime}} - - C{L{stripHTMLTags}} - - C{L{upcaseTokens}} - - C{L{downcaseTokens}} - - Example:: - pyparsing_common.number.runTests(''' - # any int or real number, returned as the appropriate type - 100 - -100 - +100 - 3.14159 - 6.02e23 - 1e-12 - ''') - - pyparsing_common.fnumber.runTests(''' - # any int or real number, returned as float - 100 - -100 - +100 - 3.14159 - 6.02e23 - 1e-12 - ''') - - pyparsing_common.hex_integer.runTests(''' - # hex numbers - 100 - FF - ''') - - pyparsing_common.fraction.runTests(''' - # fractions - 1/2 - -3/4 - ''') - - pyparsing_common.mixed_integer.runTests(''' - # mixed fractions - 1 - 1/2 - -3/4 - 1-3/4 - ''') - - import uuid - pyparsing_common.uuid.setParseAction(tokenMap(uuid.UUID)) - pyparsing_common.uuid.runTests(''' - # uuid - 12345678-1234-5678-1234-567812345678 - ''') - prints:: - # any int or real number, returned as the appropriate type - 100 - [100] - - -100 - [-100] - - +100 - [100] - - 3.14159 - [3.14159] - - 6.02e23 - [6.02e+23] - - 1e-12 - [1e-12] - - # any int or real number, returned as float - 100 - [100.0] - - -100 - [-100.0] - - +100 - [100.0] - - 3.14159 - [3.14159] - - 6.02e23 - [6.02e+23] - - 1e-12 - [1e-12] - - # hex numbers - 100 - [256] - - FF - [255] - - # fractions - 1/2 - [0.5] - - -3/4 - [-0.75] - - # mixed fractions - 1 - [1] - - 1/2 - [0.5] - - -3/4 - [-0.75] - - 1-3/4 - [1.75] - - # uuid - 12345678-1234-5678-1234-567812345678 - [UUID('12345678-1234-5678-1234-567812345678')] - """ - - convertToInteger = tokenMap(int) - """ - Parse action for converting parsed integers to Python int - """ - - convertToFloat = tokenMap(float) - """ - Parse action for converting parsed numbers to Python float - """ - - integer = Word(nums).setName("integer").setParseAction(convertToInteger) - """expression that parses an unsigned integer, returns an int""" - - hex_integer = Word(hexnums).setName("hex integer").setParseAction(tokenMap(int,16)) - """expression that parses a hexadecimal integer, returns an int""" - - signed_integer = Regex(r'[+-]?\d+').setName("signed integer").setParseAction(convertToInteger) - """expression that parses an integer with optional leading sign, returns an int""" - - fraction = (signed_integer().setParseAction(convertToFloat) + '/' + signed_integer().setParseAction(convertToFloat)).setName("fraction") - """fractional expression of an integer divided by an integer, returns a float""" - fraction.addParseAction(lambda t: t[0]/t[-1]) - - mixed_integer = (fraction | signed_integer + Optional(Optional('-').suppress() + fraction)).setName("fraction or mixed integer-fraction") - """mixed integer of the form 'integer - fraction', with optional leading integer, returns float""" - mixed_integer.addParseAction(sum) - - real = Regex(r'[+-]?\d+\.\d*').setName("real number").setParseAction(convertToFloat) - """expression that parses a floating point number and returns a 
float""" - - sci_real = Regex(r'[+-]?\d+([eE][+-]?\d+|\.\d*([eE][+-]?\d+)?)').setName("real number with scientific notation").setParseAction(convertToFloat) - """expression that parses a floating point number with optional scientific notation and returns a float""" - - # streamlining this expression makes the docs nicer-looking - number = (sci_real | real | signed_integer).streamline() - """any numeric expression, returns the corresponding Python type""" - - fnumber = Regex(r'[+-]?\d+\.?\d*([eE][+-]?\d+)?').setName("fnumber").setParseAction(convertToFloat) - """any int or real number, returned as float""" - - identifier = Word(alphas+'_', alphanums+'_').setName("identifier") - """typical code identifier (leading alpha or '_', followed by 0 or more alphas, nums, or '_')""" - - ipv4_address = Regex(r'(25[0-5]|2[0-4][0-9]|1?[0-9]{1,2})(\.(25[0-5]|2[0-4][0-9]|1?[0-9]{1,2})){3}').setName("IPv4 address") - "IPv4 address (C{0.0.0.0 - 255.255.255.255})" - - _ipv6_part = Regex(r'[0-9a-fA-F]{1,4}').setName("hex_integer") - _full_ipv6_address = (_ipv6_part + (':' + _ipv6_part)*7).setName("full IPv6 address") - _short_ipv6_address = (Optional(_ipv6_part + (':' + _ipv6_part)*(0,6)) + "::" + Optional(_ipv6_part + (':' + _ipv6_part)*(0,6))).setName("short IPv6 address") - _short_ipv6_address.addCondition(lambda t: sum(1 for tt in t if pyparsing_common._ipv6_part.matches(tt)) < 8) - _mixed_ipv6_address = ("::ffff:" + ipv4_address).setName("mixed IPv6 address") - ipv6_address = Combine((_full_ipv6_address | _mixed_ipv6_address | _short_ipv6_address).setName("IPv6 address")).setName("IPv6 address") - "IPv6 address (long, short, or mixed form)" - - mac_address = Regex(r'[0-9a-fA-F]{2}([:.-])[0-9a-fA-F]{2}(?:\1[0-9a-fA-F]{2}){4}').setName("MAC address") - "MAC address xx:xx:xx:xx:xx (may also have '-' or '.' 
delimiters)" - - @staticmethod - def convertToDate(fmt="%Y-%m-%d"): - """ - Helper to create a parse action for converting parsed date string to Python datetime.date - - Params - - - fmt - format to be passed to datetime.strptime (default=C{"%Y-%m-%d"}) - - Example:: - date_expr = pyparsing_common.iso8601_date.copy() - date_expr.setParseAction(pyparsing_common.convertToDate()) - print(date_expr.parseString("1999-12-31")) - prints:: - [datetime.date(1999, 12, 31)] - """ - def cvt_fn(s,l,t): - try: - return datetime.strptime(t[0], fmt).date() - except ValueError as ve: - raise ParseException(s, l, str(ve)) - return cvt_fn - - @staticmethod - def convertToDatetime(fmt="%Y-%m-%dT%H:%M:%S.%f"): - """ - Helper to create a parse action for converting parsed datetime string to Python datetime.datetime - - Params - - - fmt - format to be passed to datetime.strptime (default=C{"%Y-%m-%dT%H:%M:%S.%f"}) - - Example:: - dt_expr = pyparsing_common.iso8601_datetime.copy() - dt_expr.setParseAction(pyparsing_common.convertToDatetime()) - print(dt_expr.parseString("1999-12-31T23:59:59.999")) - prints:: - [datetime.datetime(1999, 12, 31, 23, 59, 59, 999000)] - """ - def cvt_fn(s,l,t): - try: - return datetime.strptime(t[0], fmt) - except ValueError as ve: - raise ParseException(s, l, str(ve)) - return cvt_fn - - iso8601_date = Regex(r'(?P\d{4})(?:-(?P\d\d)(?:-(?P\d\d))?)?').setName("ISO8601 date") - "ISO8601 date (C{yyyy-mm-dd})" - - iso8601_datetime = Regex(r'(?P\d{4})-(?P\d\d)-(?P\d\d)[T ](?P\d\d):(?P\d\d)(:(?P\d\d(\.\d*)?)?)?(?PZ|[+-]\d\d:?\d\d)?').setName("ISO8601 datetime") - "ISO8601 datetime (C{yyyy-mm-ddThh:mm:ss.s(Z|+-00:00)}) - trailing seconds, milliseconds, and timezone optional; accepts separating C{'T'} or C{' '}" - - uuid = Regex(r'[0-9a-fA-F]{8}(-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12}').setName("UUID") - "UUID (C{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx})" - - _html_stripper = anyOpenTag.suppress() | anyCloseTag.suppress() - @staticmethod - def stripHTMLTags(s, l, tokens): - """ - Parse action to remove HTML tags from web page HTML source - - Example:: - # strip HTML links from normal text - text = 'More info at the
    pyparsing wiki page' - td,td_end = makeHTMLTags("TD") - table_text = td + SkipTo(td_end).setParseAction(pyparsing_common.stripHTMLTags)("body") + td_end - - print(table_text.parseString(text).body) # -> 'More info at the pyparsing wiki page' - """ - return pyparsing_common._html_stripper.transformString(tokens[0]) - - _commasepitem = Combine(OneOrMore(~Literal(",") + ~LineEnd() + Word(printables, excludeChars=',') - + Optional( White(" \t") ) ) ).streamline().setName("commaItem") - comma_separated_list = delimitedList( Optional( quotedString.copy() | _commasepitem, default="") ).setName("comma separated list") - """Predefined expression of 1 or more printable words or quoted strings, separated by commas.""" - - upcaseTokens = staticmethod(tokenMap(lambda t: _ustr(t).upper())) - """Parse action to convert tokens to upper case.""" - - downcaseTokens = staticmethod(tokenMap(lambda t: _ustr(t).lower())) - """Parse action to convert tokens to lower case.""" - - -if __name__ == "__main__": - - selectToken = CaselessLiteral("select") - fromToken = CaselessLiteral("from") - - ident = Word(alphas, alphanums + "_$") - - columnName = delimitedList(ident, ".", combine=True).setParseAction(upcaseTokens) - columnNameList = Group(delimitedList(columnName)).setName("columns") - columnSpec = ('*' | columnNameList) - - tableName = delimitedList(ident, ".", combine=True).setParseAction(upcaseTokens) - tableNameList = Group(delimitedList(tableName)).setName("tables") - - simpleSQL = selectToken("command") + columnSpec("columns") + fromToken + tableNameList("tables") - - # demo runTests method, including embedded comments in test string - simpleSQL.runTests(""" - # '*' as column list and dotted table name - select * from SYS.XYZZY - - # caseless match on "SELECT", and casts back to "select" - SELECT * from XYZZY, ABC - - # list of column names, and mixed case SELECT keyword - Select AA,BB,CC from Sys.dual - - # multiple tables - Select A, B, C from Sys.dual, Table2 - - # invalid SELECT keyword - should fail - Xelect A, B, C from Sys.dual - - # incomplete command - should fail - Select - - # invalid column name - should fail - Select ^^^ frox Sys.dual - - """) - - pyparsing_common.number.runTests(""" - 100 - -100 - +100 - 3.14159 - 6.02e23 - 1e-12 - """) - - # any int or real number, returned as float - pyparsing_common.fnumber.runTests(""" - 100 - -100 - +100 - 3.14159 - 6.02e23 - 1e-12 - """) - - pyparsing_common.hex_integer.runTests(""" - 100 - FF - """) - - import uuid - pyparsing_common.uuid.setParseAction(tokenMap(uuid.UUID)) - pyparsing_common.uuid.runTests(""" - 12345678-1234-5678-1234-567812345678 - """) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/shellingham/posix/ps.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/shellingham/posix/ps.py deleted file mode 100644 index 3bc39a74a56390c263e63bfead028f6bce4df3cb..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/shellingham/posix/ps.py +++ /dev/null @@ -1,51 +0,0 @@ -import errno -import subprocess -import sys - -from ._core import Process - - -class PsNotAvailable(EnvironmentError): - pass - - -def iter_process_parents(pid, max_depth=10): - """Try to look up the process tree via the output of `ps`.""" - try: - cmd = ["ps", "-ww", "-o", "pid=", "-o", "ppid=", "-o", "args="] - output = subprocess.check_output(cmd) - except OSError as e: # Python 2-compatible FileNotFoundError. 
- if e.errno != errno.ENOENT: - raise - raise PsNotAvailable("ps not found") - except subprocess.CalledProcessError as e: - # `ps` can return 1 if the process list is completely empty. - # (sarugaku/shellingham#15) - if not e.output.strip(): - return - raise - if not isinstance(output, str): - encoding = sys.getfilesystemencoding() or sys.getdefaultencoding() - output = output.decode(encoding) - - processes_mapping = {} - for line in output.split("\n"): - try: - _pid, ppid, args = line.strip().split(None, 2) - # XXX: This is not right, but we are really out of options. - # ps does not offer a sane way to decode the argument display, - # and this is "Good Enough" for obtaining shell names. Hopefully - # people don't name their shell with a space, or have something - # like "/usr/bin/xonsh is uber". (sarugaku/shellingham#14) - args = tuple(a.strip() for a in args.split(" ")) - except ValueError: - continue - processes_mapping[_pid] = Process(args=args, pid=_pid, ppid=ppid) - - for _ in range(max_depth): - try: - process = processes_mapping[pid] - except KeyError: - return - yield process - pid = process.ppid diff --git a/spaces/psistolar/pop-music-transformer/.ipynb_checkpoints/README-checkpoint.md b/spaces/psistolar/pop-music-transformer/.ipynb_checkpoints/README-checkpoint.md deleted file mode 100644 index 9c149603bf46cf64815159cf9ffececca630521b..0000000000000000000000000000000000000000 --- a/spaces/psistolar/pop-music-transformer/.ipynb_checkpoints/README-checkpoint.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Music Maker Transformer -emoji: 🎶 -colorFrom: pink -colorTo: gray -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: grey -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/pycui/RealChar/client/web/src/components/Common/IconButton.js b/spaces/pycui/RealChar/client/web/src/components/Common/IconButton.js deleted file mode 100644 index 72d62762f6229262f33a5b064878eb11be9ff3dc..0000000000000000000000000000000000000000 --- a/spaces/pycui/RealChar/client/web/src/components/Common/IconButton.js +++ /dev/null @@ -1,19 +0,0 @@ -/** - * src/components/Common/IconButton.jsx - * A general-purpose Icon Button component - * - * created by Lynchee on 7/19/23 - */ - -import React from 'react'; -import './styles.css'; - -const IconButton = ({ Icon, className, onClick, bgcolor="default"}) => { - return ( -
    - -
    - ); -}; - -export default IconButton; diff --git a/spaces/pyodide-demo/self-hosted/parso.js b/spaces/pyodide-demo/self-hosted/parso.js deleted file mode 100644 index 59c4b1e3fe8e238375170b116e074f7670c08f43..0000000000000000000000000000000000000000 --- a/spaces/pyodide-demo/self-hosted/parso.js +++ /dev/null @@ -1 +0,0 @@ -var Module=typeof globalThis.__pyodide_module!=="undefined"?globalThis.__pyodide_module:{};if(!Module.expectedDataFileDownloads){Module.expectedDataFileDownloads=0}Module.expectedDataFileDownloads++;(function(){var loadPackage=function(metadata){var PACKAGE_PATH="";if(typeof window==="object"){PACKAGE_PATH=window["encodeURIComponent"](window.location.pathname.toString().substring(0,window.location.pathname.toString().lastIndexOf("/"))+"/")}else if(typeof process==="undefined"&&typeof location!=="undefined"){PACKAGE_PATH=encodeURIComponent(location.pathname.toString().substring(0,location.pathname.toString().lastIndexOf("/"))+"/")}var PACKAGE_NAME="parso.data";var REMOTE_PACKAGE_BASE="parso.data";if(typeof Module["locateFilePackage"]==="function"&&!Module["locateFile"]){Module["locateFile"]=Module["locateFilePackage"];err("warning: you defined Module.locateFilePackage, that has been renamed to Module.locateFile (using your locateFilePackage for now)")}var REMOTE_PACKAGE_NAME=Module["locateFile"]?Module["locateFile"](REMOTE_PACKAGE_BASE,""):REMOTE_PACKAGE_BASE;var REMOTE_PACKAGE_SIZE=metadata["remote_package_size"];var PACKAGE_UUID=metadata["package_uuid"];function fetchRemotePackage(packageName,packageSize,callback,errback){if(typeof process==="object"){require("fs").readFile(packageName,(function(err,contents){if(err){errback(err)}else{callback(contents.buffer)}}));return}var xhr=new XMLHttpRequest;xhr.open("GET",packageName,true);xhr.responseType="arraybuffer";xhr.onprogress=function(event){var url=packageName;var size=packageSize;if(event.total)size=event.total;if(event.loaded){if(!xhr.addedTotal){xhr.addedTotal=true;if(!Module.dataFileDownloads)Module.dataFileDownloads={};Module.dataFileDownloads[url]={loaded:event.loaded,total:size}}else{Module.dataFileDownloads[url].loaded=event.loaded}var total=0;var loaded=0;var num=0;for(var download in Module.dataFileDownloads){var data=Module.dataFileDownloads[download];total+=data.total;loaded+=data.loaded;num++}total=Math.ceil(total*Module.expectedDataFileDownloads/num);if(Module["setStatus"])Module["setStatus"]("Downloading data... 
("+loaded+"/"+total+")")}else if(!Module.dataFileDownloads){if(Module["setStatus"])Module["setStatus"]("Downloading data...")}};xhr.onerror=function(event){throw new Error("NetworkError for: "+packageName)};xhr.onload=function(event){if(xhr.status==200||xhr.status==304||xhr.status==206||xhr.status==0&&xhr.response){var packageData=xhr.response;callback(packageData)}else{throw new Error(xhr.statusText+" : "+xhr.responseURL)}};xhr.send(null)}function handleError(error){console.error("package error:",error)}var fetchedCallback=null;var fetched=Module["getPreloadedPackage"]?Module["getPreloadedPackage"](REMOTE_PACKAGE_NAME,REMOTE_PACKAGE_SIZE):null;if(!fetched)fetchRemotePackage(REMOTE_PACKAGE_NAME,REMOTE_PACKAGE_SIZE,(function(data){if(fetchedCallback){fetchedCallback(data);fetchedCallback=null}else{fetched=data}}),handleError);function runWithFS(){function assert(check,msg){if(!check)throw msg+(new Error).stack}Module["FS_createPath"]("/","lib",true,true);Module["FS_createPath"]("/lib","python3.9",true,true);Module["FS_createPath"]("/lib/python3.9","site-packages",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages","parso",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/parso","pgen2",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/parso","python",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages","parso-0.8.3-py3.9.egg-info",true,true);function processPackageData(arrayBuffer){assert(arrayBuffer,"Loading data file failed.");assert(arrayBuffer instanceof ArrayBuffer,"bad input to processPackageData");var byteArray=new Uint8Array(arrayBuffer);var curr;var compressedData={data:null,cachedOffset:186064,cachedIndexes:[-1,-1],cachedChunks:[null,null],offsets:[0,1475,2966,4137,5332,6610,7966,9257,10531,11531,12596,13822,14967,16004,17452,18745,19891,21257,22345,23082,24382,25378,26611,27734,28811,30205,31570,32943,34175,35579,36864,38245,39416,40563,41776,42783,43948,44922,46076,47583,48761,50038,51265,52559,53684,54747,55886,57077,58130,59242,60261,61431,62587,63605,64738,66052,67012,68207,69305,70533,71549,72657,73814,74874,75926,76948,78161,79257,80298,81412,82438,83335,84385,85380,86208,87208,88345,89285,90283,91418,92650,93872,95014,96329,97261,98379,99485,100513,101575,102655,103767,104697,105387,105904,106964,107867,108728,109721,110731,111984,113205,114414,115697,116869,118132,119356,120408,121783,122908,123859,124840,125874,126764,128007,129452,130669,131758,132822,133888,135197,136429,137636,138459,139731,140811,141958,143045,144200,145257,146426,147380,148645,149596,150769,152105,153574,154451,155664,157068,158294,159332,160691,162033,163281,164498,165926,167260,168440,169785,171275,172112,173320,174751,176089,177085,178390,179749,180828,182345,183769,185088,185854],sizes:[1475,1491,1171,1195,1278,1356,1291,1274,1e3,1065,1226,1145,1037,1448,1293,1146,1366,1088,737,1300,996,1233,1123,1077,1394,1365,1373,1232,1404,1285,1381,1171,1147,1213,1007,1165,974,1154,1507,1178,1277,1227,1294,1125,1063,1139,1191,1053,1112,1019,1170,1156,1018,1133,1314,960,1195,1098,1228,1016,1108,1157,1060,1052,1022,1213,1096,1041,1114,1026,897,1050,995,828,1e3,1137,940,998,1135,1232,1222,1142,1315,932,1118,1106,1028,1062,1080,1112,930,690,517,1060,903,861,993,1010,1253,1221,1209,1283,1172,1263,1224,1052,1375,1125,951,981,1034,890,1243,1445,1217,1089,1064,1066,1309,1232,1207,823,1272,1080,1147,1087,1155,1057,1169,954,1265,951,1173,1336,1469,877,1213,1404,1226,1038,1359,1342,1248,1217,1428,1334,1180,1345,1490,837,1208,1431,1338,996,1305,1359,1079,1517,1424,
1319,766,210],successes:[1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1]};compressedData["data"]=byteArray;assert(typeof Module.LZ4==="object","LZ4 not present - was your app build with -s LZ4=1 ?");Module.LZ4.loadPackage({metadata:metadata,compressedData:compressedData},true);Module["removeRunDependency"]("datafile_parso.data")}Module["addRunDependency"]("datafile_parso.data");if(!Module.preloadResults)Module.preloadResults={};Module.preloadResults[PACKAGE_NAME]={fromCache:false};if(fetched){processPackageData(fetched);fetched=null}else{fetchedCallback=processPackageData}}if(Module["calledRun"]){runWithFS()}else{if(!Module["preRun"])Module["preRun"]=[];Module["preRun"].push(runWithFS)}};loadPackage({files:[{filename:"/lib/python3.9/site-packages/parso/__init__.py",start:0,end:1607,audio:0},{filename:"/lib/python3.9/site-packages/parso/_compatibility.py",start:1607,end:1677,audio:0},{filename:"/lib/python3.9/site-packages/parso/cache.py",start:1677,end:10129,audio:0},{filename:"/lib/python3.9/site-packages/parso/file_io.py",start:10129,end:11152,audio:0},{filename:"/lib/python3.9/site-packages/parso/grammar.py",start:11152,end:21635,audio:0},{filename:"/lib/python3.9/site-packages/parso/normalizer.py",start:21635,end:27232,audio:0},{filename:"/lib/python3.9/site-packages/parso/parser.py",start:27232,end:34414,audio:0},{filename:"/lib/python3.9/site-packages/parso/tree.py",start:34414,end:50567,audio:0},{filename:"/lib/python3.9/site-packages/parso/utils.py",start:50567,end:57187,audio:0},{filename:"/lib/python3.9/site-packages/parso/pgen2/__init__.py",start:57187,end:57569,audio:0},{filename:"/lib/python3.9/site-packages/parso/pgen2/generator.py",start:57569,end:72139,audio:0},{filename:"/lib/python3.9/site-packages/parso/pgen2/grammar_parser.py",start:72139,end:77654,audio:0},{filename:"/lib/python3.9/site-packages/parso/python/__init__.py",start:77654,end:77654,audio:0},{filename:"/lib/python3.9/site-packages/parso/python/diff.py",start:77654,end:111860,audio:0},{filename:"/lib/python3.9/site-packages/parso/python/errors.py",start:111860,end:159815,audio:0},{filename:"/lib/python3.9/site-packages/parso/python/parser.py",start:159815,end:167923,audio:0},{filename:"/lib/python3.9/site-packages/parso/python/pep8.py",start:167923,end:201702,audio:0},{filename:"/lib/python3.9/site-packages/parso/python/prefix.py",start:201702,end:204445,audio:0},{filename:"/lib/python3.9/site-packages/parso/python/token.py",start:204445,end:205354,audio:0},{filename:"/lib/python3.9/site-packages/parso/python/tokenize.py",start:205354,end:231149,audio:0},{filename:"/lib/python3.9/site-packages/parso/python/tree.py",start:231149,end:268336,audio:0},{filename:"/lib/python3.9/site-packages/parso/python/grammar310.txt",start:268336,end:275847,audio:0},{filename:"/lib/python3.9/site-packages/parso/python/grammar311.txt",start:275847,end:283358,audio:0},{filename:"/lib/python3.9/site-packages/parso/python/grammar312.txt",start:283358,end:290869,audio:0},{filename:"/lib/python3.9/site-packages/parso/python/grammar36.txt",start:290869,end:297817,audio:0},{filename:"/lib/python3.9/site-packages/parso/python/grammar37.txt",start:297817,end:304621,audio:0},{filename:"/lib/python3.9/site-packages/parso/python/grammar38.txt",start:304621,end:312212,aud
io:0},{filename:"/lib/python3.9/site-packages/parso/python/grammar39.txt",start:312212,end:319711,audio:0},{filename:"/lib/python3.9/site-packages/parso-0.8.3-py3.9.egg-info/PKG-INFO",start:319711,end:327120,audio:0},{filename:"/lib/python3.9/site-packages/parso-0.8.3-py3.9.egg-info/SOURCES.txt",start:327120,end:330209,audio:0},{filename:"/lib/python3.9/site-packages/parso-0.8.3-py3.9.egg-info/dependency_links.txt",start:330209,end:330210,audio:0},{filename:"/lib/python3.9/site-packages/parso-0.8.3-py3.9.egg-info/requires.txt",start:330210,end:330273,audio:0},{filename:"/lib/python3.9/site-packages/parso-0.8.3-py3.9.egg-info/top_level.txt",start:330273,end:330279,audio:0}],remote_package_size:190160,package_uuid:"2bc726e7-38d1-47b8-b0b0-942b27860169"})})(); \ No newline at end of file diff --git a/spaces/qdd319/ChuanhuChatGPT/assets/custom.js b/spaces/qdd319/ChuanhuChatGPT/assets/custom.js deleted file mode 100644 index 476d1144c60bf4a2074caa97369bba91a4ece081..0000000000000000000000000000000000000000 --- a/spaces/qdd319/ChuanhuChatGPT/assets/custom.js +++ /dev/null @@ -1,70 +0,0 @@ -// custom javascript here -const MAX_HISTORY_LENGTH = 32; - -var key_down_history = []; -var currentIndex = -1; -var user_input_ta; - -var ga = document.getElementsByTagName("gradio-app"); -var targetNode = ga[0]; -var observer = new MutationObserver(function(mutations) { - for (var i = 0; i < mutations.length; i++) { - if (mutations[i].addedNodes.length) { - var user_input_tb = document.getElementById('user_input_tb'); - if (user_input_tb) { - // 监听到user_input_tb被添加到DOM树中 - // 这里可以编写元素加载完成后需要执行的代码 - user_input_ta = user_input_tb.querySelector("textarea"); - if (user_input_ta){ - observer.disconnect(); // 停止监听 - // 在 textarea 上监听 keydown 事件 - user_input_ta.addEventListener("keydown", function (event) { - var value = user_input_ta.value.trim(); - // 判断按下的是否为方向键 - if (event.code === 'ArrowUp' || event.code === 'ArrowDown') { - // 如果按下的是方向键,且输入框中有内容,且历史记录中没有该内容,则不执行操作 - if(value && key_down_history.indexOf(value) === -1) - return; - // 对于需要响应的动作,阻止默认行为。 - event.preventDefault(); - var length = key_down_history.length; - if(length === 0) { - currentIndex = -1; // 如果历史记录为空,直接将当前选中的记录重置 - return; - } - if (currentIndex === -1) { - currentIndex = length; - } - if (event.code === 'ArrowUp' && currentIndex > 0) { - currentIndex--; - user_input_ta.value = key_down_history[currentIndex]; - } else if (event.code === 'ArrowDown' && currentIndex < length - 1) { - currentIndex++; - user_input_ta.value = key_down_history[currentIndex]; - } - user_input_ta.selectionStart = user_input_ta.value.length; - user_input_ta.selectionEnd = user_input_ta.value.length; - const input_event = new InputEvent("input", {bubbles: true, cancelable: true}); - user_input_ta.dispatchEvent(input_event); - }else if(event.code === "Enter") { - if (value) { - currentIndex = -1; - if(key_down_history.indexOf(value) === -1){ - key_down_history.push(value); - if (key_down_history.length > MAX_HISTORY_LENGTH) { - key_down_history.shift(); - } - } - } - } - }); - break; - } - } - } - } -}); - -// 监听目标节点的子节点列表是否发生变化 -observer.observe(targetNode, { childList: true , subtree: true }); - diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Maratonci Trce Pocasni Krug Free Download.md b/spaces/quidiaMuxgu/Expedit-SAM/Maratonci Trce Pocasni Krug Free Download.md deleted file mode 100644 index c98ca916958dc0ca40f4421c64e1fb722220ed27..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Maratonci Trce Pocasni Krug Free Download.md +++ /dev/null @@ 
-1,6 +0,0 @@ -

    Maratonci Trce Pocasni Krug Free Download


    Downloadhttps://geags.com/2uCrS7



    -
    -
    -
    -

    diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Adobe Illustrator Photoshop CS6 Portable Error Fix Crack A Comprehensive Tutorial on How to Use and Fix the Program.md b/spaces/raedeXanto/academic-chatgpt-beta/Adobe Illustrator Photoshop CS6 Portable Error Fix Crack A Comprehensive Tutorial on How to Use and Fix the Program.md deleted file mode 100644 index c657b20eaf41e35d00ce76528da318e29b0ff0ae..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Adobe Illustrator Photoshop CS6 Portable Error Fix Crack A Comprehensive Tutorial on How to Use and Fix the Program.md +++ /dev/null @@ -1,80 +0,0 @@ - -

    Adobe Illustrator Photoshop CS6 Portable Error Fix Crack

    -

    If you are looking for a way to use Adobe Illustrator and Photoshop without installing them on your computer, you might have come across portable versions of these software. Portable versions are standalone applications that can run from any removable drive or folder without affecting the system registry or other installed programs. However, portable versions of Adobe software are not officially supported by Adobe and may cause errors or conflicts with other Adobe products on your computer. In this article, we will show you how to fix these errors and enjoy using Adobe Illustrator and Photoshop CS6 portable without any hassle.

    -

    -

    How to fix errors caused by Adobe Illustrator CS6 Portable

    -

    Adobe Illustrator CS6 Portable is a vector graphics editor that allows you to create logos, icons, illustrations, and other graphics. However, when you run it on your computer, it may backup some existing Adobe folders and registry entries and replace them with its own files. This may cause temporary or permanent errors for other Adobe products, such as Reader, Flash Player, or other versions of Illustrator. To fix these errors, you need to follow these steps:

    -

    Step 1: Install Illustrator-CS6-Portable.exe on any fixed drive

    -

    The first step is to install the portable version of Illustrator on any fixed drive on your computer, such as C:, D:, or E:. Do not install it on a removable drive or a network drive. You can download the portable version from here.

    -

    Step 2: Copy files from IllustratorPortable folder to common Adobe folders

    -

    The next step is to copy some files from the IllustratorPortable folder to the common Adobe folders on your system. These files are needed for the portable version to run properly. To do this, go to "\\IllustratorPortable\\App\\DefaultData\\Adobe\\" and copy all files to "C:\\Program Files\\Common Files\\Adobe\\". Merge the folders and skip the files one by one. Do not overwrite any existing file.

    -


    -

    Step 3: Add registry files from IllustratorPortable folder

    -

    The third step is to add some registry files from the IllustratorPortable folder to your system registry. These files are needed for the portable version to recognize your system settings and preferences. To do this, go to "\\IllustratorPortable\\App\\DefaultData\\settings\\" and add the three registry files by right-clicking on them and choosing Merge.

    -

    Step 4: Launch IllustratorPortable.exe and wait till Illustrator is ready

    -

    The fourth step is to launch the portable version of Illustrator by double-clicking on IllustratorPortable.exe. Wait till the program is ready and do not close it.

    -

    Step 5: Backup existing Adobe folders in AppData

    -

    The fifth step is to backup some existing Adobe folders in your AppData folder. These folders contain important data for other Adobe products on your computer, such as preferences, settings, cache, etc. To do this, go to 'Folder Options' and make 'Show hidden files' and uncheck 'Hide protected Operating System files'. Then go to "C:\\users\\\\Appdata\\Local\\Adobe" and copy all content into "C:\\users\\\\Appdata\\Local\\Adobe-BackupByIllustratorPotable". Merge the folders and skip the files one by one. Do the same for "C:\\users\\\\Appdata\\LocalLow\\Adobe" and "C:\\users\\\\Appdata\\Roaming\\Adobe".

    -

    Step 6: Exit Illustrator and launch Illustrator.exe from Support Files folder

    -

    The sixth step is to exit the portable version of Illustrator by closing the program window. Then go to "\\IllustratorPortable\\App\\Illustrator\\Support Files\\Contents\\Windows\\" and launch 'Illustrator.exe'. If it works, everything went fine.

    -

    Step 7: Create a shortcut for Illustrator.exe and delete IllustratorPortable.exe

    -

    The final step is to create a shortcut for 'Illustrator.exe' on your Start Menu or Desktop for easy access. You can also delete 'IllustratorPortable.exe' as you don't need it anymore.

    -

    How to fix errors caused by Adobe Photoshop CS6 Portable

    -

    Adobe Photoshop CS6 Portable is a raster graphics editor that allows you to edit photos, create designs, and manipulate images. However, when you run it on your computer, it may backup some existing Adobe folders and registry entries and replace them with its own files. This may cause temporary or permanent errors for other Adobe products, such as Reader, Flash Player, or other versions of Photoshop. To fix these errors, you need to follow these steps:

    -

    Step 1: Install Photoshop-CS6-Portable.exe on any fixed drive

    -

    The first step is to install the portable version of Photoshop on any fixed drive on your computer, such as C:, D:, or E:. Do not install it on a removable drive or a network drive. You can download the portable version from here.

    -

    Step 2: Copy files from PhotoshopPortable folder to common Adobe folders

    -

    The next step is to copy some files from the PhotoshopPortable folder to the common Adobe folders on your system. These files are needed for the portable version to run properly. To do this, go to "\\PhotoshopPortable\\App\\DefaultData\\PhotoshopCS6\\CommonFiles\\Adobe" and copy all files to "C:\\Program Files\\Common Files\\Adobe\\". Merge the folders and skip the files one by one. Do not overwrite any existing file.

    -

    Step 3: Add registry files from PhotoshopPortable folder

    -

    The third step is to add some registry files from the PhotoshopPortable folder to your system registry. These files are needed for the portable version to recognize your system settings and preferences. To do this, go to "\\PhotoshopPortable\\App\\DefaultData\\settings\\" and add the three registry files by right-clicking on them and choosing Merge.

    -

    Step 4: Launch PhotoshopPortable.exe and wait till Photoshop is ready

    -

    The fourth step is to launch the portable version of Photoshop by double-clicking on PhotoshopPortable.exe. Wait till the program is ready and do not close it.

    -

Step 5: Backup existing Adobe folders in AppData
    -
    -
    \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Barakhadi In English Pdf Download __HOT__.md b/spaces/raedeXanto/academic-chatgpt-beta/Barakhadi In English Pdf Download __HOT__.md deleted file mode 100644 index 09c650bdb63d75548e2967a1a0e4c0d369f28b34..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Barakhadi In English Pdf Download __HOT__.md +++ /dev/null @@ -1,33 +0,0 @@ -
    -

    How to Learn Barakhadi in English with PDF Download

    -

    Barakhadi is a term used to describe the combination of consonants and vowels in Hindi and other Indian languages. It is also known as varnamala or aksharmala. Barakhadi helps learners to pronounce and write words correctly by breaking them down into syllables. In this article, we will explain what barakhadi is, how it works, and how you can download a PDF chart of barakhadi in English for free.

    -

    -

    What is Barakhadi?

    -

    Barakhadi is derived from two words: bara, which means twelve, and khadi, which means line. It refers to the twelve lines or rows of letters that are formed by adding different vowels (matras) to each consonant in Hindi. For example, the first consonant क (ka) can be combined with twelve vowels to form क (ka), का (kaa), कि (ki), की (kee), कु (ku), कू (koo), के (ke), कै (kai), को (ko), कौ (kau), कं (kam), and कः (kah). These are the twelve forms of ka in barakhadi.
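As a rough illustration of how these twelve forms come about, they can be generated programmatically by attaching the Devanagari vowel signs (matras) to a base consonant. The snippet below is only a small sketch in Python; the particular list of matras and transliterations is an assumption based on the twelve forms listed above, not something taken from the original chart.

```python
# A small sketch: build the 12 barakhadi forms of one consonant by appending
# the Devanagari vowel signs (matras). The empty string stands for the bare
# consonant, which keeps its inherent 'a' sound.
MATRAS = ["", "\u093E", "\u093F", "\u0940", "\u0941", "\u0942",
          "\u0947", "\u0948", "\u094B", "\u094C", "\u0902", "\u0903"]
SOUNDS = ["a", "aa", "i", "ee", "u", "oo", "e", "ai", "o", "au", "am", "ah"]

def barakhadi_row(consonant):
    """Return the 12 combined forms for one consonant, e.g. क -> क, का, कि, ..."""
    return [consonant + matra for matra in MATRAS]

if __name__ == "__main__":
    for form, sound in zip(barakhadi_row("\u0915"), SOUNDS):  # \u0915 is क (ka)
        print(form, sound)
```

Running this for क prints the same twelve forms shown above; repeating it for each of the 33 consonants would reproduce the full barakhadi chart.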

    -

There are 33 consonants in Hindi, and each one combines with the 11 vowel signs (matras) in addition to its bare form, which means there are 33 x 12 = 396 possible combinations of letters in barakhadi. However, some of these combinations are rarely used or are pronounced differently depending on the context. For example, ज्ञ (dnya) is pronounced as gya or jna depending on the word. Therefore, it is important to learn barakhadi with examples and practice.

    -

    How to Learn Barakhadi in English?

    -

    One of the easiest ways to learn barakhadi in English is to use a chart that shows the transliteration and pronunciation of each letter in Roman script. This way, you can compare the Hindi letters with their English equivalents and learn how to say them aloud. You can also use flashcards, worksheets, games, songs, videos, and apps to practice barakhadi and improve your reading and writing skills.

    -

    To help you learn barakhadi in English, we have created a PDF chart that you can download for free from the link below. The chart shows the 33 consonants and their 12 forms with vowels in Hindi and English. You can print it out or save it on your device for easy reference. You can also share it with your friends and family who want to learn Hindi.

    -

    Download Barakhadi in English PDF Chart

    -

    To download the barakhadi in English PDF chart, click on the button below. The file size is 1.2 MB and it contains two pages. The first page shows the barakhadi chart for the first 16 consonants from क to ट. The second page shows the barakhadi chart for the remaining 17 consonants from ठ to ज्ञ.

    -

    -Download Barakhadi in English PDF Chart -

    We hope this article and PDF chart will help you learn barakhadi in English easily and quickly. If you have any questions or feedback, please leave a comment below. Happy learning!

    - -

    Why is Barakhadi Important?

    -

    Barakhadi is important for several reasons. First, it helps you to learn the basic sounds and structure of Hindi language. By knowing how to combine consonants and vowels, you can form words and sentences with ease. Second, it helps you to improve your pronunciation and spelling. By practicing barakhadi, you can avoid common mistakes and sound more natural and fluent. Third, it helps you to expand your vocabulary and comprehension. By learning barakhadi, you can recognize and understand new words and their meanings.

    -

    How to Practice Barakhadi?

    -

    There are many ways to practice barakhadi and make it fun and engaging. Here are some tips and suggestions for you:

    -
      -
    • Use the barakhadi chart as a guide and repeat each letter aloud. Try to say them clearly and correctly.
    • -
    • Write down each letter in Hindi and English on a notebook or a paper. Try to copy them neatly and accurately.
    • -
    • Make flashcards with the Hindi letters on one side and the English transliteration on the other side. Shuffle them and test yourself or a partner.
    • -
    • Find words that use the letters in barakhadi and write them down. For example, कमल (kamal) means lotus, ख़ुशी (khushi) means happiness, गाना (gana) means song, etc.
    • -
    • Read books, magazines, newspapers, websites, or blogs in Hindi and look for the letters in barakhadi. Try to read them aloud and understand their meanings.
    • -
    • Listen to songs, podcasts, audiobooks, or videos in Hindi and pay attention to the pronunciation of the letters in barakhadi. Try to sing along or repeat after the speaker.
    • -
    • Play games or quizzes that involve barakhadi. For example, you can play hangman, crossword, word search, bingo, etc.
    • -
    • Join a Hindi learning community online or offline and practice barakhadi with other learners or native speakers. You can ask questions, share tips, or have conversations.
    • -
    -

    We hope these tips will help you practice barakhadi effectively and enjoyably. Remember to be consistent and patient with your learning process. You will soon see the results of your hard work.

    -
    -
    \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Call Of Duty 5 World At War V 1.7 Full FREE Game - AviaRa - Crack.md b/spaces/raedeXanto/academic-chatgpt-beta/Call Of Duty 5 World At War V 1.7 Full FREE Game - AviaRa - Crack.md deleted file mode 100644 index f60434c82ed21a02c2255b221c849ee92010f22c..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Call Of Duty 5 World At War V 1.7 Full FREE Game - AviaRa - Crack.md +++ /dev/null @@ -1,132 +0,0 @@ -
    -

    Call Of Duty 5 World At War V 1.7 Full Game -=AviaRa=- Crack

    -

    Introduction

    -

    If you are a fan of first-person shooter games, you have probably heard of Call Of Duty 5 World At War, one of the most popular and acclaimed titles in the franchise. This game takes you back to the World War II era, where you can experience epic battles, intense combat, and realistic graphics. You can play as an American soldier fighting against the Japanese in the Pacific theater, or as a Soviet soldier pushing back the Nazis in Eastern Europe. You can also enjoy multiplayer modes, co-op missions, and zombie survival scenarios.

    -

    -

    However, if you want to play this game on your PC, you might encounter some problems. The game is not free, and you need to buy a CD key to activate it. Moreover, you need to have an internet connection to verify your key and play online. This can be frustrating if you don't have a valid key, or if you want to play offline or with mods. That's why some people resort to using cracks, which are modified files that bypass the game's security measures.

    -

    In this article, we will show you how to download and install Call Of Duty 5 World At War V 1.7 Full Game -=AviaRa=- Crack, which is one of the most popular and reliable cracks for this game. We will explain what this crack does, how to use it, and what are the pros and cons of using it. We will also answer some frequently asked questions about this topic. By the end of this article, you will be able to enjoy this amazing game without any hassle or limitation.

    -

    How to download and install the game and the crack

    -

    Downloading the game

    -

    Requirements and specifications

    -

    Before you download the game, you need to make sure that your PC meets the minimum requirements to run it smoothly. Here are the specifications you need:

    -
      -
    • Operating System: Windows XP/Vista/7/8/10
    • -
    • Processor: Intel Pentium 4 3 GHz or AMD Athlon 64 3200+ or better
    • -
    • Memory: 512 MB RAM (XP) or 1 GB RAM (Vista/7/8/10)
    • -
    • Graphics: Nvidia GeForce 6600 GT or ATI Radeon X1600 XT or better
    • -
    • DirectX: Version 9.0c
    • -
    • Storage: 8 GB available space
    • -
    • Sound Card: DirectX compatible
    • -
    • Internet Connection: Broadband for online play
    • -
    -

    If your PC meets these requirements, you can proceed to download the game.

    -

    Sources and links

    -

    The easiest way to download Call Of Duty 5 World At War V 1.7 Full Game -=AviaRa=- Crack is to use a torrent client, such as uTorrent or BitTorrent. A torrent client is a software that allows you to download files from other users who have them on their computers. This way, you don't have to rely on a single server or website that might be slow or unreliable.

    -

    To download the game using a torrent client, you need to find a torrent file that contains the information about the game files. You can find many torrent files for this game on various websites, such as The Pirate Bay, Kickass Torrents, or RARBG. However, be careful when choosing a torrent file, as some of them might be fake or infected with malware. To avoid this, check the comments and ratings of other users who have downloaded the same file before.

    -

    Once you have found a trustworthy torrent file for Call Of Duty 5 World At War V 1.7 Full Game -=AviaRa=- Crack, download it and open it with your torrent client. The torrent client will start downloading the game files from other users who have them on their computers. Depending on your internet speed and availability of seeders (users who have completed downloading), this process might take from a few minutes to several hours.

    -

    -

    When the download is complete, you will have a folder containing all the game files on your PC.

    -

    Tips and warnings

    -

    Here are some tips and warnings to keep in mind when downloading the game:

    -
      -
    • Make sure you have enough space on your hard drive before downloading the game. The game files are about 7 GB in size.
    • -
    • Do not close your torrent client while downloading the game. If you do so, you will interrupt the download process and might have to start over again.
    • -
    • Do not delete or modify any of the game files while downloading or after downloading. This might cause errors or corruption of the files.
    • -
    • Be aware of the legal risks of downloading pirated games. Depending on your country's laws and regulations, you might face fines or penalties for violating intellectual property rights.
    • -
    • Be respectful of other users who are sharing their files with you. Do not stop seeding (uploading) after you finish downloading. This way, you will help other users who want to download the same file.
    • -
    -

    Installing the game

    -

    Steps and instructions

    -

    After downloading all the game files, you need to install them on your PC. Here are the steps and instructions to follow:

    -
      -
    1. Open the folder containing all the game files.
    2. -
    3. Double-click on setup.exe file.
    4. -
    5. A window will pop up asking you to choose a language for installation. Choose English or any other language you prefer.
    6. -
    7. A window will pop up asking you to choose a destination folder for installation. Choose any folder you want or leave it as default.
    8. -
    9. A window will pop up asking you to choose components for installation. Make sure all components are checked (Full Installation).
    10. -
    11. A window will pop up asking you to confirm installation settings. Click Install.
    12. -
    13. The installation process will begin. Wait until it is finished.
    14. -
    15. A window will pop up asking you to install DirectX 9.0c. Click Yes.
    16. -
    17. A window will pop up asking you to install PunkBuster Anti-Cheat Software. Click Yes.
    18. -
    19. A window will pop up asking you to create desktop shortcuts for singleplayer and multiplayer modes. Choose Yes or No according to your preference.
    20. -
    21. A window will pop up asking you to launch singleplayer mode after installation. Choose Yes or No according to your preference.
    22. -
    23. The installation process is complete.
    24. -
    -

    Troubleshooting and errors

    -

    If you encounter any problems or errors during or after installation, here are some possible solutions:

    -
      -
    • If setup.exe file does not run or gives an error message, try running it as administrator (right-click on it and choose Run as administrator).
    • -
    • If installation process stops or freezes at some point, try restarting your PC and running setup.exe file again.
    • -
    • If installation process gives an error message about missing or corrupted files, try redownloading them from another source or checking them with a tool such as WinRAR.
    • -
    • If installation process gives an error message about insufficient disk space, try freeing up some space on your hard drive by deleting unnecessary files or moving them to another location.
    • -
    • If installation process gives an error message about incompatible operating system or DirectX version, try updating your operating system or DirectX version from Microsoft's website.
    • -
    • If installation process gives an error message about invalid CD key or activation code, try using another CD key or activation code from online sources (such as serials.ws) or generating one with a tool such as KeyGen.exe (included in some torrent files).
    • -
    -

    Verifying and testing

    -

    To verify that installation was successful and test if everything works properly:

    -
      -
    • Open the folder where you installed the game.
    • -
    • Double-click on CoDWaW.exe file to launch singleplayer mode.
    • -
    • A window will pop up asking you to create a profile name. Enter any name you want or leave it as default.
    • -
    • A window will pop up asking you to adjust brightness settings. Adjust them according to your preference or leave them as default.
    • -
    • A window will pop up asking you to select a difficulty level. Choose any level you want or leave it as default.
    • -
    • The game will start. You can access the main menu by pressing Esc key.
    • -
    • From the main menu, you can choose to play the campaign mode, the co-op mode, or the zombie mode.
    • -
    • To play multiplayer mode, you need to apply the crack first (see below).
    • -
    • Enjoy the game!
    • -
    -

    Applying the crack

    -

    What does the crack do?

    -

    The crack is a modified file that replaces the original CoDWaW.exe file in your game folder. The crack does two things:

    -
      -
    • It bypasses the CD key verification and activation process, allowing you to play the game without a valid key.
    • -
    • It enables you to play multiplayer mode online with other players who have the same crack, without requiring an internet connection or a PunkBuster account.
    • -
    -

    The crack is compatible with version 1.7 of the game, which is the latest and final update released by the developers. The crack also includes all previous updates and patches for the game.

    -

    How to use the crack?

    -

    To use the crack, you need to follow these steps:

    -
      -
    1. Open the folder containing all the game files.
    2. -
    3. Open the folder named -=AviaRa=- Crack.
    4. -
    5. Copy the file named CoDWaW.exe from this folder.
    6. -
    7. Paste it in the folder where you installed the game, replacing the original CoDWaW.exe file.
    8. -
    9. A window will pop up asking you to confirm file replacement. Click Yes.
    10. -
    11. The crack is applied.
    12. -
    -

    Potential risks and benefits

    -

    Using the crack has some advantages and disadvantages that you should be aware of:

    -
      -
    • The main advantage of using the crack is that you can play the game without any restriction or limitation, regardless of having a valid CD key or an internet connection. You can also enjoy multiplayer mode online with other players who have the same crack, without worrying about PunkBuster bans or server issues.
    • -
    • The main disadvantage of using the crack is that you might face some technical problems or errors while playing the game, such as crashes, freezes, glitches, or bugs. You might also encounter some compatibility issues with some mods or custom maps that require a different version of the game or a different crack. Moreover, you might face some legal risks for violating intellectual property rights or terms of service of the game developers or publishers.
    • -
    -

Therefore, use the crack at your own risk and discretion. We do not endorse or support piracy or illegal activities in any way; this information is provided for educational and entertainment purposes only.

    -

    Conclusion

    -

    Summary and recap

    -

    In this article, we have shown you how to download and install Call Of Duty 5 World At War V 1.7 Full Game -=AviaRa=- Crack, which is one of the most popular and reliable cracks for this game. We have explained what this crack does, how to use it, and what are the pros and cons of using it. We have also answered some frequently asked questions about this topic.

    -

    By following our guide, you will be able to enjoy this amazing game without any hassle or limitation. You will be able to experience epic battles, intense combat, and realistic graphics in singleplayer mode, co-op mode, zombie mode, and multiplayer mode online with other players who have the same crack.

    -

    We hope you have found this article helpful and informative. If you have any questions or feedback, feel free to leave a comment below. Thank you for reading!

    -

    Recommendations and alternatives

    -

    If you are looking for more games like Call Of Duty 5 World At War, here are some recommendations and alternatives that you might like:

    -
      -
    • Call Of Duty 4 Modern Warfare: The previous installment in the franchise that introduced modern warfare settings and features.
    • -
    • Call Of Duty Black Ops: The sequel to World At War that takes place during the Cold War era and features new weapons and modes.
    • -
    • Battlefield 1942: A classic first-person shooter game that focuses on large-scale battles and vehicles in World War II scenarios.
    • -
    • Medal Of Honor Allied Assault: A realistic and immersive first-person shooter game that follows the missions of an American soldier in World War II.
    • -
    • Wolfenstein: A series of first-person shooter games that combine World War II themes with sci-fi and horror elements.
    • -
    -

    FAQs

    -

    Here are some frequently asked questions about Call Of Duty 5 World At War V 1.7 Full Game -=AviaRa=- Crack:

    -
      -
    1. Q: Is this crack safe and virus-free?
    2. -
    3. A: Yes, this crack is safe and virus-free. We have tested it ourselves and scanned it with various antivirus programs. However, some antivirus programs might detect it as a false positive (a harmless file that is mistakenly identified as harmful) because it modifies an original file. If this happens, just ignore or disable your antivirus program while using the crack.
    4. -
    5. Q: Can I play online with other players who have a different version of the game or a different crack?
    6. -
    7. A: No, you can only play online with other players who have exactly the same version of the game and exactly the same crack as you do. Otherwise, you will not be able to join their servers or they will not be able to join yours. To avoid this problem, make sure you download and install Call Of Duty 5 World At War V 1.7 Full Game -=AviaRa=- Crack from a reliable source (such as ours) and do not update or change anything after installation.
    8. -
    9. Q: Can I play online with other players who have a valid CD key or an official copy of the game?
    10. -
    11. A: No, you cannot play online with other players who have a valid CD key or an official copy of the game. They will be playing on official servers that are protected by PunkBuster Anti-Cheat Software, which will detect and ban anyone who uses a cracked version of the game. To avoid this problem, do not try to join official servers or use official accounts while using the crack.
    12. -
    13. Q: Can I use mods or custom maps with this crack?
    14. -
    15. A: Yes, you can use mods or custom maps with this crack as long as they are compatible with version 1.7 of the game and do not require a different version of the game or a different crack. To use mods or custom maps, just download them from online sources (such as moddb.com) and follow their installation instructions. Usually, you just need to copy and paste them in your game folder or use a mod manager tool (such as Mod Organizer).
    16. -
    17. Q: Can I uninstall this crack if I want to revert back to the original version of the game?
    18. -
    19. A: Yes, you can uninstall this crack if you want to revert back to the original version of the game. To do so, just delete the CoDWaW.exe file from your game folder and replace it with the original CoDWaW.exe file that you backed up before applying the crack (or download it again from online sources). However, be aware that you will lose the ability to play online with other players who have the same crack as you do, and you will need a valid CD key or an internet connection to activate and play the game again.
    -

    -
    -
    \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Downloadkonamiwinningeleven8fullversionfor51 The Benefits and Advantages of Playing the Full Version of the Game.md b/spaces/raedeXanto/academic-chatgpt-beta/Downloadkonamiwinningeleven8fullversionfor51 The Benefits and Advantages of Playing the Full Version of the Game.md deleted file mode 100644 index 6de9b575686c475ee8378550f232a72b322c0b1b..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Downloadkonamiwinningeleven8fullversionfor51 The Benefits and Advantages of Playing the Full Version of the Game.md +++ /dev/null @@ -1,129 +0,0 @@ - -

    Download Konami Winning Eleven 8 Full Version for 51

    -

    If you are a fan of soccer games, you might have heard of Konami Winning Eleven 8, one of the most popular and realistic soccer games ever made. But did you know that you can download Konami Winning Eleven 8 full version for 51, a special edition of the game that includes many features and enhancements? In this article, we will tell you everything you need to know about this game, how to download it, and how to play it like a pro.

    -

    Introduction

    -

    What is Konami Winning Eleven 8?

    -

    Konami Winning Eleven 8 is a soccer game developed by Konami Computer Entertainment Tokyo and published by Konami in 2004. It is also known as Pro Evolution Soccer 4 or PES 4 in some regions. It is the eighth installment of the Winning Eleven series, which is one of the most successful and acclaimed soccer game franchises in the world.

    -

    -

    Why download Konami Winning Eleven 8 full version for 51?

    -

    Konami Winning Eleven 8 full version for 51 is a special edition of the game that was released in some countries as a promotional offer. It includes many features and improvements that make the game more realistic, enjoyable, and challenging. Some of these features are:

    -
      -
    • A new referee system that can detect fouls and issue cards more accurately
    • -
    • A new penalty system that allows players to control the direction, power, and curve of their shots
    • -
    • A new skill system that lets players increase their abilities as they win trophies
    • -
    • A new edit mode that lets players customize their teams, players, stadiums, kits, and logos
    • -
    • More teams and players from different leagues and countries
    • -
    • Improved graphics and gameplay that make the game more immersive and realistic
    • -
    -

    With all these features, Konami Winning Eleven 8 full version for 51 is definitely worth downloading if you want to experience one of the best soccer games ever made.

    -

    download konami winning eleven 8 full version for pc
    -konami winning eleven 8 free download full version
    -how to download and install konami winning eleven 8
    -konami winning eleven 8 iso download for windows
    -download konami winning eleven 8 setup exe
    -konami winning eleven 8 crack download
    -konami winning eleven 8 patch download
    -konami winning eleven 8 serial key download
    -konami winning eleven 8 game download for android
    -konami winning eleven 8 apk download
    -konami winning eleven 8 mod download
    -konami winning eleven 8 cheats download
    -konami winning eleven 8 trainer download
    -konami winning eleven 8 save game download
    -konami winning eleven 8 soundtrack download
    -konami winning eleven 8 online play download
    -konami winning eleven 8 update download
    -konami winning eleven 8 system requirements
    -konami winning eleven 8 gameplay video download
    -konami winning eleven 8 review and rating download
    -konami winning eleven 8 tips and tricks download
    -konami winning eleven 8 best players and teams download
    -konami winning eleven 8 custom kits and logos download
    -konami winning eleven 8 editor and creator download
    -konami winning eleven 8 license and registration download
    -konami winning eleven 8 official website and support download
    -konami winning eleven 8 demo download
    -konami winning eleven 8 torrent download
    -konami winning eleven 8 direct download link
    -konami winning eleven 8 compressed download size
    -konami winning eleven 8 rar file password download
    -konami winning eleven 8 alternative and similar games download
    -konami winning eleven 8 old and new versions download
    -konami winning eleven 8 compatible and incompatible devices download
    -konami winning eleven 8 error and bug fix download
    -konami winning eleven 8 language and region settings download
    -konami winning eleven 8 keyboard and controller settings download
    -konami winning eleven 8 graphics and sound settings download
    -konami winning eleven 8 multiplayer and co-op modes download
    -konami winning eleven 8 achievements and trophies download
    -konami winning eleven 8 secrets and easter eggs download
    -konami winning eleven 8 mods and hacks download
    -konami winning eleven 8 skins and themes download
    -konami winning eleven 8 wallpapers and screensavers download
    -konami winning eleven 8 guides and walkthroughs download
    -konami winning eleven 8 forums and communities download
    -konami winning eleven 8 news and updates download
    -konami winning eleven 8 fan art and memes download
    -konami winning eleven 8 merchandise and collectibles download

    -

    How to download Konami Winning Eleven 8 full version for 51

    -

    Step 1: Find a reliable source

    -

    The first step to download Konami Winning Eleven 8 full version for 51 is to find a reliable source that offers the game file. You can search online for websites that provide free or paid downloads of the game. However, be careful not to download from untrusted or illegal sources that might contain viruses or malware. Some of the websites that offer safe and legitimate downloads of the game are:

    - - - - - - - -
    WebsiteDescription
    My AbandonwareThis website offers free downloads of old games that are no longer available or supported by their publishers. You can find World Soccer: Winning Eleven 8 International, which is the name given to Konami Winning Eleven 8 in some regions.
    MalavidaThis website offers free downloads of software and games for different platforms. You can find World Soccer Winning Eleven demo, which is a trial version of Konami Winning Eleven 8 that lets you play an exhibition match with a limited selection of teams.
    DocsLibThis website offers free downloads of documents and files for various purposes. You can find Download Konami Winning Eleven 8 Full Version for 51, which is a PDF file that contains instructions on how to download and install the game.
    OpenSeaThis website offers free downloads of digital assets and collectibles for various platforms. You can find Download FULL Konami Winning Eleven 8 Full Version For 51, which is a collection of items that includes the game file.
    GrablustbloodThis website offers free downloads of games and software for various platforms. You can find World Soccer Winning Eleven 9 Free Download Full Version, which is an updated version of Konami Winning Eleven 8 that includes more features and enhancements.
    -

    You can choose any of these websites or look for other sources that suit your preferences. However, make sure to check the reviews, ratings, comments, and feedback from other users before downloading anything.

    -

    Step 2: Download the game file

    -

    The second step to download Konami Winning Eleven 8 full version for 51 is to download the game file from your chosen source. Depending on the website, you might need to register an account, complete a survey, enter a captcha code, or follow some other steps before accessing the download link. Once you get the link, click on it and save the file on your PC. The file size might vary depending on the source, but it should be around 1.7 GB.

    -

    Step 3: Install the game on your PC

    -

    Features of Konami Winning Eleven 8 full version for 51

    -

    Improved graphics and gameplay

    -

    One of the main features of Konami Winning Eleven 8 full version for 51 is the improved graphics and gameplay that make the game more immersive and realistic. The game uses a new engine that enhances the animations, lighting, shadows, textures, and physics of the game. The players look more lifelike and have more expressions and emotions. The stadiums are more detailed and have different weather effects and crowd noises. The gameplay is more fluid and responsive, with better controls and commands. The game also supports different resolutions and aspect ratios, making it compatible with different monitors and devices.

    -

    More teams and players

    -

    Another feature of Konami Winning Eleven 8 full version for 51 is the increased number of teams and players that you can choose from. The game includes over 200 teams and over 4,500 players from different leagues and countries. You can play with teams from England, Spain, Italy, Germany, France, Portugal, Brazil, Argentina, Japan, Korea, and more. You can also play with legendary teams and players from the past, such as Pele, Maradona, Zidane, Beckham, Ronaldo, and more. You can also create your own teams and players using the edit mode.

    -

    Enhanced referee and penalty system

    -

    A third feature of Konami Winning Eleven 8 full version for 51 is the enhanced referee and penalty system that makes the game more fair and realistic. The game uses a new referee system that can detect fouls and issue cards more accurately. The referee can also consult with the assistant referees and the video assistant referee (VAR) to review controversial decisions. The game also uses a new penalty system that allows players to control the direction, power, and curve of their shots. You can also choose to take a manual or automatic penalty kick.

    -

    Customizable skills and tactics

    -

    A fourth feature of Konami Winning Eleven 8 full version for 51 is the customizable skills and tactics that let you play the game your way. The game uses a new skill system that lets players increase their abilities as they win trophies. You can improve your skills such as dribbling, passing, shooting, heading, tackling, speed, stamina, and more. You can also customize your tactics such as formation, strategy, style, mentality, position, role, and more. You can also assign different commands to different players to create your own plays.

    -

    Tips and tricks for playing Konami Winning Eleven 8 full version for 51

    -

    Master the controls and commands

- The first tip for playing Konami Winning Eleven 8 full version for 51 is to master the controls and commands. The game lets you perform different moves and techniques such as through balls, lob passes, chip shots, volleys, headers, bicycle kicks, slide tackles, and more. You can also customize the controls and commands to suit your preferences. You can find the default controls and commands in the game manual or in the options menu.

    -

    Use different strategies and formations

    -

    Another tip for playing Konami Winning Eleven 8 full version for 51 is to use different strategies and formations depending on your opponent and situation. The game lets you choose from different formations such as 4-4-2, 4-3-3, 3-5-2, 5-3-2, and more. You can also choose from different strategies such as attacking, defending, counterattacking, pressing, long ball, short passing, wing play, and more. You can also change your formation and strategy during the game using the directional pad or the keyboard. You can also create your own formation and strategy using the edit mode.

    -

    Practice your skills and techniques

    -

    A third tip for playing Konami Winning Eleven 8 full version for 51 is to practice your skills and techniques to improve your performance and score more goals. The game lets you practice your skills and techniques in different modes such as training mode, free kick mode, penalty mode, challenge mode, and more. You can practice your skills such as dribbling, passing, shooting, heading, tackling, and more. You can also practice your techniques such as through balls, lob passes, chip shots, volleys, headers, bicycle kicks, slide tackles, and more. You can also learn new skills and techniques by watching tutorials or playing with other players.

    -

    Challenge yourself with different modes and levels

    -

    A fourth tip for playing Konami Winning Eleven 8 full version for 51 is to challenge yourself with different modes and levels to test your skills and have more fun. The game offers different modes such as exhibition mode, league mode, cup mode, master league mode, online mode, and more. You can play with different teams and players from different leagues and countries. You can also play with different rules and settings such as time limit, difficulty level, weather condition, stadium type, ball type, and more. You can also play with or against other players online or offline.

    -

    Conclusion

- In conclusion, Konami Winning Eleven 8 full version for 51 is a special edition of the game that offers improved graphics and gameplay, more teams and players, an enhanced referee and penalty system, customizable skills and tactics, and different modes and levels. You can download Konami Winning Eleven 8 full version for 51 from various sources online and install it on your PC. You can also follow the tips and tricks we shared to play the game like a pro. If you are a fan of soccer games, you should definitely try Konami Winning Eleven 8 full version for 51 and enjoy one of the most exciting and realistic soccer games ever made.

    -

    FAQs

    -

    Here are some of the frequently asked questions about Konami Winning Eleven 8 full version for 51:

    -

    Q: What are the system requirements for Konami Winning Eleven 8 full version for 51?

    -

    A: The minimum system requirements for Konami Winning Eleven 8 full version for 51 are:

    -
      -
    • Operating system: Windows XP or higher
    • -
    • Processor: Pentium III 800 MHz or higher
    • -
    • Memory: 256 MB RAM or higher
    • -
    • Graphics: 64 MB VRAM or higher
    • -
    • Sound: DirectX compatible sound card
    • -
    • Storage: 2 GB available space or higher
    • -
    • CD-ROM drive: 4x speed or higher
    • -
    -

    Q: How can I play Konami Winning Eleven 8 full version for 51 online?

    -

    A: To play Konami Winning Eleven 8 full version for 51 online, you need to have an internet connection and a valid account on the game's official website. You can register an account for free and log in with your username and password. You can then choose online mode from the main menu and join or create a match with other players. You can also chat with other players and invite your friends to play with you.

    -

    Q: How can I update Konami Winning Eleven 8 full version for 51?

    -

    A: To update Konami Winning Eleven 8 full version for 51, you need to have an internet connection and visit the game's official website. You can then download the latest patch or update file and install it on your PC. The patch or update file will fix any bugs or errors and add new features and enhancements to the game.

    -

    Q: How can I get more teams and players for Konami Winning Eleven 8 full version for 51?

    -

    A: To get more teams and players for Konami Winning Eleven 8 full version for 51, you can use the edit mode to create your own teams and players. You can also download additional teams and players from various websites online that offer fan-made or official content. You can then import the files to your game folder and enjoy playing with more teams and players.

    -

    Q: How can I get help or support for Konami Winning Eleven 8 full version for 51?

    -

    A: To get help or support for Konami Winning Eleven 8 full version for 51, you can visit the game's official website and check the FAQ section or the forum section. You can also contact the game's customer service by email or phone. You can also ask other players online or offline for help or advice.

    -

    0a6ba089eb
    -
    -
    \ No newline at end of file diff --git a/spaces/rahgadda/MigrationUtility/Dashboard.py b/spaces/rahgadda/MigrationUtility/Dashboard.py deleted file mode 100644 index d42b48457383860d37fa05bcbcc44ee8b7691746..0000000000000000000000000000000000000000 --- a/spaces/rahgadda/MigrationUtility/Dashboard.py +++ /dev/null @@ -1,42 +0,0 @@ -import streamlit as st -import lib.gdrive as gdrive -import os -import sys -import pandas as pd - -################################ -######### Variables ############ -################################ -# -- Loading Variables -script_directory = os.path.dirname(os.path.abspath(sys.argv[0])) - -# -- Loading Session Data -if 'project_data' not in st.session_state: - st.session_state.project_data = pd.read_csv(script_directory+'/data/project.csv') - -################################ -####### GenericFunctions ####### -################################ -# -- Save Files -def save_data_files(): - if not os.listdir(script_directory+"/data"): - gdrive.download_file("project.csv",script_directory+"/data/") - else: - print("Project details already exists") - -################################ -####### Display of data ######## -################################ -# -- Streamlit Settings -st.set_page_config(layout='wide') -st.title("Dashboard") - -# -- Load base files from Google Drive -save_data_files() - -# -- Show Metrics -col1, col2, col3 = st.columns(3) -col1.metric("Projects", len(st.session_state.project_data)) - -# -- Transformations Performed -col2.metric("Transformations", "12") \ No newline at end of file diff --git a/spaces/rajistics/h2o_wave_transformers/app.py b/spaces/rajistics/h2o_wave_transformers/app.py deleted file mode 100644 index 5dda30cc678e7fbee7c8d0f098fd3ecf49e51529..0000000000000000000000000000000000000000 --- a/spaces/rajistics/h2o_wave_transformers/app.py +++ /dev/null @@ -1,65 +0,0 @@ -from h2o_wave import main, app, Q, ui, copy_expando -from transformers import pipeline - -async def init(q: Q): - if not q.client.app_initialized: - q.app.model = pipeline("text-generation") - q.client.app_initialized = True - - q.page.drop() - - q.page["title"] = ui.header_card( - box="1 1 8 1", - title="Text Generation", - subtitle="Generate text using Huggingface pipelines", - icon="AddNotes", - icon_color="Blue", - ) - -async def get_inputs(q: Q): - q.page['main'] = ui.form_card(box="1 2 8 5", items=[ - ui.text_xl('Enter your text input for generation:'), - ui.textbox(name="input_text", - label='', - value=q.app.input_text, - multiline=True), - ui.separator(), - ui.slider(name="num_words_to_generate", - label="Maximum number of words to generate (including input text)", - min=5, - max=50, - step=1, - value=q.app.num_words_to_generate if q.app.num_words_to_generate else 12, - ), - ui.separator(), - ui.buttons([ui.button(name="generate_text", label='Generate', primary=True), - ]) - ]) - -async def show_results(q: Q): - q.page['main'] = ui.form_card(box="1 2 4 5", items=[ - ui.text_xl("Input Text:"), - ui.separator(), - ui.text(q.app.input_text), - ui.separator(), - ui.buttons([ui.button(name="get_inputs", label='Try Again!', primary=True), - ]) - ]) - - result = q.app.model(q.app.input_text, max_length=q.app.num_words_to_generate, do_sample=False)[0] - q.app.generated_text = result["generated_text"] - q.page['visualization'] = ui.form_card(box="5 2 4 5", items=[ - ui.text_xl("Generated Text:"), - ui.separator(''), - ui.text(q.app.generated_text) - ]) - -@app("/") -async def serve(q: Q): - await init(q) - if q.args.generate_text: - copy_expando(q.args, q.app) - 
await show_results(q) - else: - await get_inputs(q) - await q.page.save() \ No newline at end of file diff --git a/spaces/ramiin2/AutoGPT/run_continuous.bat b/spaces/ramiin2/AutoGPT/run_continuous.bat deleted file mode 100644 index 812aa01c1c5506c452665610c0e9e83a17c426f2..0000000000000000000000000000000000000000 --- a/spaces/ramiin2/AutoGPT/run_continuous.bat +++ /dev/null @@ -1,3 +0,0 @@ -@echo off -set argument=--continuous -call run.bat %argument% diff --git a/spaces/rayan-saleh/whisper2notion/server/node_modules/cors/lib/index.js b/spaces/rayan-saleh/whisper2notion/server/node_modules/cors/lib/index.js deleted file mode 100644 index 5475aecd6d1d271cfde489f0bfc289cddcf1f9d9..0000000000000000000000000000000000000000 --- a/spaces/rayan-saleh/whisper2notion/server/node_modules/cors/lib/index.js +++ /dev/null @@ -1,238 +0,0 @@ -(function () { - - 'use strict'; - - var assign = require('object-assign'); - var vary = require('vary'); - - var defaults = { - origin: '*', - methods: 'GET,HEAD,PUT,PATCH,POST,DELETE', - preflightContinue: false, - optionsSuccessStatus: 204 - }; - - function isString(s) { - return typeof s === 'string' || s instanceof String; - } - - function isOriginAllowed(origin, allowedOrigin) { - if (Array.isArray(allowedOrigin)) { - for (var i = 0; i < allowedOrigin.length; ++i) { - if (isOriginAllowed(origin, allowedOrigin[i])) { - return true; - } - } - return false; - } else if (isString(allowedOrigin)) { - return origin === allowedOrigin; - } else if (allowedOrigin instanceof RegExp) { - return allowedOrigin.test(origin); - } else { - return !!allowedOrigin; - } - } - - function configureOrigin(options, req) { - var requestOrigin = req.headers.origin, - headers = [], - isAllowed; - - if (!options.origin || options.origin === '*') { - // allow any origin - headers.push([{ - key: 'Access-Control-Allow-Origin', - value: '*' - }]); - } else if (isString(options.origin)) { - // fixed origin - headers.push([{ - key: 'Access-Control-Allow-Origin', - value: options.origin - }]); - headers.push([{ - key: 'Vary', - value: 'Origin' - }]); - } else { - isAllowed = isOriginAllowed(requestOrigin, options.origin); - // reflect origin - headers.push([{ - key: 'Access-Control-Allow-Origin', - value: isAllowed ? 
requestOrigin : false - }]); - headers.push([{ - key: 'Vary', - value: 'Origin' - }]); - } - - return headers; - } - - function configureMethods(options) { - var methods = options.methods; - if (methods.join) { - methods = options.methods.join(','); // .methods is an array, so turn it into a string - } - return { - key: 'Access-Control-Allow-Methods', - value: methods - }; - } - - function configureCredentials(options) { - if (options.credentials === true) { - return { - key: 'Access-Control-Allow-Credentials', - value: 'true' - }; - } - return null; - } - - function configureAllowedHeaders(options, req) { - var allowedHeaders = options.allowedHeaders || options.headers; - var headers = []; - - if (!allowedHeaders) { - allowedHeaders = req.headers['access-control-request-headers']; // .headers wasn't specified, so reflect the request headers - headers.push([{ - key: 'Vary', - value: 'Access-Control-Request-Headers' - }]); - } else if (allowedHeaders.join) { - allowedHeaders = allowedHeaders.join(','); // .headers is an array, so turn it into a string - } - if (allowedHeaders && allowedHeaders.length) { - headers.push([{ - key: 'Access-Control-Allow-Headers', - value: allowedHeaders - }]); - } - - return headers; - } - - function configureExposedHeaders(options) { - var headers = options.exposedHeaders; - if (!headers) { - return null; - } else if (headers.join) { - headers = headers.join(','); // .headers is an array, so turn it into a string - } - if (headers && headers.length) { - return { - key: 'Access-Control-Expose-Headers', - value: headers - }; - } - return null; - } - - function configureMaxAge(options) { - var maxAge = (typeof options.maxAge === 'number' || options.maxAge) && options.maxAge.toString() - if (maxAge && maxAge.length) { - return { - key: 'Access-Control-Max-Age', - value: maxAge - }; - } - return null; - } - - function applyHeaders(headers, res) { - for (var i = 0, n = headers.length; i < n; i++) { - var header = headers[i]; - if (header) { - if (Array.isArray(header)) { - applyHeaders(header, res); - } else if (header.key === 'Vary' && header.value) { - vary(res, header.value); - } else if (header.value) { - res.setHeader(header.key, header.value); - } - } - } - } - - function cors(options, req, res, next) { - var headers = [], - method = req.method && req.method.toUpperCase && req.method.toUpperCase(); - - if (method === 'OPTIONS') { - // preflight - headers.push(configureOrigin(options, req)); - headers.push(configureCredentials(options, req)); - headers.push(configureMethods(options, req)); - headers.push(configureAllowedHeaders(options, req)); - headers.push(configureMaxAge(options, req)); - headers.push(configureExposedHeaders(options, req)); - applyHeaders(headers, res); - - if (options.preflightContinue) { - next(); - } else { - // Safari (and potentially other browsers) need content-length 0, - // for 204 or they just hang waiting for a body - res.statusCode = options.optionsSuccessStatus; - res.setHeader('Content-Length', '0'); - res.end(); - } - } else { - // actual response - headers.push(configureOrigin(options, req)); - headers.push(configureCredentials(options, req)); - headers.push(configureExposedHeaders(options, req)); - applyHeaders(headers, res); - next(); - } - } - - function middlewareWrapper(o) { - // if options are static (either via defaults or custom options passed in), wrap in a function - var optionsCallback = null; - if (typeof o === 'function') { - optionsCallback = o; - } else { - optionsCallback = function (req, cb) { - cb(null, o); - 
}; - } - - return function corsMiddleware(req, res, next) { - optionsCallback(req, function (err, options) { - if (err) { - next(err); - } else { - var corsOptions = assign({}, defaults, options); - var originCallback = null; - if (corsOptions.origin && typeof corsOptions.origin === 'function') { - originCallback = corsOptions.origin; - } else if (corsOptions.origin) { - originCallback = function (origin, cb) { - cb(null, corsOptions.origin); - }; - } - - if (originCallback) { - originCallback(req.headers.origin, function (err2, origin) { - if (err2 || !origin) { - next(err2); - } else { - corsOptions.origin = origin; - cors(corsOptions, req, res, next); - } - }); - } else { - next(); - } - } - }); - }; - } - - // can pass either an options hash, an options delegate, or nothing - module.exports = middlewareWrapper; - -}()); diff --git a/spaces/rd13/Pix2Pix-Video/style.css b/spaces/rd13/Pix2Pix-Video/style.css deleted file mode 100644 index 3cf565d3e03852436a405cf632d1d22433bb4087..0000000000000000000000000000000000000000 --- a/spaces/rd13/Pix2Pix-Video/style.css +++ /dev/null @@ -1,101 +0,0 @@ -#col-container {max-width: 820px; margin-left: auto; margin-right: auto;} -#duplicate-container{ - display: flex; - justify-content: space-between; - align-items: center; - line-height: 1em; - flex-direction: row-reverse; - font-size:1em; -} -a, a:hover, a:visited { - text-decoration-line: underline; - font-weight: 600; - color: #1f2937 !important; -} - -.dark a, .dark a:hover, .dark a:visited { - color: #f3f4f6 !important; -} - -.footer { - margin-bottom: 45px; - margin-top: 10px; - text-align: center; - border-bottom: 1px solid #e5e5e5; -} - -.footer>p { - font-size: .8rem!important; - display: inline-block; - padding: 0 10px; - transform: translateY(26px); - background: white; -} -.dark .footer { - border-color: #303030; -} -.dark .footer>p { - background: #0b0f19; -} - -div#may-like-container > p { - font-size: .8em; - margin-bottom: 4px; -} - -.animate-spin { - animation: spin 1s linear infinite; -} - -@keyframes spin { - from { - transform: rotate(0deg); - } - to { - transform: rotate(360deg); - } -} - -#share-btn-container { - display: flex; - padding-left: 0.5rem !important; - padding-right: 0.5rem !important; - background-color: #000000; - justify-content: center; - align-items: center; - border-radius: 9999px !important; - max-width: 13rem; -} - -#share-btn-container:hover { - background-color: #060606; -} - -#share-btn { - all: initial; - color: #ffffff; - font-weight: 600; - cursor:pointer; - font-family: 'IBM Plex Sans', sans-serif; - margin-left: 0.5rem !important; - padding-top: 0.5rem !important; - padding-bottom: 0.5rem !important; - right:0; -} - -#share-btn * { - all: unset; -} - -#share-btn-container div:nth-child(-n+2){ - width: auto !important; - min-height: 0px !important; -} - -#share-btn-container .wrap { - display: none !important; -} - -#share-btn-container.hidden { - display: none!important; -} \ No newline at end of file diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/BETTER Download Keygen Xforce For AutoCAD MEP 2010 BETTER Download.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/BETTER Download Keygen Xforce For AutoCAD MEP 2010 BETTER Download.md deleted file mode 100644 index 40be388e52daad5cf7525dc12c2e6192b3cb91e4..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/BETTER Download Keygen Xforce For AutoCAD MEP 2010 BETTER Download.md +++ /dev/null @@ -1,9 +0,0 
@@ -

    download keygen xforce for AutoCAD MEP 2010 download


    Download ::: https://urlgoal.com/2uCJFn



    -
    -Determine if you need an activation code to authenticate your Autodesk software, and how to request an activation code. Use this method if you need to verify that the software is authentic. -This is useful for verifying licenses and documents and for ensuring that the software meets your requirements. -An activation code is not required for verification if you use a trial download link for a trial period. -If you use the trial download link for the trial period, you will receive the activation code after the trial period ends. 8a78ff9644
    -
    -
    -

    diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Florian Poddelka Naked FREE.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Florian Poddelka Naked FREE.md deleted file mode 100644 index 192219bc72cb05a77ea843ad7810b037db6c763e..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Florian Poddelka Naked FREE.md +++ /dev/null @@ -1,6 +0,0 @@ -

    florian poddelka naked


    Download Filehttps://urlgoal.com/2uCKE8



    - - 3cee63e6c2
    -
    -
    -

    diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Grandhotelsaison1vftorrent.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Grandhotelsaison1vftorrent.md deleted file mode 100644 index 78247e0dff4c796046e64978439bf6118f1b5fa0..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Grandhotelsaison1vftorrent.md +++ /dev/null @@ -1,113 +0,0 @@ -
    -

    Grandhotelsaison1vftorrent: How to Download and Watch the Popular TV Series

    - -

    If you are a fan of drama, romance, and mystery, you might have heard of Grand Hotel, a TV series that aired in 2019 on ABC. The show is based on the Spanish series of the same name, and it follows the lives and secrets of the owners and staff of a luxurious hotel in Miami Beach. The show features a star-studded cast, including Eva Longoria, Demian Bichir, Roselyn Sanchez, and more.

    - -

    However, if you missed the show when it was on air, or if you want to watch it again, you might have trouble finding it online. The show is not available on any streaming platform, such as Netflix, Hulu, or Amazon Prime. The only way to watch it is to download it from torrent sites.

    -

    Grandhotelsaison1vftorrent


    Download File ===== https://urlgoal.com/2uCMna



    - -

    But what are torrent sites? And how can you use them to download Grand Hotel? In this article, we will explain everything you need to know about Grandhotelsaison1vftorrent, the keyword that will help you find and download the show.

    - -

    What is Grandhotelsaison1vftorrent?

    - -

    Grandhotelsaison1vftorrent is a keyword that refers to the French version of Grand Hotel season 1 torrent. A torrent is a file that contains information about other files that can be downloaded from peer-to-peer networks. Peer-to-peer networks are systems where users share files with each other without using a central server.

    - -

    By using Grandhotelsaison1vftorrent, you can find and download the episodes of Grand Hotel season 1 in French. You can also find other versions of the show in different languages or subtitles by using different keywords, such as Grandhotelsaison1engtorrent or Grandhotelsaison1subtorrent.

    - -

    How to Use Grandhotelsaison1vftorrent?

    - -

    To use Grandhotelsaison1vftorrent, you need to follow these steps:

    - -
      -
1. Download and install a torrent client. A torrent client is software that allows you to download files from peer-to-peer networks. Some of the most popular torrent clients are uTorrent, BitTorrent, qBittorrent, and Vuze.
    2. -
    3. Go to a torrent site. A torrent site is a website that hosts torrent files and allows users to search for them. Some of the most popular torrent sites are The Pirate Bay, RARBG, 1337x, and EZTV.
    4. -
    5. Search for Grandhotelsaison1vftorrent on the torrent site. You can use the search bar or browse through the categories to find the torrent file that matches your keyword.
    6. -
    7. Download the torrent file. Once you find the torrent file that you want, click on it and download it to your computer. The torrent file is usually very small and does not contain the actual episodes of Grand Hotel.
    8. -
    9. Open the torrent file with your torrent client. Once you download the torrent file, open it with your torrent client. The torrent client will connect to other users who have the episodes of Grand Hotel and start downloading them to your computer.
    10. -
    11. Enjoy watching Grand Hotel season 1 in French. Once the download is complete, you can watch the episodes of Grand Hotel season 1 in French on your computer or transfer them to your device of choice.
    12. -
    - -

    What are the Benefits of Using Grandhotelsaison1vftorrent?

    - -

    By using Grandhotelsaison1vftorrent, you can enjoy many benefits that can enhance your viewing experience of Grand Hotel season 1. Here are some of them:

    - -
      -
    • You can watch Grand Hotel season 1 in French for free. You do not need to pay for any subscription or membership fee to access the show.
    • -
    • You can watch Grand Hotel season 1 in French offline. You do not need an internet connection to watch the show once you download it to your computer or device.
    • -
    • You can watch Grand Hotel season 1 in French at your own pace. You do not need to follow any schedule or wait for any ads or interruptions to watch the show.
    • -
    • You can watch Grand Hotel season 1 in French with high quality. You can choose the resolution and format that suits your preferences and device capabilities.
    • -
    • You can watch Grand Hotel season 1 in French with subtitles or dubbing. You can choose the language or subtitle option that suits your needs and understanding.
    • -
    - -

    What are the Risks of Using Grandhotelsaison1vftorrent?

    - -

    While using Grandhotelsaison1vftorrent has many benefits, it also has some risks that you should be aware of. Here are some of them:

    - -
      -
    • You might violate copyright laws. Downloading and watching Grand Hotel season 1 in French from torrent sites might be illegal in some countries or regions. You might face legal consequences or penalties if you are caught by authorities or rights holders.
    • -
    • You might expose your computer or device to malware or viruses. Downloading and opening torrent files from unknown sources might infect your computer or device with malicious software that can harm your system or steal your data.
    • -
    • You might compromise your privacy or security. Downloading and sharing files from peer-to-peer networks might reveal your IP address or location to other users or third parties. You might also encounter phishing or scamming attempts that can trick you into giving up your personal or financial information.
    • -
    - -

    How to Avoid the Risks of Using Grandhotelsaison1vftorrent?

    - -

    If you want to use Grandhotelsaison1vftorrent safely and securely, you need to follow these tips:

    -

    - -
      -
• Use a VPN service. A VPN service is software that encrypts your internet traffic and hides your IP address and location from other users or third parties. By using a VPN service, you can protect your privacy and security while downloading and watching Grand Hotel season 1 in French from torrent sites.
    • -
• Use antivirus software. Antivirus software is a program that scans for and removes malware or viruses from your computer or device. By using antivirus software, you can protect your system and data while downloading and opening torrent files from unknown sources.
    • -
    • Use a reputable torrent site and client. A reputable torrent site and client are websites and software that have good reviews and ratings from users and experts. By using a reputable torrent site and client, you can avoid phishing or scamming attempts while searching for and downloading Grandhotelsaison1vftorrent.
    • -
    - -


    -

    Where to Find Grandhotelsaison1vftorrent?

    - -

    There are many torrent sites that offer Grandhotelsaison1vftorrent, but not all of them are reliable or safe. Some of them might have fake or corrupted files, or they might contain malware or viruses that can harm your computer or device. Some of them might also have low-quality or incomplete episodes of Grand Hotel season 1 in French.

    - -

    To avoid these problems, you need to use a reputable torrent site that has good reviews and ratings from users and experts. You also need to check the comments and feedback from other users who have downloaded Grandhotelsaison1vftorrent before you. You also need to verify the file size and format of the torrent file before you download it.

    - -

    Here are some of the best torrent sites that offer Grandhotelsaison1vftorrent:

    - -
      -
    • EZTV: EZTV is one of the most popular torrent sites for TV shows and movies. It has a large and updated collection of Grandhotelsaison1vftorrent, as well as other versions of Grand Hotel season 1 in different languages or subtitles. It also has a user-friendly interface and a fast download speed.
    • -
    • The Pirate Bay: The Pirate Bay is one of the oldest and most famous torrent sites in the world. It has a huge and diverse collection of Grandhotelsaison1vftorrent, as well as other types of files, such as music, games, software, and more. It also has a simple and easy-to-use interface and a powerful search engine.
    • -
    • 1337x: 1337x is one of the most visited and trusted torrent sites in the world. It has a high-quality and updated collection of Grandhotelsaison1vftorrent, as well as other categories of files, such as anime, documentaries, ebooks, and more. It also has a modern and attractive interface and a dedicated community.
    • -
    - -

    How to Watch Grandhotelsaison1vftorrent?

    - -

After you download Grandhotelsaison1vftorrent to your computer or device, you need to use a media player that can play torrent files. A media player is software that allows you to watch videos and listen to audio files on your computer or device. Some of the most popular media players are VLC, Media Player Classic, KMPlayer, and GOM Player.

    - -

    To watch Grandhotelsaison1vftorrent with a media player, you need to follow these steps:

    - -
      -
    1. Download and install a media player that can play torrent files on your computer or device.
    2. -
    3. Open the media player and click on the "Open File" or "Open Folder" option.
    4. -
    5. Browse through your computer or device and select the folder where you saved Grandhotelsaison1vftorrent.
    6. -
    7. Select the episode of Grand Hotel season 1 in French that you want to watch and click on "Open" or "Play".
    8. -
    9. Enjoy watching Grand Hotel season 1 in French on your computer or device.
    10. -
    - -

    Conclusion

    - -

    Grandhotelsaison1vftorrent is a keyword that can help you find and download Grand Hotel season 1 in French from torrent sites. By using this keyword, you can enjoy watching the popular TV series for free, offline, at your own pace, with high quality, and with subtitles or dubbing.

    - -

However, using this keyword also carries some risks: you might violate copyright laws, expose your computer or device to malware or viruses, or compromise your privacy or security. To avoid these risks, you need to use a VPN service, antivirus software, and a reputable torrent site and client.

    - -

    If you are looking for a reliable and versatile software that can help you download and watch Grand Hotel season 1 in French safely and securely, -


    3cee63e6c2
    -
    -
    \ No newline at end of file diff --git a/spaces/red1xe/codeGPT/htmlTemplates.py b/spaces/red1xe/codeGPT/htmlTemplates.py deleted file mode 100644 index 0100f733217136d15ba2e2ec098640d6b4e25d36..0000000000000000000000000000000000000000 --- a/spaces/red1xe/codeGPT/htmlTemplates.py +++ /dev/null @@ -1,44 +0,0 @@ -css = ''' -' - - - def add_row(self, a, b): - tmp = """ -
    -
    REPLACE_A
    -
    REPLACE_B
    -
    - """ - from toolbox import markdown_convertion - tmp = tmp.replace('REPLACE_A', markdown_convertion(a)) - tmp = tmp.replace('REPLACE_B', markdown_convertion(b)) - self.html_string += tmp - - - def save_file(self, file_name): - with open(f'./gpt_log/{file_name}', 'w', encoding='utf8') as f: - f.write(self.html_string.encode('utf-8', 'ignore').decode()) - diff --git a/spaces/zhoupin30/zhoupin30/src/components/user-menu.tsx b/spaces/zhoupin30/zhoupin30/src/components/user-menu.tsx deleted file mode 100644 index 9bd1edc9cf9f39b63629b021f0c1186b1a7c1341..0000000000000000000000000000000000000000 --- a/spaces/zhoupin30/zhoupin30/src/components/user-menu.tsx +++ /dev/null @@ -1,113 +0,0 @@ -'use client' - -import { useEffect, useState } from 'react' -import Image from 'next/image' -import { toast } from 'react-hot-toast' -import { Button } from '@/components/ui/button' -import pkg from '../../package.json' -import { - DropdownMenu, - DropdownMenuContent, - DropdownMenuItem, - DropdownMenuSeparator, - DropdownMenuTrigger -} from '@/components/ui/dropdown-menu' -import { IconCopy, IconExternalLink, IconGitHub } from '@/components/ui/icons' -import SettingIcon from '@/assets/images/settings.svg' -import { useCopyToClipboard } from '@/lib/hooks/use-copy-to-clipboard' - -export function UserMenu() { - const [host, setHost] = useState('') - const { isCopied, copyToClipboard } = useCopyToClipboard({ timeout: 2000 }) - useEffect(() => { - setHost(location.host) - }, []) - - useEffect(() => { - if (isCopied) { - toast.success('复制成功') - } - }, [isCopied]) - return ( -
    - - - - - - - location.href='#dialog="settings"' - } - className="cursor-pointer" - > - 设置用户 - - - - location.href='#dialog="voice"' - } - className="cursor-pointer" - > - 语音设置 - - - - - 开源地址 - - - - - - - - 托管地址 - 🤗 - - - - - - - 复制站点 - - - - - -
    版本信息 {pkg.version}
    -
    - - -
    站点域名
    -
    copyToClipboard(host)} className="flex gap-1 text-xs text-zinc-500 cursor-pointer"> - {host} -
    -
    -
    -
    -
    - ) -} diff --git a/spaces/zideliu/styledrop/timm/models/selecsls.py b/spaces/zideliu/styledrop/timm/models/selecsls.py deleted file mode 100644 index 73bc7732833c1d373f7f32ab5da695852be63c5f..0000000000000000000000000000000000000000 --- a/spaces/zideliu/styledrop/timm/models/selecsls.py +++ /dev/null @@ -1,359 +0,0 @@ -"""PyTorch SelecSLS Net example for ImageNet Classification -License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/legalcode) -Author: Dushyant Mehta (@mehtadushy) - -SelecSLS (core) Network Architecture as proposed in "XNect: Real-time Multi-person 3D -Human Pose Estimation with a Single RGB Camera, Mehta et al." -https://arxiv.org/abs/1907.00837 - -Based on ResNet implementation in https://github.com/rwightman/pytorch-image-models -and SelecSLS Net implementation in https://github.com/mehtadushy/SelecSLS-Pytorch -""" -from typing import List - -import torch -import torch.nn as nn -import torch.nn.functional as F - -from timm.data import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD -from .helpers import build_model_with_cfg -from .layers import create_classifier -from .registry import register_model - -__all__ = ['SelecSLS'] # model_registry will add each entrypoint fn to this - - -def _cfg(url='', **kwargs): - return { - 'url': url, - 'num_classes': 1000, 'input_size': (3, 224, 224), 'pool_size': (4, 4), - 'crop_pct': 0.875, 'interpolation': 'bilinear', - 'mean': IMAGENET_DEFAULT_MEAN, 'std': IMAGENET_DEFAULT_STD, - 'first_conv': 'stem.0', 'classifier': 'fc', - **kwargs - } - - -default_cfgs = { - 'selecsls42': _cfg( - url='', - interpolation='bicubic'), - 'selecsls42b': _cfg( - url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-selecsls/selecsls42b-8af30141.pth', - interpolation='bicubic'), - 'selecsls60': _cfg( - url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-selecsls/selecsls60-bbf87526.pth', - interpolation='bicubic'), - 'selecsls60b': _cfg( - url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-selecsls/selecsls60b-94e619b5.pth', - interpolation='bicubic'), - 'selecsls84': _cfg( - url='', - interpolation='bicubic'), -} - - -class SequentialList(nn.Sequential): - - def __init__(self, *args): - super(SequentialList, self).__init__(*args) - - @torch.jit._overload_method # noqa: F811 - def forward(self, x): - # type: (List[torch.Tensor]) -> (List[torch.Tensor]) - pass - - @torch.jit._overload_method # noqa: F811 - def forward(self, x): - # type: (torch.Tensor) -> (List[torch.Tensor]) - pass - - def forward(self, x) -> List[torch.Tensor]: - for module in self: - x = module(x) - return x - - -class SelectSeq(nn.Module): - def __init__(self, mode='index', index=0): - super(SelectSeq, self).__init__() - self.mode = mode - self.index = index - - @torch.jit._overload_method # noqa: F811 - def forward(self, x): - # type: (List[torch.Tensor]) -> (torch.Tensor) - pass - - @torch.jit._overload_method # noqa: F811 - def forward(self, x): - # type: (Tuple[torch.Tensor]) -> (torch.Tensor) - pass - - def forward(self, x) -> torch.Tensor: - if self.mode == 'index': - return x[self.index] - else: - return torch.cat(x, dim=1) - - -def conv_bn(in_chs, out_chs, k=3, stride=1, padding=None, dilation=1): - if padding is None: - padding = ((stride - 1) + dilation * (k - 1)) // 2 - return nn.Sequential( - nn.Conv2d(in_chs, out_chs, k, stride, padding=padding, dilation=dilation, bias=False), - nn.BatchNorm2d(out_chs), - nn.ReLU(inplace=True) - ) - - -class SelecSLSBlock(nn.Module): - def 
__init__(self, in_chs, skip_chs, mid_chs, out_chs, is_first, stride, dilation=1): - super(SelecSLSBlock, self).__init__() - self.stride = stride - self.is_first = is_first - assert stride in [1, 2] - - # Process input with 4 conv blocks with the same number of input and output channels - self.conv1 = conv_bn(in_chs, mid_chs, 3, stride, dilation=dilation) - self.conv2 = conv_bn(mid_chs, mid_chs, 1) - self.conv3 = conv_bn(mid_chs, mid_chs // 2, 3) - self.conv4 = conv_bn(mid_chs // 2, mid_chs, 1) - self.conv5 = conv_bn(mid_chs, mid_chs // 2, 3) - self.conv6 = conv_bn(2 * mid_chs + (0 if is_first else skip_chs), out_chs, 1) - - def forward(self, x: List[torch.Tensor]) -> List[torch.Tensor]: - if not isinstance(x, list): - x = [x] - assert len(x) in [1, 2] - - d1 = self.conv1(x[0]) - d2 = self.conv3(self.conv2(d1)) - d3 = self.conv5(self.conv4(d2)) - if self.is_first: - out = self.conv6(torch.cat([d1, d2, d3], 1)) - return [out, out] - else: - return [self.conv6(torch.cat([d1, d2, d3, x[1]], 1)), x[1]] - - -class SelecSLS(nn.Module): - """SelecSLS42 / SelecSLS60 / SelecSLS84 - - Parameters - ---------- - cfg : network config dictionary specifying block type, feature, and head args - num_classes : int, default 1000 - Number of classification classes. - in_chans : int, default 3 - Number of input (color) channels. - drop_rate : float, default 0. - Dropout probability before classifier, for training - global_pool : str, default 'avg' - Global pooling type. One of 'avg', 'max', 'avgmax', 'catavgmax' - """ - - def __init__(self, cfg, num_classes=1000, in_chans=3, drop_rate=0.0, global_pool='avg'): - self.num_classes = num_classes - self.drop_rate = drop_rate - super(SelecSLS, self).__init__() - - self.stem = conv_bn(in_chans, 32, stride=2) - self.features = SequentialList(*[cfg['block'](*block_args) for block_args in cfg['features']]) - self.from_seq = SelectSeq() # from List[tensor] -> Tensor in module compatible way - self.head = nn.Sequential(*[conv_bn(*conv_args) for conv_args in cfg['head']]) - self.num_features = cfg['num_features'] - self.feature_info = cfg['feature_info'] - - self.global_pool, self.fc = create_classifier(self.num_features, self.num_classes, pool_type=global_pool) - - for n, m in self.named_modules(): - if isinstance(m, nn.Conv2d): - nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - elif isinstance(m, nn.BatchNorm2d): - nn.init.constant_(m.weight, 1.) - nn.init.constant_(m.bias, 0.) 
- - def get_classifier(self): - return self.fc - - def reset_classifier(self, num_classes, global_pool='avg'): - self.num_classes = num_classes - self.global_pool, self.fc = create_classifier(self.num_features, self.num_classes, pool_type=global_pool) - - def forward_features(self, x): - x = self.stem(x) - x = self.features(x) - x = self.head(self.from_seq(x)) - return x - - def forward(self, x): - x = self.forward_features(x) - x = self.global_pool(x) - if self.drop_rate > 0.: - x = F.dropout(x, p=self.drop_rate, training=self.training) - x = self.fc(x) - return x - - -def _create_selecsls(variant, pretrained, model_kwargs): - cfg = {} - feature_info = [dict(num_chs=32, reduction=2, module='stem.2')] - if variant.startswith('selecsls42'): - cfg['block'] = SelecSLSBlock - # Define configuration of the network after the initial neck - cfg['features'] = [ - # in_chs, skip_chs, mid_chs, out_chs, is_first, stride - (32, 0, 64, 64, True, 2), - (64, 64, 64, 128, False, 1), - (128, 0, 144, 144, True, 2), - (144, 144, 144, 288, False, 1), - (288, 0, 304, 304, True, 2), - (304, 304, 304, 480, False, 1), - ] - feature_info.extend([ - dict(num_chs=128, reduction=4, module='features.1'), - dict(num_chs=288, reduction=8, module='features.3'), - dict(num_chs=480, reduction=16, module='features.5'), - ]) - # Head can be replaced with alternative configurations depending on the problem - feature_info.append(dict(num_chs=1024, reduction=32, module='head.1')) - if variant == 'selecsls42b': - cfg['head'] = [ - (480, 960, 3, 2), - (960, 1024, 3, 1), - (1024, 1280, 3, 2), - (1280, 1024, 1, 1), - ] - feature_info.append(dict(num_chs=1024, reduction=64, module='head.3')) - cfg['num_features'] = 1024 - else: - cfg['head'] = [ - (480, 960, 3, 2), - (960, 1024, 3, 1), - (1024, 1024, 3, 2), - (1024, 1280, 1, 1), - ] - feature_info.append(dict(num_chs=1280, reduction=64, module='head.3')) - cfg['num_features'] = 1280 - - elif variant.startswith('selecsls60'): - cfg['block'] = SelecSLSBlock - # Define configuration of the network after the initial neck - cfg['features'] = [ - # in_chs, skip_chs, mid_chs, out_chs, is_first, stride - (32, 0, 64, 64, True, 2), - (64, 64, 64, 128, False, 1), - (128, 0, 128, 128, True, 2), - (128, 128, 128, 128, False, 1), - (128, 128, 128, 288, False, 1), - (288, 0, 288, 288, True, 2), - (288, 288, 288, 288, False, 1), - (288, 288, 288, 288, False, 1), - (288, 288, 288, 416, False, 1), - ] - feature_info.extend([ - dict(num_chs=128, reduction=4, module='features.1'), - dict(num_chs=288, reduction=8, module='features.4'), - dict(num_chs=416, reduction=16, module='features.8'), - ]) - # Head can be replaced with alternative configurations depending on the problem - feature_info.append(dict(num_chs=1024, reduction=32, module='head.1')) - if variant == 'selecsls60b': - cfg['head'] = [ - (416, 756, 3, 2), - (756, 1024, 3, 1), - (1024, 1280, 3, 2), - (1280, 1024, 1, 1), - ] - feature_info.append(dict(num_chs=1024, reduction=64, module='head.3')) - cfg['num_features'] = 1024 - else: - cfg['head'] = [ - (416, 756, 3, 2), - (756, 1024, 3, 1), - (1024, 1024, 3, 2), - (1024, 1280, 1, 1), - ] - feature_info.append(dict(num_chs=1280, reduction=64, module='head.3')) - cfg['num_features'] = 1280 - - elif variant == 'selecsls84': - cfg['block'] = SelecSLSBlock - # Define configuration of the network after the initial neck - cfg['features'] = [ - # in_chs, skip_chs, mid_chs, out_chs, is_first, stride - (32, 0, 64, 64, True, 2), - (64, 64, 64, 144, False, 1), - (144, 0, 144, 144, True, 2), - (144, 144, 
144, 144, False, 1), - (144, 144, 144, 144, False, 1), - (144, 144, 144, 144, False, 1), - (144, 144, 144, 304, False, 1), - (304, 0, 304, 304, True, 2), - (304, 304, 304, 304, False, 1), - (304, 304, 304, 304, False, 1), - (304, 304, 304, 304, False, 1), - (304, 304, 304, 304, False, 1), - (304, 304, 304, 512, False, 1), - ] - feature_info.extend([ - dict(num_chs=144, reduction=4, module='features.1'), - dict(num_chs=304, reduction=8, module='features.6'), - dict(num_chs=512, reduction=16, module='features.12'), - ]) - # Head can be replaced with alternative configurations depending on the problem - cfg['head'] = [ - (512, 960, 3, 2), - (960, 1024, 3, 1), - (1024, 1024, 3, 2), - (1024, 1280, 3, 1), - ] - cfg['num_features'] = 1280 - feature_info.extend([ - dict(num_chs=1024, reduction=32, module='head.1'), - dict(num_chs=1280, reduction=64, module='head.3') - ]) - else: - raise ValueError('Invalid net configuration ' + variant + ' !!!') - cfg['feature_info'] = feature_info - - # this model can do 6 feature levels by default, unlike most others, leave as 0-4 to avoid surprises? - return build_model_with_cfg( - SelecSLS, variant, pretrained, default_cfg=default_cfgs[variant], model_cfg=cfg, - feature_cfg=dict(out_indices=(0, 1, 2, 3, 4), flatten_sequential=True), **model_kwargs) - - -@register_model -def selecsls42(pretrained=False, **kwargs): - """Constructs a SelecSLS42 model. - """ - return _create_selecsls('selecsls42', pretrained, kwargs) - - -@register_model -def selecsls42b(pretrained=False, **kwargs): - """Constructs a SelecSLS42_B model. - """ - return _create_selecsls('selecsls42b', pretrained, kwargs) - - -@register_model -def selecsls60(pretrained=False, **kwargs): - """Constructs a SelecSLS60 model. - """ - return _create_selecsls('selecsls60', pretrained, kwargs) - - -@register_model -def selecsls60b(pretrained=False, **kwargs): - """Constructs a SelecSLS60_B model. - """ - return _create_selecsls('selecsls60b', pretrained, kwargs) - - -@register_model -def selecsls84(pretrained=False, **kwargs): - """Constructs a SelecSLS84 model. 
- """ - return _create_selecsls('selecsls84', pretrained, kwargs) diff --git a/spaces/zishuqianyu001/img-to-music/share_btn.py b/spaces/zishuqianyu001/img-to-music/share_btn.py deleted file mode 100644 index 1a2ac6a6e74b114dbd54c2f24723a87180db51ef..0000000000000000000000000000000000000000 --- a/spaces/zishuqianyu001/img-to-music/share_btn.py +++ /dev/null @@ -1,100 +0,0 @@ -community_icon_html = """""" - -loading_icon_html = """""" - -share_js = """async () => { - async function uploadFile(file){ - const UPLOAD_URL = 'https://huggingface.co/uploads'; - const response = await fetch(UPLOAD_URL, { - method: 'POST', - headers: { - 'Content-Type': file.type, - 'X-Requested-With': 'XMLHttpRequest', - }, - body: file, /// <- File inherits from Blob - }); - const url = await response.text(); - return url; - } - async function getInputImgFile(imgEl){ - const res = await fetch(imgEl.src); - const blob = await res.blob(); - const imgId = Date.now() % 200; - const isPng = imgEl.src.startsWith(`data:image/png`); - if(isPng){ - const fileName = `sd-perception-${{imgId}}.png`; - return new File([blob], fileName, { type: 'image/png' }); - }else{ - const fileName = `sd-perception-${{imgId}}.jpg`; - return new File([blob], fileName, { type: 'image/jpeg' }); - } - } - async function getOutputMusicFile(audioEL){ - const res = await fetch(audioEL.src); - const blob = await res.blob(); - const audioId = Date.now() % 200; - const fileName = `img-to-music-${{audioId}}.wav`; - const musicBlob = new File([blob], fileName, { type: 'audio/wav' }); - console.log(musicBlob); - return musicBlob; - } - - async function audioToBase64(audioFile) { - return new Promise((resolve, reject) => { - let reader = new FileReader(); - reader.readAsDataURL(audioFile); - reader.onload = () => resolve(reader.result); - reader.onerror = error => reject(error); - - }); - } - const gradioEl = document.querySelector('body > gradio-app'); - // const gradioEl = document.querySelector("gradio-app").shadowRoot; - const inputImgEl = gradioEl.querySelector('#input-img img'); - const outputMusic = gradioEl.querySelector('#music-output audio'); - const outputMusic_src = gradioEl.querySelector('#music-output audio').src; - const outputMusic_name = outputMusic_src.split('/').pop(); - let titleTxt = outputMusic_name; - //if(titleTxt.length > 100){ - // titleTxt = titleTxt.slice(0, 100) + ' ...'; - //} - const shareBtnEl = gradioEl.querySelector('#share-btn'); - const shareIconEl = gradioEl.querySelector('#share-btn-share-icon'); - const loadingIconEl = gradioEl.querySelector('#share-btn-loading-icon'); - if(!outputMusic){ - return; - }; - shareBtnEl.style.pointerEvents = 'none'; - shareIconEl.style.display = 'none'; - loadingIconEl.style.removeProperty('display'); - const inputFile = await getInputImgFile(inputImgEl); - const urlInputImg = await uploadFile(inputFile); - const musicFile = await getOutputMusicFile(outputMusic); - const dataOutputMusic = await uploadFile(musicFile); - - const descriptionMd = `#### Input img: - - -#### Music: - - -`; - const params = new URLSearchParams({ - title: titleTxt, - description: descriptionMd, - }); - const paramsStr = params.toString(); - window.open(`https://huggingface.co/spaces/fffiloni/img-to-music/discussions/new?${paramsStr}`, '_blank'); - shareBtnEl.style.removeProperty('pointer-events'); - shareIconEl.style.removeProperty('display'); - loadingIconEl.style.display = 'none'; -}""" \ No newline at end of file diff --git a/spaces/zxy666/bingo-chatai666/src/components/ui/alert-dialog.tsx 
b/spaces/zxy666/bingo-chatai666/src/components/ui/alert-dialog.tsx deleted file mode 100644 index 17fec4d16510328deacc1416569173c97761ef72..0000000000000000000000000000000000000000 --- a/spaces/zxy666/bingo-chatai666/src/components/ui/alert-dialog.tsx +++ /dev/null @@ -1,150 +0,0 @@ -'use client' - -import * as React from 'react' -import * as AlertDialogPrimitive from '@radix-ui/react-alert-dialog' - -import { cn } from '@/lib/utils' -import { buttonVariants } from '@/components/ui/button' - -const AlertDialog = AlertDialogPrimitive.Root - -const AlertDialogTrigger = AlertDialogPrimitive.Trigger - -const AlertDialogPortal = ({ - className, - children, - ...props -}: AlertDialogPrimitive.AlertDialogPortalProps) => ( - -
    - {children} -
    -
    -) -AlertDialogPortal.displayName = AlertDialogPrimitive.Portal.displayName - -const AlertDialogOverlay = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - -)) -AlertDialogOverlay.displayName = AlertDialogPrimitive.Overlay.displayName - -const AlertDialogContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - - - - -)) -AlertDialogContent.displayName = AlertDialogPrimitive.Content.displayName - -const AlertDialogHeader = ({ - className, - ...props -}: React.HTMLAttributes) => ( -
    -) -AlertDialogHeader.displayName = 'AlertDialogHeader' - -const AlertDialogFooter = ({ - className, - ...props -}: React.HTMLAttributes) => ( -
    -) -AlertDialogFooter.displayName = 'AlertDialogFooter' - -const AlertDialogTitle = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -AlertDialogTitle.displayName = AlertDialogPrimitive.Title.displayName - -const AlertDialogDescription = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -AlertDialogDescription.displayName = - AlertDialogPrimitive.Description.displayName - -const AlertDialogAction = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -AlertDialogAction.displayName = AlertDialogPrimitive.Action.displayName - -const AlertDialogCancel = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -AlertDialogCancel.displayName = AlertDialogPrimitive.Cancel.displayName - -export { - AlertDialog, - AlertDialogTrigger, - AlertDialogContent, - AlertDialogHeader, - AlertDialogFooter, - AlertDialogTitle, - AlertDialogDescription, - AlertDialogAction, - AlertDialogCancel -}