diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Branding.zip Igo Primo 2.4https Scoutmails.com Index301.php K Branding.zip Igo Primo 2.4.md b/spaces/1gistliPinn/ChatGPT4/Examples/Branding.zip Igo Primo 2.4https Scoutmails.com Index301.php K Branding.zip Igo Primo 2.4.md deleted file mode 100644 index 4d77ee036d921c1442003511a125b7900201ff75..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Branding.zip Igo Primo 2.4https Scoutmails.com Index301.php K Branding.zip Igo Primo 2.4.md +++ /dev/null @@ -1,6 +0,0 @@ -

branding.zip igo primo 2.4https: scoutmails.com index301.php k branding.zip igo primo 2.4


Download File: https://imgfil.com/2uxZEF



-
- 4d29de3e1b
-
-
-

diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/CyberLink PowerDirector Ultimate 16.0.2313.0 Inc Keygen Crack REPACK Crack REPACK.md b/spaces/1gistliPinn/ChatGPT4/Examples/CyberLink PowerDirector Ultimate 16.0.2313.0 Inc Keygen Crack REPACK Crack REPACK.md deleted file mode 100644 index 7b1a29ffc394bcaa07d87560d58bfc5600056f3d..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/CyberLink PowerDirector Ultimate 16.0.2313.0 Inc Keygen Crack REPACK Crack REPACK.md +++ /dev/null @@ -1,14 +0,0 @@ -

CyberLink PowerDirector Ultimate 16.0.2313.0 Incl Keygen Crack


DOWNLOAD: https://imgfil.com/2uxY6I



-
-CyberLink PowerDirector Ultimate 16.0.20271 Incl Serial Key ... CyberLink PowerDirector Ultimate 15.0.2509.0 Final + Crack-Keygen - [Softhound]. Download Alawar game keys for free: new Alawar games, keys to Alawar games, and advice on how to choose the right Alawar games for your computer. How to download and play Alawar games for free, without restrictions and without a key. Key to the game Alawar: Twilight. 8a78ff9644
-
-
-

diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/BombSquad Pro Mod APK Unlock All Features and Enjoy Explosive Fun.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/BombSquad Pro Mod APK Unlock All Features and Enjoy Explosive Fun.md deleted file mode 100644 index 3670a558a9ec3d7b5a12edcf45369a3f39c2aba0..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/BombSquad Pro Mod APK Unlock All Features and Enjoy Explosive Fun.md +++ /dev/null @@ -1,88 +0,0 @@ - -

Download BombSquad Pro Mod APK and Enjoy Explosive Fun with Your Friends

-

Do you love blowing things up and having fun with your friends? If yes, then you should try BombSquad, a hilarious and addictive game that lets you play various mini-games with bombs, pirates, ninjas, and more. And if you want to enjoy the game without any limitations, then you should download BombSquad Pro Mod APK, which gives you access to all the characters, tickets, and features of the game for free. In this article, we will tell you everything you need to know about BombSquad and BombSquad Pro Mod APK, and how to download and install it on your device.

-

download bombsquad pro mod apk


Download: https://urlin.us/2uSXdh



-

What is BombSquad?

-

BombSquad is a game developed by Eric Froemling that allows you to blow up your friends in various mini-games ranging from capture-the-flag to hockey. The game features 8 players local/networked multiplayer, gratuitous explosions, advanced ragdoll face-plant physics, pirates, ninjas, barbarians, insane chefs, and more. You can play the game on your Android device, or on your PC using a controller or a keyboard.

-

Features of BombSquad

-

BombSquad has many features that make it a fun and exciting game to play with your friends. Here are some of them:

-

Multiplayer mode

-

You can play BombSquad with up to 8 players on the same device or over the internet. You can also join online servers and play with other players from around the world. You can create your own team or join an existing one, and compete against other teams in various modes.

-

Various mini-games

-

BombSquad has a variety of mini-games that you can choose from, such as capture-the-flag, king-of-the-hill, elimination, race, hockey, football, basketball, and more. Each mini-game has its own rules and objectives, and requires different strategies and skills to win. You can also create your own custom mini-games using the built-in editor.

-


-

Customizable characters

-

You can customize your character in BombSquad by choosing from different outfits, colors, accessories, and taunts. You can also unlock new characters by playing the game or buying them with tickets. Some of the characters include pirates, ninjas, barbarians, robots, zombies, aliens, animals, and more.

-

Ragdoll physics

-

BombSquad has realistic ragdoll physics that make the game more hilarious and enjoyable. You can see your character fly through the air, bounce off walls, fall down stairs, get hit by bombs, and more. You can also use the ragdoll button to make your character go limp at any time.

-

What is BombSquad Pro Mod APK?

-

BombSquad Pro Mod APK is a modified version of the original BombSquad game that gives you access to all the pro features of the game for free. This means that you can enjoy all the characters, tickets, and modes of the game without spending any money or watching any ads.

-

Benefits of BombSquad Pro Mod APK

-

BombSquad Pro Mod APK has many benefits that make it better than the original game. Here are some of them:

All characters unlocked

-

With BombSquad Pro Mod APK, you can unlock all the characters in the game without having to play the game or buy them with tickets. You can choose from over 50 characters, each with their own unique appearance and personality. You can also mix and match different outfits, colors, and accessories to create your own custom character.

-

All tickets unlocked

-

Tickets are the currency of BombSquad that you can use to buy new characters, outfits, accessories, and mini-games. You can earn tickets by playing the game or watching ads, but it can take a long time to accumulate enough tickets to buy everything you want. With BombSquad Pro Mod APK, you can get unlimited tickets for free, and buy anything you want without any restrictions.

-

No ads

-

Ads can be annoying and distracting when you are playing a game, especially when they pop up in the middle of a match or a mini-game. They can also slow down your device and consume your data. With BombSquad Pro Mod APK, you can get rid of all the ads in the game, and enjoy a smooth and uninterrupted gaming experience.

-

How to download and install BombSquad Pro Mod APK?

-

If you want to download and install BombSquad Pro Mod APK on your device, you need to follow some simple steps. Here they are:

-

Steps to download and install BombSquad Pro Mod APK

-

Step 1: Enable unknown sources

-

Before you can install BombSquad Pro Mod APK on your device, you need to enable unknown sources in your device settings. This will allow you to install apps from sources other than the Google Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on.

-

Step 2: Download the APK file

-

Next, you need to download the APK file of BombSquad Pro Mod APK from a reliable source. You can use the link below to download the latest version of the file:

-

Download BombSquad Pro Mod APK

-

Step 3: Install the APK file

-

Once you have downloaded the APK file, you need to locate it in your device storage and tap on it to start the installation process. You may see a warning message asking for your permission to install the app. Just tap on Install and wait for the installation to complete.
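If your phone is connected to a computer with USB debugging enabled, you can also sideload the same file with adb instead of tapping through the installer; adb installs generally don't go through the unknown-sources prompt. A minimal Python sketch; the file name is a placeholder, and it assumes the adb tool from the Android SDK platform-tools is on your PATH:

```python
import subprocess

# Placeholder file name; point this at the APK you downloaded.
apk_path = "bombsquad-pro-mod.apk"

# "-r" replaces (updates) the app if it is already installed.
result = subprocess.run(["adb", "install", "-r", apk_path],
                        capture_output=True, text=True)
print(result.stdout or result.stderr)
```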

-

Step 4: Launch the game and enjoy

-

After the installation is done, you can launch the game from your app drawer or home screen. You will see that you have access to all the pro features of the game for free. You can now enjoy playing BombSquad with your friends and have explosive fun.

-

Conclusion

-

BombSquad is a fun and addictive game that lets you play various mini-games with bombs and your friends. It has many features that make it an enjoyable game for all ages. However, if you want to enjoy the game without any limitations, you should download BombSquad Pro Mod APK, which gives you access to all the characters, tickets, and features of the game for free. You can download and install BombSquad Pro Mod APK by following the steps mentioned above. We hope this article was helpful for you. If you have any questions or feedback, feel free to leave a comment below.

-

FAQs

-

Here are some frequently asked questions about BombSquad and BombSquad Pro Mod APK:

Q: Is BombSquad Pro Mod APK safe to use?
A: Yes, BombSquad Pro Mod APK is safe to use as long as you download it from a trusted source. However, we recommend that you use it at your own risk, as we are not responsible for any damage or loss caused by using it.

Q: Can I play BombSquad online with BombSquad Pro Mod APK?
A: Yes, you can play BombSquad online with BombSquad Pro Mod APK. However, you may face some issues or errors while playing online, as some servers may not support modded versions of the game.

Q: Can I update BombSquad Pro Mod APK?
A: Yes, you can update BombSquad Pro Mod APK whenever a new version is available. However, you may lose some of your progress or data if you update it without backing it up first.

Q: What are the minimum requirements to play BombSquad on Android?
A: The minimum requirements to play BombSquad on Android are: Android 4.4 or higher, 1 GB of RAM, and 100 MB of free storage space.

Q: How can I contact the developer of BombSquad?
A: You can contact the developer of BombSquad by visiting his website, or by sending him an email at eric@froemling.net.

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Beach Buggy Racing for PC Download the EXE File and Race Against Crazy Characters.md b/spaces/1phancelerku/anime-remove-background/Beach Buggy Racing for PC Download the EXE File and Race Against Crazy Characters.md deleted file mode 100644 index 7fa148eb77fc3e96819140e231fd435dd37630fa..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Beach Buggy Racing for PC Download the EXE File and Race Against Crazy Characters.md +++ /dev/null @@ -1,142 +0,0 @@ -
-

Beach Buggy Racing Exe File Download: How to Enjoy This Fun and Free Kart Racing Game on Your PC

-

Do you love kart racing games? If so, you might have heard of Beach Buggy Racing, a popular game for mobile devices that lets you drive into an action-packed, surprise-filled world of off-road kart racing mayhem. You can race against a field of rival drivers, each with unique personalities and special abilities. You can also build a collection of crazy power-ups, like Dodgeball Frenzy, Fireball, and Oil Slick. You can unlock and upgrade a variety of cars, from dune buggies to monster trucks. You can test your skills in 6 different game modes on 15 imaginative 3D race tracks, against a pack of tropical-loving rivals with a serious case of road rage!

-

beach buggy racing exe file download


DOWNLOAD: https://jinyurl.com/2uNSgf



-

But what if you want to play Beach Buggy Racing on your PC instead of your mobile device? Is there a way to do that? The answer is yes! You can download and play Beach Buggy Racing on your PC using an exe file. An exe file is an executable file that contains a program or software that can run on your PC. By downloading an exe file of Beach Buggy Racing, you can enjoy this fun and free kart racing game on your PC without any hassle.

-

Why would you want to play Beach Buggy Racing on your PC instead of your mobile device? Well, there are many benefits of playing Beach Buggy Racing on your PC. For example:

- You can enjoy better graphics quality and sound effects on a bigger screen.
- You can use a keyboard, mouse, or gamepad for more precise control than touch input.
- You can get smoother performance and more stability from your PC's hardware.
- You can play the split-screen multiplayer and custom modes comfortably with friends on the same PC.

As you can see, playing Beach Buggy Racing on your PC using an exe file download has many advantages. But how can you do that? How can you download and install Beach Buggy Racing exe file on your PC? How can you play Beach Buggy Racing on your PC? What are the game features, tips, and tricks that you need to know? In this article, we will answer all these questions and more. So, buckle up and get ready for some beach buggy racing fun!

-

How to Download and Install Beach Buggy Racing Exe File on Your PC

-

Downloading and installing Beach Buggy Racing exe file on your PC is very easy and simple. You just need to follow these steps:

-

-
    -
1. Find a reliable and safe source for the exe file download. There are many websites that offer exe file downloads for various games and software, but not all of them are trustworthy and secure. Some of them may contain viruses, malware, or spyware that can harm your PC or steal your personal information. Therefore, you need to be careful and choose a reputable and verified source for the exe file download. One of the best sources for Beach Buggy Racing exe file download is the official website of Vector Unit, the developer of the game. You can visit their website at https://www.vectorunit.com/beach-buggy-racing and click on the "Download for Windows" button to get the exe file.
2. Download the exe file to your PC and run it as an administrator. The file size is about 100 MB, so it should not take too long to download depending on your internet speed. After downloading the exe file, you need to run it as an administrator to start the installation process. To do that, right-click on the exe file and select "Run as administrator" from the menu (a scripted way to do this is sketched after this list). This will allow the exe file to make changes to your PC and install the game properly.
3. Follow the installation instructions and launch the game. After running the exe file as an administrator, you will see a window with the installation instructions. Follow these instructions carefully and agree to the terms and conditions of the game. You also need to choose a destination folder for the game files and create a shortcut for the game on your desktop or start menu. The installation process should not take more than a few minutes. After it completes, you can launch the game by clicking on the shortcut or by finding it in your destination folder.
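For the scripted alternative to step 2: on Windows, the same "Run as administrator" action is exposed through the ShellExecute "runas" verb. A minimal Python sketch; the file path is a placeholder for wherever you saved the installer:

```python
import ctypes

# Placeholder path; point this at the downloaded installer.
exe_path = r"C:\Users\you\Downloads\BeachBuggyRacingSetup.exe"

# The "runas" verb asks Windows to launch the program elevated,
# which triggers the usual UAC confirmation prompt.
ctypes.windll.shell32.ShellExecuteW(None, "runas", exe_path, None, None, 1)
```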
-

Congratulations! You have successfully downloaded and installed Beach Buggy Racing exe file on your PC. Now you can enjoy this fun and free kart racing game on your PC anytime you want.

How to Play Beach Buggy Racing on Your PC

-

Now that you have downloaded and installed Beach Buggy Racing exe file on your PC, you are ready to play the game. But how can you play Beach Buggy Racing on your PC? What are the settings, controls, and graphics that you need to customize for optimal performance and experience? How can you choose your driver, car, and power-ups for different game modes and tracks? How can you use keyboard, mouse, or gamepad to control your car and activate power-ups? In this section, we will answer all these questions and more. Here is how you can play Beach Buggy Racing on your PC:

- -

Once you have chosen your driver, car, and power-ups for the game mode and track you want to play, you are ready to start the race. But how do you control your car and activate your power-ups? Here is how you can do that:

- -

That's it! You have learned how to play Beach Buggy Racing on your PC using keyboard, mouse, or gamepad. Now you can enjoy this fun and free kart racing game on your PC with better graphics quality, sound effects, controls, performance, stability, security, multiplayer mode, custom mode, etc.

Beach Buggy Racing Game Features, Tips, and Tricks

-

Now that you know how to download, install, and play Beach Buggy Racing on your PC, you may want to learn more about the game features, tips, and tricks that will make your gaming experience more fun and exciting. In this section, we will tell you what are the main game features that make Beach Buggy Racing stand out from other kart racing games. We will also give you some tips and tricks to help you improve your skills and win more races. Here are the game features, tips, and tricks that you need to know:

-

What are the main game features that make Beach Buggy Racing fun and exciting?

-

Beach Buggy Racing is not just another kart racing game. It has many unique and amazing features that make it different from other games in the genre. Here are some of the main game features that make Beach Buggy Racing fun and exciting:

- -

These are some of the main game features that make Beach Buggy Racing fun and exciting. But how can you master these features and win more races? What are some tips and tricks to help you improve your skills? Here are some tips and tricks to help you out:

-

What are some tips and tricks to help you improve your skills and win more races?

-

Beach Buggy Racing is not just a game of luck or chance. It is also a game of skill and strategy. You need to practice regularly and learn from your mistakes to become a better racer. You also need to use some tips and tricks to gain an edge over your opponents. Here are some tips and tricks to help you improve your skills and win more races:

- -

These are some of the tips and tricks that will help you improve your skills and win more races in Beach Buggy Racing. But remember, the most important thing is to have fun and enjoy the game!

-

Conclusion

-

In conclusion, Beach Buggy Racing is a fun and free kart racing game that you can download and play on your PC using an exe file. By downloading an exe file of Beach Buggy Racing, you can enjoy this game on your PC with better graphics quality, sound effects, controls, performance, stability, security, multiplayer mode, custom mode, etc. You can also customize your settings, controls, and graphics according to your preferences and PC specifications. You can also choose your driver, car, and power-ups from a variety of options for different game modes and tracks. You can also use keyboard, mouse, or gamepad to control your car and activate power-ups. You can also learn about the game features, tips, and tricks that will make your gaming experience more fun and exciting.

-

If you love kart racing games, you should definitely try out Beach Buggy Racing on your PC using an exe file download. It is a game that will keep you entertained for hours with its colorful graphics, dynamic gameplay, diverse options, surprises, and challenges. You can also play with your friends or family on the same PC using the split-screen multiplayer mode or with other players from around the world using the online multiplayer mode. You can also create your own rules and challenges using the custom game mode. Beach Buggy Racing is a game that will make you feel like you are in a tropical paradise with its tropical theme and elements. Beach Buggy Racing is a game that will make you smile and laugh with its humorous and quirky characters and power-ups. Beach Buggy Racing is a game that will make you addicted and satisfied with its realistic and physics-based gameplay and effects.

-

So, what are you waiting for? Download Beach Buggy Racing exe file on your PC today and enjoy this fun and free kart racing game on your PC. You will not regret it!

-

If you want to learn more about Beach Buggy Racing, you can visit the official website or social media pages of Vector Unit, the developer of the game. You can also check out some reviews, videos, screenshots, and FAQs of the game online. You can also share your feedback, suggestions, questions, or comments about the game with other players or with the developers. You can also rate and review the game on various platforms and websites.

-

Thank you for reading this article. We hope you found it helpful and informative. We hope you have a great time playing Beach Buggy Racing on your PC using an exe file download. Happy racing!

-

FAQs

-

Here are some frequently asked questions (FAQs) about Beach Buggy Racing exe file download:

Q: Is Beach Buggy Racing exe file download safe and secure?
A: Yes, Beach Buggy Racing exe file download is safe and secure if you download it from a reliable and verified source such as the official website of Vector Unit, the developer of the game. However, you should be careful and avoid downloading the exe file from unknown or suspicious sources, as they may contain viruses, malware, or spyware that can harm your PC or steal your personal information.

Q: Is Beach Buggy Racing exe file download free and legal?
A: Yes, Beach Buggy Racing exe file download is free and legal if you download it from a legitimate and authorized source such as the official website of Vector Unit. However, you should not download or distribute the exe file from illegal or unauthorized sources, as they may violate the intellectual property rights of the developer or publisher of the game.

Q: What are the system requirements for Beach Buggy Racing exe file download?
A: The minimum system requirements are: Windows 7 or higher; 2 GB RAM; 1 GB free disk space; DirectX 9.0c or higher; Intel HD Graphics 4000 or better; keyboard, mouse, or gamepad. The recommended system requirements are: Windows 10; 4 GB RAM; 2 GB free disk space; DirectX 11 or higher; NVIDIA GeForce GTX 650 or better; keyboard, mouse, or gamepad.

Q: How can I uninstall Beach Buggy Racing exe file from my PC?
A: Go to the Control Panel of your PC and click on "Uninstall a program". Find Beach Buggy Racing in the list of programs, click on "Uninstall", follow the uninstallation instructions, and confirm your choice. Alternatively, you can go to the destination folder where you installed Beach Buggy Racing and run the "unins000.exe" file as an administrator.

Q: How can I contact Vector Unit, the developer of Beach Buggy Racing?
A: You can contact Vector Unit by visiting their website at https://www.vectorunit.com/ and clicking on the "Contact" button, where you can fill out a form with your name, email address, subject, and message. You can also send an email to support@vectorunit.com, or follow them on Facebook, Twitter, Instagram, YouTube, Discord, etc.

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Cross Racing Championship Extreme A Classic Racing Game with Modern Features.md b/spaces/1phancelerku/anime-remove-background/Cross Racing Championship Extreme A Classic Racing Game with Modern Features.md deleted file mode 100644 index 16d8f031d76d49d946a92da863e8668e46bd6f40..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Cross Racing Championship Extreme A Classic Racing Game with Modern Features.md +++ /dev/null @@ -1,98 +0,0 @@ -
-

Cross Racing Championship Extreme: A Review

-

If you are looking for a racing game that offers you the thrill of high-speed on and off road racing across vast open terrains, then you should check out Cross Racing Championship Extreme (CRCE). CRCE is a racing simulation game that was originally released in 2005 by Invictus Games Ltd. and has recently been re-released on Steam in an enhanced version. In this article, I will review CRCE and tell you why it is a great racing game that deserves your attention.

-

Introduction

-

CRCE is a racing game that allows you to experience the excitement of various racing disciplines, such as rally, rallycross, autocross, off-road, street racing, and more. You can race in over 60 different events across six distinct environments, ranging from icy mountainous regions and lush countryside to parched desert areas and beaches. You can also choose from a wide selection of cars, from classic hatchbacks and muscle cars to exotic supercars and off-road vehicles.

-

cross racing championship extreme download


Download File: https://jinyurl.com/2uNL1j



-

One of the main features of CRCE is its realistic handling system and damage model, which make the driving experience more challenging and immersive. You have to take into account the terrain, weather, and car condition when racing, as they affect your performance and control. You also have to deal with the consequences of crashing, as your car can get damaged or even destroyed. You can repair your car in the garage, but it will cost you money and time.

-

Another feature of CRCE is its non-linear career mode, which lets you progress through different racing categories at your own pace. You can choose which events to enter, which cars to buy or sell, and how to upgrade or customize them. You can also unlock new cars, tracks, and modes by completing certain objectives or challenges. You can also earn money by winning races or performing stunts, which you can use to buy new cars or parts.

-

Gameplay

-

Single player mode

-

In single player mode, you can start your career as a rookie racer and work your way up to become a champion. You can enter various events that suit your style and preference, such as circuit races, time trials, drift contests, stunt shows, and more. You can also choose the difficulty level, the number of opponents, the weather conditions, and other settings for each event.

-

One of the most important aspects of single player mode is car customization and tuning. You can modify your car's appearance by changing its color, decals, number plates, and more. You can also improve your car's performance by upgrading its engine, transmission, suspension, brakes, tires, and more. You can also fine-tune your car's settings by adjusting its gear ratios, camber angles, brake bias, and more.

-

As you progress through your career, you will unlock new cars, tracks, and modes. Some of the cars include Ford Focus RS WRC 03, Subaru Impreza WRX STi 04, Mitsubishi Lancer Evolution VIII MR FQ400 04, Porsche 911 GT3 RS, Lamborghini Murcielago R-GT, Ferrari F430 Challenge, and more. Some of the tracks include England, France, Hungary, Egypt, Finland, and more. Some of the modes include Free Ride, where you can explore the open world and perform stunts; Ghost Race, where you can race against your own or other players' best times; and Hot Lap, where you can try to beat the lap records of the developers.
-

-

Multiplayer mode

-

In multiplayer mode, you can join or host online lobbies and race with other players from around the world. You can choose from different multiplayer game modes and maps, such as Capture the Flag, Bomb Run, Destruction Zone, and more. You can also compete with other players in ranked or unranked races and rank up on the global leaderboards.

-

Multiplayer mode is a great way to test your skills and have fun with other racing enthusiasts. You can chat with other players, challenge them to duels, or team up with them in cooperative modes. You can also customize your car and show it off to other players. You can also download and share custom cars, tracks, and mods from the Steam Workshop.

-

Graphics and Sound

-

Graphics

-

CRCE features realistic physics and damage system that make the racing experience more authentic and dynamic. You can see your car getting dented, scratched, or even losing parts as you crash into obstacles or other cars. You can also see the dust, smoke, water, and mud effects as you drive on different terrains. You can also see the weather effects, such as rain, snow, fog, and wind, that affect your visibility and traction.

-

CRCE also creates detailed and living environments that make the racing experience more immersive and diverse. You can see the trees swaying in the wind, the birds flying in the sky, the animals roaming in the fields, and the people cheering in the stands. You can also see the landmarks, buildings, bridges, and monuments that add to the realism and variety of each location.

-

CRCE also supports various screen resolutions and aspect ratios that make the racing experience more compatible and customizable. You can choose from different display modes, such as windowed, fullscreen, or borderless. You can also adjust the graphics settings, such as texture quality, shadow quality, anti-aliasing, and more. You can also enable or disable various effects, such as motion blur, lens flare, bloom, and more.

-

Sound

-

CRCE features original rock/metal soundtracks by SZEG that make the racing experience more energetic and exhilarating. You can listen to over 40 tracks that suit the mood and atmosphere of each race. You can also listen to your own music by adding your MP3 files to the game folder.

-

CRCE also allows you to listen to immersive sound effects and engine noises that make the racing experience more realistic and intense. You can hear the roar of your engine, the screech of your tires, the crunch of your collisions, and the blast of your nitro. You can also hear the ambient sounds of each environment, such as the wind blowing, the water splashing, or the crowd cheering.

-

Conclusion

-

In conclusion, CRCE is a racing game that offers you a lot of fun and challenge in various racing disciplines across vast open terrains. It has a realistic physics and damage system, detailed graphics, original soundtracks, and a non-linear career mode. It also has multiplayer mode, Steam Workshop support, and various customization and tuning options. It is a racing game that will keep you entertained for hours and challenge you to become the best racer.
-

If you are interested in CRCE, you can buy it on Steam for $9.99. You can also visit the official website or the Steam community page for more information and updates. You can also watch some gameplay videos or read some user reviews to see what other players think about CRCE.

-

I hope you enjoyed this article and found it helpful. If you have any questions or comments, feel free to leave them below. Thank you for reading and happy racing!

-

FAQs

-

Here are some frequently asked questions about CRCE:

-

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Egg Inc. APK A Simulation Game with Chickens Research and Space.md b/spaces/1phancelerku/anime-remove-background/Egg Inc. APK A Simulation Game with Chickens Research and Space.md deleted file mode 100644 index 335817a56333fb03cd911cf3dc35394237ae9a19..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Egg Inc. APK A Simulation Game with Chickens Research and Space.md +++ /dev/null @@ -1,161 +0,0 @@ -
-

Egg Inc Download APK: How to Play the Ultimate Egg Farming Game on Your Android Device

-

If you are looking for a fun and addictive simulation game that will keep you entertained for hours, then you should try Egg Inc. This game lets you build your own egg empire from scratch, with hundreds of chickens, dozens of research items, and many challenges to complete. You can also explore the secrets of the universe hidden in the chicken egg, launch space expeditions, and join forces with other players in co-op mode. In this article, we will show you how to download and install Egg Inc APK on your Android device, how to play the game, and some tips and tricks to help you succeed.

-

egg inc download apk


Download File https://jinyurl.com/2uNMff



-

What is Egg Inc?

-

A brief introduction to the game and its features

-

Egg Inc is a simulation game developed by Auxbrain Inc, a company that specializes in creating casual games with unique gameplay and graphics. The game was released in 2016 and has since gained over 10 million downloads on Google Play Store. It has also received positive reviews from critics and players alike, who praised its originality, humor, and depth.

-

The game is set in the near future, where the secrets of the universe will be unlocked in the chicken egg. You have decided to get in on the gold rush and sell as many eggs as you can. To do that, you need to hatch chickens, build hen houses, hire drivers, commission research, launch space expeditions, and more. The game is an incremental (clicker) game at its core, but it also uses many elements from simulation games that give it a unique feel and play style. You can interact with your farm in various ways, such as tapping on chickens, swiping on vehicles, or zooming in and out. You can also customize your farm with different themes, decorations, and music.

-

Why you should play Egg Inc

-

There are many reasons why you should play Egg Inc, but here are some of the main ones:

- -

How to download and install Egg Inc APK

-

If you want to play Egg Inc on your Android device, you have two options: you can either download it from Google Play Store, or you can download it from a third-party source such as APKCombo. The latter option may be useful if you want to access older versions of the game or if you have compatibility issues with your device. However, you should be careful about the source of the APK file, as it may contain malware or viruses that can harm your device. Only download APK files from reputable and trusted websites, such as APKCombo. Here are the steps to download and install Egg Inc APK from APKCombo:

1. Go to the APKCombo website and search for Egg Inc in the search bar. You can also use this direct link to go to the Egg Inc page.
2. On the Egg Inc page, you will see various versions of the game, along with their release dates, sizes, and ratings. Choose the version that is compatible with your device and tap on the Download APK button.
3. A pop-up window will appear, asking you to choose a download method. You can either use a QR code scanner app to scan the code and download the file directly to your device, or you can use a download manager app to download the file faster and more securely. Choose the option that suits you best and follow the instructions on the screen.
4. Once the APK file is downloaded, locate it in your device's file explorer app and tap on it to install it. You may need to allow installation from unknown sources if you haven't done so already. To do that, go to Settings > Apps > Special access > Install unknown apps and enable the permission for your browser or file manager app. If the site publishes a checksum for the file, you can verify the download first, as sketched after this list.
5. After the installation is complete, you can launch Egg Inc from your app drawer and enjoy the game.
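If the download site publishes a checksum for the APK (not all sites do), verifying the file before installing it catches both corrupted downloads and tampered files. A minimal Python sketch; the file path and the expected hash are placeholders:

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Compute the SHA-256 hex digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholders: your actual download path and the hash the site publishes.
apk_path = "egg-inc.apk"
expected = "0000000000000000000000000000000000000000000000000000000000000000"

actual = sha256_of(apk_path)
print("OK" if actual == expected else "MISMATCH: " + actual)
```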

How to Play Egg Inc

-

The basics of egg farming

-

Egg Inc is a game that simulates the process of running an egg farm. Your goal is to produce as many eggs as possible and sell them for profit. To do that, you need to hatch chickens, build hen houses, hire drivers, commission research, launch space expeditions, and more.

-

The game has a simple interface that shows you your farm and its various elements. You can tap on any element to interact with it or view more information. You can also swipe left or right to move around your farm, or pinch in or out to zoom in or out.

-

The main element of your farm is the chicken coop, where you hatch chickens by tapping on the red button. The more chickens you have, the more eggs they produce. However, you also need to provide enough space for them in your hen houses, which you can build by tapping on the construction icon. You also need to deliver your eggs to the market by hiring drivers and buying vehicles, which you can do by tapping on the delivery icon.

-


-

You can earn money by selling your eggs, which depends on the type and quality of your eggs. You can also earn golden eggs, which are a special currency that you can use to buy boosters, upgrade your farm, or launch space expeditions. You can get golden eggs by completing missions, watching ads, or finding them randomly on your farm.

-

The different types of eggs and their benefits

-

As you progress in the game, you will be able to unlock new types of eggs that have different benefits and requirements. You can switch between different types of eggs by tapping on the egg icon at the top of the screen. Each type of egg has a different value, demand, and production rate. Some types of eggs also have special effects that can affect your farm or the world.

-

Here are some examples of the types of eggs you can unlock in Egg Inc:

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Type | Value | Demand | Production Rate | Special Effect |
| --- | --- | --- | --- | --- |
| Edible Egg | $0.25 | High | Normal | None |
| Superfood Egg | $1.25 | High | Normal | Increases happiness and health of people who eat it |
| Medical Egg | $6.25 | Medium | Normal | Cures diseases and extends lifespan of people who eat it |
| Rocket Fuel Egg | $30 | Low | Slow | Powers rockets and spaceships with its high energy density |
| Fusion Egg | $150 | Very Low | Very Slow | Creates clean and unlimited energy by fusing atoms inside it |
-

The various buildings, vehicles, and upgrades you can use

-

To increase your egg production and income, you can also use various buildings, vehicles, and upgrades that you can buy with your money or golden eggs. Here are some examples of what you can use:

- -

The research and missions you can complete

-

To unlock new features and challenges in the game, you can also complete research and missions that you can access by tapping on the research icon or the mission icon. Here are some examples of what you can do:

- -

The prestige and contracts system

-

To progress further in the game, you can also use the prestige and contracts system that you can access by tapping on the prestige icon or the contract icon. Here are some examples of what you can do:

- -

Tips and Tricks for Egg Inc

-

How to optimize your egg production and income

-

To optimize your egg production and income, you should follow these tips and tricks:

- -

How to use boosters and drones effectively

-

To use boosters and drones effectively, you should follow these tips and tricks:

- -

Conclusion

-

Egg Inc is a fun and addictive simulation game that lets you build your own egg empire from scratch. You can hatch chickens, build hen houses, hire drivers, commission research, launch space expeditions, and more. You can also unlock new types of eggs, each with their own benefits and requirements. You can also prestige your farm to start over with extra bonuses, or join contracts to cooperate with other players for bigger rewards. You can also complete achievements and trophies to earn golden eggs and prophecy eggs.

-

If you want to play Egg Inc on your Android device, you can download it from Google Play Store, or you can download it from a third-party source such as APKCombo. However, you should be careful about the source of the APK file, as it may contain malware or viruses that can harm your device. Only download APK files from reputable and trusted websites, such as APKCombo.

-

We hope this article has helped you learn more about Egg Inc and how to play it. If you have any questions or feedback, please feel free to leave a comment below. Happy egg farming!

-

FAQs

-

Q: How do I get more golden eggs?

-

A: You can get more golden eggs by completing missions, watching ads, shooting down drones, finding them randomly on your farm, or buying them with real money.

-

Q: How do I get more prophecy eggs?

-

A: You can get more prophecy eggs by completing trophy missions or joining contracts that reward them.

-

Q: How do I change the theme of my farm?

-

A: You can change the theme of your farm by tapping on the settings icon and choosing the theme option. You can choose from different themes, such as classic, winter, western, or futuristic.

-

Q: How do I launch a space expedition?

-

A: You can launch a space expedition by tapping on the rocket icon and choosing the expedition option. You need to have a certain amount of golden eggs and a certain type of egg to launch an expedition. You can also choose the duration and difficulty of the expedition. You can get various rewards from expeditions, such as money, golden eggs, boosters, or secrets.

-

Q: How do I create or join a co-op contract?

-

A: You can create or join a co-op contract by tapping on the contract icon and choosing the contract option. You need to have a certain type of egg and a certain farm value to join a contract. You can either join a public contract, which is open to anyone, or a private contract, which requires a code to join. You can also create your own contract and share the code with other players. You need to produce a certain amount of eggs within a certain time limit to complete a contract. You can get various rewards from contracts, such as money, golden eggs, prophecy eggs, or boosters.

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Enjoy the Classic Pinoy Game of Mahjong on Your Android Device with Pinoy Mahjong APK.md b/spaces/1phancelerku/anime-remove-background/Enjoy the Classic Pinoy Game of Mahjong on Your Android Device with Pinoy Mahjong APK.md deleted file mode 100644 index ae9ae841dd8a8ec4316e856b8b72a3020ed04e31..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Enjoy the Classic Pinoy Game of Mahjong on Your Android Device with Pinoy Mahjong APK.md +++ /dev/null @@ -1,146 +0,0 @@ - -

Pinoy Mahjong APK: A Fun and Easy Way to Play Mahjong on Your Phone

-

If you love playing mahjong, you might want to try Pinoy Mahjong APK, a mobile app that lets you enjoy the game anytime, anywhere. Pinoy Mahjong is a version of mahjong that is popular in the Philippines, where it is also known as Filipino mahjong or Pusoy Dos. It is a simple and fast-paced game that can be played by anyone, even if you are not familiar with the traditional rules of mahjong. In this article, we will tell you everything you need to know about Pinoy Mahjong APK, including what it is, how to download and install it, how to play it online with friends, and how it reflects the history and culture of mahjong.

-

pinoy mahjong apk


Download Zip: https://jinyurl.com/2uNOX5



-

What is Pinoy Mahjong?

-

Pinoy Mahjong is a single-player game based on the mahjong rules (not yet formally defined) used in the Philippines. This app is an implementation of those rules and runs on iPads as well as iPhones.

-

The origin and rules of Pinoy Mahjong

-

Mahjong is a tile-based game that was developed in the 19th century in China and has spread throughout the world since the early 20th century. It is played by four players (with some three-player variations found in parts of China, Japan, South Korea and Southeast Asia). While many variations of mahjong exist, most variations have some basic rules in common including how a piece is drawn and discarded, how a piece is robbed from another player, the use of suits (numbered tiles) and honors (winds and dragons), the basic kinds of melds allowed, how to deal the tiles and the order of play.

-

Pinoy Mahjong is one of the many variations of mahjong that emerged in different regions and countries. It is believed that mahjong was introduced to the Philippines by Chinese immigrants during the Spanish colonial period. Over time, the game adapted to the local culture and preferences, resulting in a unique version that differs from other forms of mahjong in several ways. Some of the main differences are:

- -

The features and benefits of Pinoy Mahjong APK

How to play Pinoy Mahjong online with friends using Discord

To find a game, search Discord for servers with keywords such as "pinoy mahjong", "mahjong", or "Filipino mahjong". You can also create your own server and invite your friends to join. Once you are in a server:

- Look for a channel that hosts Pinoy Mahjong games. You can also create your own channel and set up the game settings, such as the number of players, the difficulty level, and the game mode.
- After you have chosen or created a channel, join the game lobby and wait for other players to join. You can also invite your friends to join by sending them a link or a code.
- When the game starts, you will see the tiles on your screen and the other players' names and avatars. You can also chat, voice call, or video call with them using Discord's features.
- Play Pinoy Mahjong as you normally would, following the rules and the scoring system of the game mode. You can also use Discord's features to communicate and interact with other players.
- When the game ends, you will see the results and the rankings of each player. You can also view your stats and achievements on Discord's dashboard.
- You can play as many games as you want with your friends online using Discord. You can also join other servers and channels to play with different people and try different game modes.

    The advantages and challenges of playing Pinoy Mahjong online

    -

    Playing Pinoy Mahjong online with friends using Discord has many advantages and challenges that make it a different and exciting experience. Some of the advantages are:

    - -

    Some of the challenges are:

    - -

    How Pinoy Mahjong reflects the history and culture of mahjong

    -

    Pinoy Mahjong is not just a game, but also a reflection of the history and culture of mahjong. Mahjong is a game that has evolved and diversified over time, influenced by various factors such as geography, politics, religion, economics, and social norms. Pinoy Mahjong is one of the examples of how mahjong has adapted to different contexts and preferences. Here are some of the ways that Pinoy Mahjong reflects the history and culture of mahjong:

    -

    -

    The evolution and variations of mahjong

    -

    Mahjong is a game that has undergone many changes and modifications since its origin in China. Some of the factors that contributed to its evolution are:

    - -

    Pinoy Mahjong is one of the many variations of mahjong that emerged from these factors. It is a version that reflects the preferences and needs of the Filipino people, who are known for their creativity, adaptability, and hospitality. Pinoy Mahjong is a game that is easy to learn, fun to play, and suitable for any occasion.

    -

    The significance and symbolism of mahjong in different communities

    -

    Mahjong is not just a game, but also a symbol of many things in different communities. Some of the things that mahjong represents are:

    - -

    Pinoy Mahjong is one of the examples of how mahjong can have different meanings and functions in different communities. It is a game that reflects the culture and identity of the Filipino people, who are known for their optimism, resilience, and hospitality. Pinoy Mahjong is a game that can bring joy and happiness to anyone who plays it.

    -

    Conclusion

    -

    Pinoy Mahjong APK is a mobile app that allows you to play Pinoy Mahjong on your phone or tablet. It is a version of mahjong that is popular in the Philippines, where it is also known as Filipino mahjong or Pusoy Dos. It is a simple and fast-paced game that can be played by anyone, even if you are not familiar with the traditional rules of mahjong.

    -

    A summary of the main points

    -

    In this article, we have told you everything you need to know about Pinoy Mahjong APK, including:

- What Pinoy Mahjong is and how its rules differ from other versions of mahjong
- How to download and install Pinoy Mahjong APK on your device
- How to play Pinoy Mahjong online with friends using Discord
- How Pinoy Mahjong reflects the history and culture of mahjong

    A call to action to download and play Pinoy Mahjong APK

    -

If you are interested in playing Pinoy Mahjong APK, you can download it for free from the links below. You can also visit the official website or follow the social media accounts of Pinoy Mahjong APK for more information and updates. You can also share your feedback and suggestions with the developers or other players through the app or online platforms.

    -

Pinoy Mahjong APK is a fun and easy way to play mahjong on your phone or tablet. It can entertain you, challenge you, teach you, and connect you with others. So what are you waiting for? Download Pinoy Mahjong APK today and enjoy the game!

    -

    Frequently Asked Questions

    -

    Here are some of the frequently asked questions about Pinoy Mahjong APK:

    -

    Q: Is Pinoy Mahjong APK safe to download and play?

    -

A: Yes, Pinoy Mahjong APK is safe to download and play. It does not contain any viruses or malware that can harm your device or data, and it does not require any sensitive or personal information from you to play the game. However, you should always download Pinoy Mahjong APK from trusted sources such as the Google Play Store or the App Store to avoid any potential risks.

    -

Q: Is Pinoy Mahjong APK compatible with all devices and platforms?

    -

A: Pinoy Mahjong APK is compatible with most devices and platforms that run Android or iOS. However, some older or lower-end devices may experience performance issues or errors while playing the game. You can check the minimum system requirements and compatibility of Pinoy Mahjong APK on its official website or on the Google Play Store or App Store before downloading it.

    -

    Q: How can I contact the developers or support team of Pinoy Mahjong APK?

    -

A: If you have any questions, problems, or suggestions regarding Pinoy Mahjong APK, you can contact the developers or support team of Pinoy Mahjong APK in the following ways:

    - -

    Q: How can I update Pinoy Mahjong APK to the latest version?

    -

A: If you have downloaded Pinoy Mahjong APK from the Google Play Store or the App Store, you can update it automatically or manually through the app store. If you have downloaded Pinoy Mahjong APK from other sources, you can update it manually by downloading and installing the latest version from the official website or from the links provided below. You should always update Pinoy Mahjong APK to the latest version to enjoy the new features, improvements, and bug fixes.

    -

    Q: How can I uninstall Pinoy Mahjong APK from my device?

    -

    A: If you want to uninstall Pinoy Mahjong APK from your device, you can do so by following these steps:

    -
1. Go to your device's settings and look for the apps or applications menu.
2. Find and tap on Pinoy Mahjong APK from the list of apps installed on your device.
3. Tap on the uninstall button and confirm your action.
4. Wait for the uninstallation process to finish and check if Pinoy Mahjong APK is removed from your device.

    \ No newline at end of file diff --git a/spaces/3i2irg/first-app/app.py b/spaces/3i2irg/first-app/app.py deleted file mode 100644 index ce9522ea334f3405c5bf0fb6929e2c640c1c387e..0000000000000000000000000000000000000000 --- a/spaces/3i2irg/first-app/app.py +++ /dev/null @@ -1,20 +0,0 @@ -import gradio as gr - -from fastai.vision.all import * -import skimage - -learn = load_learner('export.pkl') - -labels = learn.dls.vocab -def predict(img): - img = PILImage.create(img) - pred,pred_idx,probs = learn.predict(img) - return {labels[i]: float(probs[i]) for i in range(len(labels))} - -title = "Emotion Classifier" -description = "An emotion classifier trained with images from DuckDuckGo image search and fastai." -examples = ['happyphoto.jpg', 'yoelphoto.jpg'] -interpretation='default' -enable_queue=True - -gr.Interface(fn=predict,inputs=gr.inputs.Image(shape=(512, 512)),outputs=gr.outputs.Label(num_top_classes=3),title=title,description=description,examples=examples,interpretation=interpretation,enable_queue=enable_queue).launch() \ No newline at end of file diff --git a/spaces/4Taps/SadTalker/src/face3d/util/load_mats.py b/spaces/4Taps/SadTalker/src/face3d/util/load_mats.py deleted file mode 100644 index f9a6fcc71de1d7dad8b0f81c67dc1c213764ff0b..0000000000000000000000000000000000000000 --- a/spaces/4Taps/SadTalker/src/face3d/util/load_mats.py +++ /dev/null @@ -1,120 +0,0 @@ -"""This script is to load 3D face model for Deep3DFaceRecon_pytorch -""" - -import numpy as np -from PIL import Image -from scipy.io import loadmat, savemat -from array import array -import os.path as osp - -# load expression basis -def LoadExpBasis(bfm_folder='BFM'): - n_vertex = 53215 - Expbin = open(osp.join(bfm_folder, 'Exp_Pca.bin'), 'rb') - exp_dim = array('i') - exp_dim.fromfile(Expbin, 1) - expMU = array('f') - expPC = array('f') - expMU.fromfile(Expbin, 3*n_vertex) - expPC.fromfile(Expbin, 3*exp_dim[0]*n_vertex) - Expbin.close() - - expPC = np.array(expPC) - expPC = np.reshape(expPC, [exp_dim[0], -1]) - expPC = np.transpose(expPC) - - expEV = np.loadtxt(osp.join(bfm_folder, 'std_exp.txt')) - - return expPC, expEV - - -# transfer original BFM09 to our face model -def transferBFM09(bfm_folder='BFM'): - print('Transfer BFM09 to BFM_model_front......') - original_BFM = loadmat(osp.join(bfm_folder, '01_MorphableModel.mat')) - shapePC = original_BFM['shapePC'] # shape basis - shapeEV = original_BFM['shapeEV'] # corresponding eigen value - shapeMU = original_BFM['shapeMU'] # mean face - texPC = original_BFM['texPC'] # texture basis - texEV = original_BFM['texEV'] # eigen value - texMU = original_BFM['texMU'] # mean texture - - expPC, expEV = LoadExpBasis(bfm_folder) - - # transfer BFM09 to our face model - - idBase = shapePC*np.reshape(shapeEV, [-1, 199]) - idBase = idBase/1e5 # unify the scale to decimeter - idBase = idBase[:, :80] # use only first 80 basis - - exBase = expPC*np.reshape(expEV, [-1, 79]) - exBase = exBase/1e5 # unify the scale to decimeter - exBase = exBase[:, :64] # use only first 64 basis - - texBase = texPC*np.reshape(texEV, [-1, 199]) - texBase = texBase[:, :80] # use only first 80 basis - - # our face model is cropped along face landmarks and contains only 35709 vertex. - # original BFM09 contains 53490 vertex, and expression basis provided by Guo et al. contains 53215 vertex. - # thus we select corresponding vertex to get our face model. 
- - index_exp = loadmat(osp.join(bfm_folder, 'BFM_front_idx.mat')) - index_exp = index_exp['idx'].astype(np.int32) - 1 # starts from 0 (to 53215) - - index_shape = loadmat(osp.join(bfm_folder, 'BFM_exp_idx.mat')) - index_shape = index_shape['trimIndex'].astype( - np.int32) - 1 # starts from 0 (to 53490) - index_shape = index_shape[index_exp] - - idBase = np.reshape(idBase, [-1, 3, 80]) - idBase = idBase[index_shape, :, :] - idBase = np.reshape(idBase, [-1, 80]) - - texBase = np.reshape(texBase, [-1, 3, 80]) - texBase = texBase[index_shape, :, :] - texBase = np.reshape(texBase, [-1, 80]) - - exBase = np.reshape(exBase, [-1, 3, 64]) - exBase = exBase[index_exp, :, :] - exBase = np.reshape(exBase, [-1, 64]) - - meanshape = np.reshape(shapeMU, [-1, 3])/1e5 - meanshape = meanshape[index_shape, :] - meanshape = np.reshape(meanshape, [1, -1]) - - meantex = np.reshape(texMU, [-1, 3]) - meantex = meantex[index_shape, :] - meantex = np.reshape(meantex, [1, -1]) - - # other info contains triangles, region used for computing photometric loss, - # region used for skin texture regularization, and 68 landmarks index etc. - other_info = loadmat(osp.join(bfm_folder, 'facemodel_info.mat')) - frontmask2_idx = other_info['frontmask2_idx'] - skinmask = other_info['skinmask'] - keypoints = other_info['keypoints'] - point_buf = other_info['point_buf'] - tri = other_info['tri'] - tri_mask2 = other_info['tri_mask2'] - - # save our face model - savemat(osp.join(bfm_folder, 'BFM_model_front.mat'), {'meanshape': meanshape, 'meantex': meantex, 'idBase': idBase, 'exBase': exBase, 'texBase': texBase, - 'tri': tri, 'point_buf': point_buf, 'tri_mask2': tri_mask2, 'keypoints': keypoints, 'frontmask2_idx': frontmask2_idx, 'skinmask': skinmask}) - - -# load landmarks for standard face, which is used for image preprocessing -def load_lm3d(bfm_folder): - - Lm3D = loadmat(osp.join(bfm_folder, 'similarity_Lm3D_all.mat')) - Lm3D = Lm3D['lm'] - - # calculate 5 facial landmarks using 68 landmarks - lm_idx = np.array([31, 37, 40, 43, 46, 49, 55]) - 1 - Lm3D = np.stack([Lm3D[lm_idx[0], :], np.mean(Lm3D[lm_idx[[1, 2]], :], 0), np.mean( - Lm3D[lm_idx[[3, 4]], :], 0), Lm3D[lm_idx[5], :], Lm3D[lm_idx[6], :]], axis=0) - Lm3D = Lm3D[[1, 2, 0, 3, 4], :] - - return Lm3D - - -if __name__ == '__main__': - transferBFM09() \ No newline at end of file diff --git a/spaces/801artistry/RVC801/lib/infer_pack/commons.py b/spaces/801artistry/RVC801/lib/infer_pack/commons.py deleted file mode 100644 index 54470986f37825b35d90d7efa7437d1c26b87215..0000000000000000000000000000000000000000 --- a/spaces/801artistry/RVC801/lib/infer_pack/commons.py +++ /dev/null @@ -1,166 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size * dilation - dilation) / 2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += ( - 0.5 * (torch.exp(2.0 * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2.0 * logs_q) - ) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return 
-torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def slice_segments2(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = math.log(float(max_timescale) / float(min_timescale)) / ( - num_timescales - 1 - ) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment - ) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2, 3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, 
norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1.0 / norm_type) - return total_norm diff --git a/spaces/A00001/bingothoo/tests/parse.ts b/spaces/A00001/bingothoo/tests/parse.ts deleted file mode 100644 index 92940fe6315f1d7cb2b267ba5e5a7e26460a1de3..0000000000000000000000000000000000000000 --- a/spaces/A00001/bingothoo/tests/parse.ts +++ /dev/null @@ -1,13 +0,0 @@ -import { promises as fs } from 'fs' -import { join } from 'path' -import { parseHeadersFromCurl } from '@/lib/utils' - -(async () => { - const content = await fs.readFile(join(__dirname, './fixtures/curl.txt'), 'utf-8') - const headers = parseHeadersFromCurl(content) - console.log(headers) - - const cmdContent = await fs.readFile(join(__dirname, './fixtures/cmd.txt'), 'utf-8') - const cmdHeaders = parseHeadersFromCurl(cmdContent) - console.log(cmdHeaders) -})() diff --git "a/spaces/AB-TW/team-ai/documents/bussiness_context/NOTION_DB/Engineering Wiki 2402f5396a3244fdb3f1d135bdb0f3d6/ABstract\357\274\210\346\217\222\344\273\266\345\214\226AB Testing\345\271\263\345\217\260\357\274\211 746b87acd94643ca871ec661b63f196c.md" "b/spaces/AB-TW/team-ai/documents/bussiness_context/NOTION_DB/Engineering Wiki 2402f5396a3244fdb3f1d135bdb0f3d6/ABstract\357\274\210\346\217\222\344\273\266\345\214\226AB Testing\345\271\263\345\217\260\357\274\211 746b87acd94643ca871ec661b63f196c.md" deleted file mode 100644 index b9b5eb5b6c132f8073b5be3230d977c88d96c303..0000000000000000000000000000000000000000 --- "a/spaces/AB-TW/team-ai/documents/bussiness_context/NOTION_DB/Engineering Wiki 2402f5396a3244fdb3f1d135bdb0f3d6/ABstract\357\274\210\346\217\222\344\273\266\345\214\226AB Testing\345\271\263\345\217\260\357\274\211 746b87acd94643ca871ec661b63f196c.md" +++ /dev/null @@ -1,123 +0,0 @@ -# ABstract(插件化AB Testing平台) - -Last edited time: April 23, 2023 3:58 PM -Owner: Anonymous - -Project name: "ABstract". It plays on the words "AB testing" while also hinting at the concept of abstracting away the complexity of building and managing an AB testing platform. - -> 这篇文档介绍了一个插件化AB Testing平台的产品愿景、项目目标、核心业务能力、业务模型和进程间架构。该平台提供配置管理、实验管理、数据收集和配置发布等核心能力,可以帮助公司业务/开发人员基于AB测试实现数据驱动产品迭代,同时提供核心能力插件化管理和最小实现,支持开发人员结合实际需求进行剪裁或扩展。 -> - -## 产品愿景 - -对于:我们的目标客户/用户 - - - -他们想:目标客户的痛点或者希望 - - - -这个:产品名称 - - - -是一个:什么样的产品类型(平台?工具?) - - - -它可以:通过什么样的功能,为用户带来什么样的价值。 - - - -不同于:市场上的竞品及其特点 - - - - - -它的优势是:我们产品的独特价值 - - - -## 项目目标 - -> 完成插件化AB Testing 平台核心功能开发 -> - -> 探索AI在软件开发中的应用实践 -> - -## 核心业务能力 - -- 配置管理 - 1. Feature Flag管理 - 1. 提供Feature Config的元数据 - 2. Feature Config管理 - 1. 提供依据Feature Flag生产Feature Config 配置界面的能力 -- 实验管理 - 1. 实验管理 - 1. 提供实验、分组、指标配置的管理功能 - 2. 提供实验实验运行结果查看 - 2. 实验分级管理 - 1. 提供互斥组管理 - 2. 互斥组中的实验流量之间互斥 - 3. 
实验执行阶段分组结果查询能力 -- Tracking 数据收集 - - 埋点事件上报收集 - -- 配置发布 - - 提供统一的通过featureKey 获取配置的结果,统一Feature Config 和实验配置下发结果 - - -```mermaid -graph LR - subgraph "AB Testing 平台" - AB测试核心能力 --> 配置管理 - AB测试核心能力 --> 实验管理 - AB测试核心能力 --> 数据收集 - AB测试核心能力 --> 配置发布 - 数据收集 --> 指标分析 - 配置管理 --> FeatureFlag - 配置管理 --> FeatureConfig - 实验管理 --> 实验配置 - 实验管理 --> 实验分级 - 配置发布 --> 实验结果 - 配置发布 --> FeatureConfig结果 - end - -``` - -## 业务模型 - -[业务模型](ABstract%EF%BC%88%E6%8F%92%E4%BB%B6%E5%8C%96AB%20Testing%E5%B9%B3%E5%8F%B0%EF%BC%89%20746b87acd94643ca871ec661b63f196c/%E4%B8%9A%E5%8A%A1%E6%A8%A1%E5%9E%8B%20d31846027b4f40ca99f6e76f897663a4.md) - -## 进程间架构 - -[进程间架构](ABstract%EF%BC%88%E6%8F%92%E4%BB%B6%E5%8C%96AB%20Testing%E5%B9%B3%E5%8F%B0%EF%BC%89%20746b87acd94643ca871ec661b63f196c/%E8%BF%9B%E7%A8%8B%E9%97%B4%E6%9E%B6%E6%9E%84%20d50744212b044d06a4b29fe931df391b.md) \ No newline at end of file diff --git a/spaces/AIGText/GlyphControl/ldm/modules/midas/midas/vit.py b/spaces/AIGText/GlyphControl/ldm/modules/midas/midas/vit.py deleted file mode 100644 index ea46b1be88b261b0dec04f3da0256f5f66f88a74..0000000000000000000000000000000000000000 --- a/spaces/AIGText/GlyphControl/ldm/modules/midas/midas/vit.py +++ /dev/null @@ -1,491 +0,0 @@ -import torch -import torch.nn as nn -import timm -import types -import math -import torch.nn.functional as F - - -class Slice(nn.Module): - def __init__(self, start_index=1): - super(Slice, self).__init__() - self.start_index = start_index - - def forward(self, x): - return x[:, self.start_index :] - - -class AddReadout(nn.Module): - def __init__(self, start_index=1): - super(AddReadout, self).__init__() - self.start_index = start_index - - def forward(self, x): - if self.start_index == 2: - readout = (x[:, 0] + x[:, 1]) / 2 - else: - readout = x[:, 0] - return x[:, self.start_index :] + readout.unsqueeze(1) - - -class ProjectReadout(nn.Module): - def __init__(self, in_features, start_index=1): - super(ProjectReadout, self).__init__() - self.start_index = start_index - - self.project = nn.Sequential(nn.Linear(2 * in_features, in_features), nn.GELU()) - - def forward(self, x): - readout = x[:, 0].unsqueeze(1).expand_as(x[:, self.start_index :]) - features = torch.cat((x[:, self.start_index :], readout), -1) - - return self.project(features) - - -class Transpose(nn.Module): - def __init__(self, dim0, dim1): - super(Transpose, self).__init__() - self.dim0 = dim0 - self.dim1 = dim1 - - def forward(self, x): - x = x.transpose(self.dim0, self.dim1) - return x - - -def forward_vit(pretrained, x): - b, c, h, w = x.shape - - glob = pretrained.model.forward_flex(x) - - layer_1 = pretrained.activations["1"] - layer_2 = pretrained.activations["2"] - layer_3 = pretrained.activations["3"] - layer_4 = pretrained.activations["4"] - - layer_1 = pretrained.act_postprocess1[0:2](layer_1) - layer_2 = pretrained.act_postprocess2[0:2](layer_2) - layer_3 = pretrained.act_postprocess3[0:2](layer_3) - layer_4 = pretrained.act_postprocess4[0:2](layer_4) - - unflatten = nn.Sequential( - nn.Unflatten( - 2, - torch.Size( - [ - h // pretrained.model.patch_size[1], - w // pretrained.model.patch_size[0], - ] - ), - ) - ) - - if layer_1.ndim == 3: - layer_1 = unflatten(layer_1) - if layer_2.ndim == 3: - layer_2 = unflatten(layer_2) - if layer_3.ndim == 3: - layer_3 = unflatten(layer_3) - if layer_4.ndim == 3: - layer_4 = unflatten(layer_4) - - layer_1 = pretrained.act_postprocess1[3 : len(pretrained.act_postprocess1)](layer_1) - layer_2 = pretrained.act_postprocess2[3 : len(pretrained.act_postprocess2)](layer_2) - layer_3 = 
pretrained.act_postprocess3[3 : len(pretrained.act_postprocess3)](layer_3) - layer_4 = pretrained.act_postprocess4[3 : len(pretrained.act_postprocess4)](layer_4) - - return layer_1, layer_2, layer_3, layer_4 - - -def _resize_pos_embed(self, posemb, gs_h, gs_w): - posemb_tok, posemb_grid = ( - posemb[:, : self.start_index], - posemb[0, self.start_index :], - ) - - gs_old = int(math.sqrt(len(posemb_grid))) - - posemb_grid = posemb_grid.reshape(1, gs_old, gs_old, -1).permute(0, 3, 1, 2) - posemb_grid = F.interpolate(posemb_grid, size=(gs_h, gs_w), mode="bilinear") - posemb_grid = posemb_grid.permute(0, 2, 3, 1).reshape(1, gs_h * gs_w, -1) - - posemb = torch.cat([posemb_tok, posemb_grid], dim=1) - - return posemb - - -def forward_flex(self, x): - b, c, h, w = x.shape - - pos_embed = self._resize_pos_embed( - self.pos_embed, h // self.patch_size[1], w // self.patch_size[0] - ) - - B = x.shape[0] - - if hasattr(self.patch_embed, "backbone"): - x = self.patch_embed.backbone(x) - if isinstance(x, (list, tuple)): - x = x[-1] # last feature if backbone outputs list/tuple of features - - x = self.patch_embed.proj(x).flatten(2).transpose(1, 2) - - if getattr(self, "dist_token", None) is not None: - cls_tokens = self.cls_token.expand( - B, -1, -1 - ) # stole cls_tokens impl from Phil Wang, thanks - dist_token = self.dist_token.expand(B, -1, -1) - x = torch.cat((cls_tokens, dist_token, x), dim=1) - else: - cls_tokens = self.cls_token.expand( - B, -1, -1 - ) # stole cls_tokens impl from Phil Wang, thanks - x = torch.cat((cls_tokens, x), dim=1) - - x = x + pos_embed - x = self.pos_drop(x) - - for blk in self.blocks: - x = blk(x) - - x = self.norm(x) - - return x - - -activations = {} - - -def get_activation(name): - def hook(model, input, output): - activations[name] = output - - return hook - - -def get_readout_oper(vit_features, features, use_readout, start_index=1): - if use_readout == "ignore": - readout_oper = [Slice(start_index)] * len(features) - elif use_readout == "add": - readout_oper = [AddReadout(start_index)] * len(features) - elif use_readout == "project": - readout_oper = [ - ProjectReadout(vit_features, start_index) for out_feat in features - ] - else: - assert ( - False - ), "wrong operation for readout token, use_readout can be 'ignore', 'add', or 'project'" - - return readout_oper - - -def _make_vit_b16_backbone( - model, - features=[96, 192, 384, 768], - size=[384, 384], - hooks=[2, 5, 8, 11], - vit_features=768, - use_readout="ignore", - start_index=1, -): - pretrained = nn.Module() - - pretrained.model = model - pretrained.model.blocks[hooks[0]].register_forward_hook(get_activation("1")) - pretrained.model.blocks[hooks[1]].register_forward_hook(get_activation("2")) - pretrained.model.blocks[hooks[2]].register_forward_hook(get_activation("3")) - pretrained.model.blocks[hooks[3]].register_forward_hook(get_activation("4")) - - pretrained.activations = activations - - readout_oper = get_readout_oper(vit_features, features, use_readout, start_index) - - # 32, 48, 136, 384 - pretrained.act_postprocess1 = nn.Sequential( - readout_oper[0], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[0], - kernel_size=1, - stride=1, - padding=0, - ), - nn.ConvTranspose2d( - in_channels=features[0], - out_channels=features[0], - kernel_size=4, - stride=4, - padding=0, - bias=True, - dilation=1, - groups=1, - ), - ) - - pretrained.act_postprocess2 = nn.Sequential( - readout_oper[1], - Transpose(1, 2), - 
nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[1], - kernel_size=1, - stride=1, - padding=0, - ), - nn.ConvTranspose2d( - in_channels=features[1], - out_channels=features[1], - kernel_size=2, - stride=2, - padding=0, - bias=True, - dilation=1, - groups=1, - ), - ) - - pretrained.act_postprocess3 = nn.Sequential( - readout_oper[2], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[2], - kernel_size=1, - stride=1, - padding=0, - ), - ) - - pretrained.act_postprocess4 = nn.Sequential( - readout_oper[3], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[3], - kernel_size=1, - stride=1, - padding=0, - ), - nn.Conv2d( - in_channels=features[3], - out_channels=features[3], - kernel_size=3, - stride=2, - padding=1, - ), - ) - - pretrained.model.start_index = start_index - pretrained.model.patch_size = [16, 16] - - # We inject this function into the VisionTransformer instances so that - # we can use it with interpolated position embeddings without modifying the library source. - pretrained.model.forward_flex = types.MethodType(forward_flex, pretrained.model) - pretrained.model._resize_pos_embed = types.MethodType( - _resize_pos_embed, pretrained.model - ) - - return pretrained - - -def _make_pretrained_vitl16_384(pretrained, use_readout="ignore", hooks=None): - model = timm.create_model("vit_large_patch16_384", pretrained=pretrained) - - hooks = [5, 11, 17, 23] if hooks == None else hooks - return _make_vit_b16_backbone( - model, - features=[256, 512, 1024, 1024], - hooks=hooks, - vit_features=1024, - use_readout=use_readout, - ) - - -def _make_pretrained_vitb16_384(pretrained, use_readout="ignore", hooks=None): - model = timm.create_model("vit_base_patch16_384", pretrained=pretrained) - - hooks = [2, 5, 8, 11] if hooks == None else hooks - return _make_vit_b16_backbone( - model, features=[96, 192, 384, 768], hooks=hooks, use_readout=use_readout - ) - - -def _make_pretrained_deitb16_384(pretrained, use_readout="ignore", hooks=None): - model = timm.create_model("vit_deit_base_patch16_384", pretrained=pretrained) - - hooks = [2, 5, 8, 11] if hooks == None else hooks - return _make_vit_b16_backbone( - model, features=[96, 192, 384, 768], hooks=hooks, use_readout=use_readout - ) - - -def _make_pretrained_deitb16_distil_384(pretrained, use_readout="ignore", hooks=None): - model = timm.create_model( - "vit_deit_base_distilled_patch16_384", pretrained=pretrained - ) - - hooks = [2, 5, 8, 11] if hooks == None else hooks - return _make_vit_b16_backbone( - model, - features=[96, 192, 384, 768], - hooks=hooks, - use_readout=use_readout, - start_index=2, - ) - - -def _make_vit_b_rn50_backbone( - model, - features=[256, 512, 768, 768], - size=[384, 384], - hooks=[0, 1, 8, 11], - vit_features=768, - use_vit_only=False, - use_readout="ignore", - start_index=1, -): - pretrained = nn.Module() - - pretrained.model = model - - if use_vit_only == True: - pretrained.model.blocks[hooks[0]].register_forward_hook(get_activation("1")) - pretrained.model.blocks[hooks[1]].register_forward_hook(get_activation("2")) - else: - pretrained.model.patch_embed.backbone.stages[0].register_forward_hook( - get_activation("1") - ) - pretrained.model.patch_embed.backbone.stages[1].register_forward_hook( - get_activation("2") - ) - - 
pretrained.model.blocks[hooks[2]].register_forward_hook(get_activation("3")) - pretrained.model.blocks[hooks[3]].register_forward_hook(get_activation("4")) - - pretrained.activations = activations - - readout_oper = get_readout_oper(vit_features, features, use_readout, start_index) - - if use_vit_only == True: - pretrained.act_postprocess1 = nn.Sequential( - readout_oper[0], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[0], - kernel_size=1, - stride=1, - padding=0, - ), - nn.ConvTranspose2d( - in_channels=features[0], - out_channels=features[0], - kernel_size=4, - stride=4, - padding=0, - bias=True, - dilation=1, - groups=1, - ), - ) - - pretrained.act_postprocess2 = nn.Sequential( - readout_oper[1], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[1], - kernel_size=1, - stride=1, - padding=0, - ), - nn.ConvTranspose2d( - in_channels=features[1], - out_channels=features[1], - kernel_size=2, - stride=2, - padding=0, - bias=True, - dilation=1, - groups=1, - ), - ) - else: - pretrained.act_postprocess1 = nn.Sequential( - nn.Identity(), nn.Identity(), nn.Identity() - ) - pretrained.act_postprocess2 = nn.Sequential( - nn.Identity(), nn.Identity(), nn.Identity() - ) - - pretrained.act_postprocess3 = nn.Sequential( - readout_oper[2], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[2], - kernel_size=1, - stride=1, - padding=0, - ), - ) - - pretrained.act_postprocess4 = nn.Sequential( - readout_oper[3], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[3], - kernel_size=1, - stride=1, - padding=0, - ), - nn.Conv2d( - in_channels=features[3], - out_channels=features[3], - kernel_size=3, - stride=2, - padding=1, - ), - ) - - pretrained.model.start_index = start_index - pretrained.model.patch_size = [16, 16] - - # We inject this function into the VisionTransformer instances so that - # we can use it with interpolated position embeddings without modifying the library source. - pretrained.model.forward_flex = types.MethodType(forward_flex, pretrained.model) - - # We inject this function into the VisionTransformer instances so that - # we can use it with interpolated position embeddings without modifying the library source. 
- pretrained.model._resize_pos_embed = types.MethodType( - _resize_pos_embed, pretrained.model - ) - - return pretrained - - -def _make_pretrained_vitb_rn50_384( - pretrained, use_readout="ignore", hooks=None, use_vit_only=False -): - model = timm.create_model("vit_base_resnet50_384", pretrained=pretrained) - - hooks = [0, 1, 8, 11] if hooks == None else hooks - return _make_vit_b_rn50_backbone( - model, - features=[256, 512, 768, 768], - size=[384, 384], - hooks=hooks, - use_vit_only=use_vit_only, - use_readout=use_readout, - ) diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/__init__.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/models/__init__.py b/spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/models/__init__.py deleted file mode 100644 index 92c7a48a200eba455044cd66e0d2c1efe6494f5c..0000000000000000000000000000000000000000 --- a/spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/models/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -# flake8: noqa -from .musicgen import MusicGen -from .lm import LMModel -from .encodec import CompressionModel, EncodecModel diff --git a/spaces/AchyuthGamer/OpenGPT/README.md b/spaces/AchyuthGamer/OpenGPT/README.md deleted file mode 100644 index ca7f7dffc2697555bdee0feecc31a2d092db3b3e..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT/README.md +++ /dev/null @@ -1,173 +0,0 @@ ---- -license: creativeml-openrail-m -title: OpenGPT -emoji: 🚀 -colorFrom: blue -colorTo: green -pinned: true -sdk: gradio -app_file: run.py ---- -# FreeGPT WebUI v2 - - -## GPT 3.5/4 - -NOT REQUIRE ANY API KEY ❌🔑 - -This project features a WebUI utilizing the [G4F API](https://github.com/xtekky/gpt4free).
    -Experience the power of ChatGPT with a user-friendly interface, enhanced jailbreaks, and completely free. - -**Important!** Don't be afraid to ask a question or write about any problem in the "issue". -We will solve a question or a problem together! 🌍 - -You can [buy me coffee](https://boosty.to/vadimboev/donate) here ☕🤎 - -## Known bugs 🚧 -- Stream mode not working properly. -- Operation timed out after 30000 milliseconds -- Web Access is not working. -Because the API that was used earlier in the "freegpt-webui" repository from ramonvc stopped working. This will be fixed later - -## Features v2 📢 -- Updated g4f -- Fixes to make everything work - -## Project Hosting and Demonstration 🌐🚀 -The project is hosted on multiple platforms to be tested and modified. -|Platform|Status|API Key|Free|Repo|Demo| -|--|--|--|--|--|--| -|[My site](http://vadimboev.ru:1338/)|![Active](https://img.shields.io/badge/Active-brightgreen)|◼️|☑️|[FreeGPT WebUI](https://github.com/VadimBoev/freegpt-webui-v2)|[Chat](http://vadimboev.ru:1338/) - -## Table of Contents -- [To-Do List](#to-do-list-%EF%B8%8F) -- [Getting Started](#getting-started-white_check_mark) - - [Cloning the Repository](#cloning-the-repository-inbox_tray) - - [Install Dependencies](#install-dependencies-wrench) -- [Running the Application](#running-the-application-rocket) -- [Docker](#docker-) - - [Prerequisites](#prerequisites) - - [Running the Docker](#running-the-docker) -- [Incorporated Projects](#incorporated-projects-busts_in_silhouette) - - [WebUI](#webui) - - [API FreeGPT](#api-g4f) -- [Star History](#star-history) -- [Legal Notice](#legal-notice) - -## Getting Started :white_check_mark: -To get started with this project, you'll need to clone the repository and have [Python](https://www.python.org/downloads/) installed on your system. -(Version 3.10+ is recommended. It also works for me on 3.9.2 in debian 11). - -### Cloning the Repository :inbox_tray: -Run the following command to clone the repository: - -``` -git clone https://github.com/VadimBoev/freegpt-webui-v2.git -``` - -### Install Dependencies :wrench: -Navigate to the project directory: -``` -cd freegpt-webui-v2 -``` - -Install the dependencies: -``` -pip install -r requirements.txt -``` -## Running the Application :rocket: -To run the application, run the following command: -``` -python run.py -``` - -Access the application in your browser using the URL: -``` -http://127.0.0.1:1338 -``` -or -``` -http://localhost:1338 -``` - -## Docker 🐳 -### Prerequisites -Before you start, make sure you have installed [Docker](https://www.docker.com/get-started) on your machine. - -### Running the Docker -Pull the Docker image from Docker Hub: -``` -docker pull VadimBoev/freegpt-webui-v2 -``` - -Run the application using Docker: -``` -docker run -p 1338:1338 VadimBoev/freegpt-webui-v2 -``` - -Access the application in your browser using the URL: -``` -http://127.0.0.1:1338 -``` -or -``` -http://localhost:1338 -``` - -When you're done using the application, stop the Docker containers using the following command: -``` -docker stop -``` - -## Incorporated Projects :busts_in_silhouette: -I highly recommend visiting and supporting both projects. - -### WebUI -The application interface was incorporated from the [chatgpt-clone](https://github.com/xtekky/chatgpt-clone) repository. - -### API G4F -The free GPT-4 API was incorporated from the [GPT4Free](https://github.com/xtekky/gpt4free) repository. - -
    - -## Star History -[![Star History Chart](https://api.star-history.com/svg?repos=VadimBoev/freegpt-webui-v2&type=Timeline)](https://star-history.com/#VadimBoev/freegpt-webui-v2&Timeline) - -
    - -## Legal Notice -This repository is _not_ associated with or endorsed by providers of the APIs contained in this GitHub repository. This -project is intended **for educational purposes only**. This is just a little personal project. Sites may contact me to -improve their security or request the removal of their site from this repository. - -Please note the following: - -1. **Disclaimer**: The APIs, services, and trademarks mentioned in this repository belong to their respective owners. - This project is _not_ claiming any right over them nor is it affiliated with or endorsed by any of the providers - mentioned. - -2. **Responsibility**: The author of this repository is _not_ responsible for any consequences, damages, or losses - arising from the use or misuse of this repository or the content provided by the third-party APIs. Users are solely - responsible for their actions and any repercussions that may follow. We strongly recommend the users to follow the - TOS of the each Website. - -3. **Educational Purposes Only**: This repository and its content are provided strictly for educational purposes. By - using the information and code provided, users acknowledge that they are using the APIs and models at their own risk - and agree to comply with any applicable laws and regulations. - -4. **Copyright**: All content in this repository, including but not limited to code, images, and documentation, is the - intellectual property of the repository author, unless otherwise stated. Unauthorized copying, distribution, or use - of any content in this repository is strictly prohibited without the express written consent of the repository - author. - -5. **Indemnification**: Users agree to indemnify, defend, and hold harmless the author of this repository from and - against any and all claims, liabilities, damages, losses, or expenses, including legal fees and costs, arising out of - or in any way connected with their use or misuse of this repository, its content, or related third-party APIs. - -6. **Updates and Changes**: The author reserves the right to modify, update, or remove any content, information, or - features in this repository at any time without prior notice. Users are responsible for regularly reviewing the - content and any changes made to this repository. - -By using this repository or any code related to it, you agree to these terms. The author is not responsible for any -copies, forks, or reuploads made by other users. This is the author's only account and repository. To prevent -impersonation or irresponsible actions, you may comply with the GNU GPL license this Repository uses. 
\ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sizer/GetChildrenHeight.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sizer/GetChildrenHeight.js deleted file mode 100644 index 8f60f6dc8620faf62af9192a494ca9b948adef3f..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sizer/GetChildrenHeight.js +++ /dev/null @@ -1,58 +0,0 @@ -var GetChildrenHeight = function (minimumMode) { - if (this.rexSizer.hidden) { - return 0; - } - - if (minimumMode === undefined) { - minimumMode = true; - } - - var result = 0; - var children = this.sizerChildren; - var child, padding, childHeight; - if (this.orientation === 0) { // x - // Get maximun height - for (var i = 0, cnt = children.length; i < cnt; i++) { - child = children[i]; - if (child.rexSizer.hidden) { - continue; - } - - padding = child.rexSizer.padding; - childHeight = this.getChildHeight(child) + padding.top + padding.bottom; - result = Math.max(childHeight, result); - } - } else { - // Get summation of minimum height - var itemSpace = this.space.item; - var isFirstChild = true; - for (var i = 0, cnt = children.length; i < cnt; i++) { - child = children[i]; - if (!child.hasOwnProperty('rexSizer')) { - continue; - } - if (child.rexSizer.hidden) { - continue; - } - - if ((child.rexSizer.proportion === 0) || minimumMode) { - childHeight = this.getChildHeight(child); - } else { - childHeight = 0; - } - padding = child.rexSizer.padding; - childHeight += (padding.top + padding.bottom); - - if (isFirstChild) { - isFirstChild = false; - } else { - childHeight += itemSpace; - } - - result += childHeight; - } - } - return result + this.space.top + this.space.bottom; -} - -export default GetChildrenHeight; \ No newline at end of file diff --git a/spaces/Akmyradov/TurkmenTTSweSTT/vits/attentions.py b/spaces/Akmyradov/TurkmenTTSweSTT/vits/attentions.py deleted file mode 100644 index 4e0b0c1fd48c962e21e1fbe60b23fc574927435c..0000000000000000000000000000000000000000 --- a/spaces/Akmyradov/TurkmenTTSweSTT/vits/attentions.py +++ /dev/null @@ -1,303 +0,0 @@ -import copy -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules -from modules import LayerNorm - - -class Encoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - 
x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init)) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - 
self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert t_s == t_t, "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert t_s == t_t, "Local attention is only available for self-attention." - block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s) - output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings) - output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]])) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. 
- x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]])) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]])) - x_flat = x.view([batch, heads, length**2 + length*(length -1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. - Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/Amrrs/DragGan-Inversion/PTI/utils/log_utils.py b/spaces/Amrrs/DragGan-Inversion/PTI/utils/log_utils.py deleted file mode 100644 index b29eae05f1c3ba34df60c074373b417c5420e836..0000000000000000000000000000000000000000 --- a/spaces/Amrrs/DragGan-Inversion/PTI/utils/log_utils.py +++ /dev/null @@ -1,79 +0,0 @@ -import numpy as np -from PIL import Image -import wandb -from PTI.configs import global_config -import torch -import matplotlib.pyplot as plt - - -def log_image_from_w(w, G, name): - img = get_image_from_w(w, G) - pillow_image = Image.fromarray(img) - wandb.log( - {f"{name}": [ - wandb.Image(pillow_image, caption=f"current inversion {name}")]}, - step=global_config.training_step) - - -def log_images_from_w(ws, G, names): - for name, w in 
zip(names, ws): - w = w.to(global_config.device) - log_image_from_w(w, G, name) - - -def plot_image_from_w(w, G): - img = get_image_from_w(w, G) - pillow_image = Image.fromarray(img) - plt.imshow(pillow_image) - plt.show() - - -def plot_image(img): - img = (img.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, 255).to(torch.uint8).detach().cpu().numpy() - pillow_image = Image.fromarray(img[0]) - plt.imshow(pillow_image) - plt.show() - - -def save_image(name, method_type, results_dir, image, run_id): - image.save(f'{results_dir}/{method_type}_{name}_{run_id}.jpg') - - -def save_w(w, G, name, method_type, results_dir): - im = get_image_from_w(w, G) - im = Image.fromarray(im, mode='RGB') - save_image(name, method_type, results_dir, im) - - -def save_concat_image(base_dir, image_latents, new_inv_image_latent, new_G, - old_G, - file_name, - extra_image=None): - images_to_save = [] - if extra_image is not None: - images_to_save.append(extra_image) - for latent in image_latents: - images_to_save.append(get_image_from_w(latent, old_G)) - images_to_save.append(get_image_from_w(new_inv_image_latent, new_G)) - result_image = create_alongside_images(images_to_save) - result_image.save(f'{base_dir}/{file_name}.jpg') - - -def save_single_image(base_dir, image_latent, G, file_name): - image_to_save = get_image_from_w(image_latent, G) - image_to_save = Image.fromarray(image_to_save, mode='RGB') - image_to_save.save(f'{base_dir}/{file_name}.jpg') - - -def create_alongside_images(images): - res = np.concatenate([np.array(image) for image in images], axis=1) - return Image.fromarray(res, mode='RGB') - - -def get_image_from_w(w, G): - if len(w.size()) <= 2: - w = w.unsqueeze(0) - with torch.no_grad(): - img = G.synthesis(w, noise_mode='const') - img = (img.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, 255).to(torch.uint8).detach().cpu().numpy() - return img[0] diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/ko/using-diffusers/depth2img.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/ko/using-diffusers/depth2img.md deleted file mode 100644 index b5602e3081daa6089265e002cc4df1cd8473a1e3..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/ko/using-diffusers/depth2img.md +++ /dev/null @@ -1,57 +0,0 @@ - - -# Text-guided depth-to-image 생성 - -[[open-in-colab]] - -[`StableDiffusionDepth2ImgPipeline`]을 사용하면 텍스트 프롬프트와 초기 이미지를 전달하여 새 이미지의 생성을 조절할 수 있습니다. 또한 이미지 구조를 보존하기 위해 `depth_map`을 전달할 수도 있습니다. `depth_map`이 제공되지 않으면 파이프라인은 통합된 [depth-estimation model](https://github.com/isl-org/MiDaS)을 통해 자동으로 깊이를 예측합니다. - - -먼저 [`StableDiffusionDepth2ImgPipeline`]의 인스턴스를 생성합니다: - -```python -import torch -import requests -from PIL import Image - -from diffusers import StableDiffusionDepth2ImgPipeline - -pipe = StableDiffusionDepth2ImgPipeline.from_pretrained( - "stabilityai/stable-diffusion-2-depth", - torch_dtype=torch.float16, -).to("cuda") -``` - -이제 프롬프트를 파이프라인에 전달합니다. 
특정 단어가 이미지 생성을 가이드 하는것을 방지하기 위해 `negative_prompt`를 전달할 수도 있습니다: - -```python -url = "http://images.cocodataset.org/val2017/000000039769.jpg" -init_image = Image.open(requests.get(url, stream=True).raw) -prompt = "two tigers" -n_prompt = "bad, deformed, ugly, bad anatomy" -image = pipe(prompt=prompt, image=init_image, negative_prompt=n_prompt, strength=0.7).images[0] -image -``` - -| Input | Output | -|---------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------| -| | | - -아래의 Spaces를 가지고 놀며 depth map이 있는 이미지와 없는 이미지의 차이가 있는지 확인해 보세요! - - diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/deepfloyd_if/test_if_img2img_superresolution.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/deepfloyd_if/test_if_img2img_superresolution.py deleted file mode 100644 index 500557108aed05b9b01020964f13b15fdb9abed0..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/deepfloyd_if/test_if_img2img_superresolution.py +++ /dev/null @@ -1,85 +0,0 @@ -# coding=utf-8 -# Copyright 2023 HuggingFace Inc. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import random -import unittest - -import torch - -from diffusers import IFImg2ImgSuperResolutionPipeline -from diffusers.utils import floats_tensor -from diffusers.utils.import_utils import is_xformers_available -from diffusers.utils.testing_utils import skip_mps, torch_device - -from ..pipeline_params import TEXT_GUIDED_IMAGE_VARIATION_BATCH_PARAMS, TEXT_GUIDED_IMAGE_VARIATION_PARAMS -from ..test_pipelines_common import PipelineTesterMixin -from . 
import IFPipelineTesterMixin - - -@skip_mps -class IFImg2ImgSuperResolutionPipelineFastTests(PipelineTesterMixin, IFPipelineTesterMixin, unittest.TestCase): - pipeline_class = IFImg2ImgSuperResolutionPipeline - params = TEXT_GUIDED_IMAGE_VARIATION_PARAMS - {"width", "height"} - batch_params = TEXT_GUIDED_IMAGE_VARIATION_BATCH_PARAMS.union({"original_image"}) - required_optional_params = PipelineTesterMixin.required_optional_params - {"latents"} - - def get_dummy_components(self): - return self._get_superresolution_dummy_components() - - def get_dummy_inputs(self, device, seed=0): - if str(device).startswith("mps"): - generator = torch.manual_seed(seed) - else: - generator = torch.Generator(device=device).manual_seed(seed) - - original_image = floats_tensor((1, 3, 32, 32), rng=random.Random(seed)).to(device) - image = floats_tensor((1, 3, 16, 16), rng=random.Random(seed)).to(device) - - inputs = { - "prompt": "A painting of a squirrel eating a burger", - "image": image, - "original_image": original_image, - "generator": generator, - "num_inference_steps": 2, - "output_type": "numpy", - } - - return inputs - - @unittest.skipIf( - torch_device != "cuda" or not is_xformers_available(), - reason="XFormers attention is only available with CUDA and `xformers` installed", - ) - def test_xformers_attention_forwardGenerator_pass(self): - self._test_xformers_attention_forwardGenerator_pass(expected_max_diff=1e-3) - - def test_save_load_optional_components(self): - self._test_save_load_optional_components() - - @unittest.skipIf(torch_device != "cuda", reason="float16 requires CUDA") - def test_save_load_float16(self): - # Due to non-determinism in save load of the hf-internal-testing/tiny-random-t5 text encoder - super().test_save_load_float16(expected_max_diff=1e-1) - - def test_attention_slicing_forward_pass(self): - self._test_attention_slicing_forward_pass(expected_max_diff=1e-2) - - def test_save_load_local(self): - self._test_save_load_local() - - def test_inference_batch_single_identical(self): - self._test_inference_batch_single_identical( - expected_max_diff=1e-2, - ) diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_unclip/__init__.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_unclip/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Andy1621/uniformer_image_detection/configs/fsaf/README.md b/spaces/Andy1621/uniformer_image_detection/configs/fsaf/README.md deleted file mode 100644 index 42468c8bf596d675d74e0c1d453e0641c5dc3b9c..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/fsaf/README.md +++ /dev/null @@ -1,45 +0,0 @@ -# Feature Selective Anchor-Free Module for Single-Shot Object Detection - -[ALGORITHM] - -FSAF is an anchor-free method published in CVPR2019 ([https://arxiv.org/pdf/1903.00621.pdf](https://arxiv.org/pdf/1903.00621.pdf)). -Actually it is equivalent to the anchor-based method with only one anchor at each feature map position in each FPN level. -And this is how we implemented it. -Only the anchor-free branch is released for its better compatibility with the current framework and less computational budget. - -In the original paper, feature maps within the central 0.2-0.5 area of a gt box are tagged as ignored. However, -it is empirically found that a hard threshold (0.2-0.2) gives a further gain on the performance. 
(see the table below)

## Main Results

### Results on R50/R101/X101-FPN

| Backbone | ignore range | ms-train | Lr schd | Train Mem (GB) | Train time (s/iter) | Inf time (fps) | box AP | Config | Download |
|:--------:|:------------:|:--------:|:-------:|:--------------:|:-------------------:|:--------------:|:-----------:|:------:|:--------:|
| R-50 | 0.2-0.5 | N | 1x | 3.15 | 0.43 | 12.3 | 36.0 (35.9) | | [model](http://download.openmmlab.com/mmdetection/v2.0/fsaf/fsaf_pscale0.2_nscale0.5_r50_fpn_1x_coco/fsaf_pscale0.2_nscale0.5_r50_fpn_1x_coco_20200715-b555b0e0.pth) \| [log](http://download.openmmlab.com/mmdetection/v2.0/fsaf/fsaf_pscale0.2_nscale0.5_r50_fpn_1x_coco/fsaf_pscale0.2_nscale0.5_r50_fpn_1x_coco_20200715_094657.log.json) |
| R-50 | 0.2-0.2 | N | 1x | 3.15 | 0.43 | 13.0 | 37.4 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/fsaf/fsaf_r50_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/fsaf/fsaf_r50_fpn_1x_coco/fsaf_r50_fpn_1x_coco-94ccc51f.pth) \| [log](http://download.openmmlab.com/mmdetection/v2.0/fsaf/fsaf_r50_fpn_1x_coco/fsaf_r50_fpn_1x_coco_20200428_072327.log.json) |
| R-101 | 0.2-0.2 | N | 1x | 5.08 | 0.58 | 10.8 | 39.3 (37.9) | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/fsaf/fsaf_r101_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/fsaf/fsaf_r101_fpn_1x_coco/fsaf_r101_fpn_1x_coco-9e71098f.pth) \| [log](http://download.openmmlab.com/mmdetection/v2.0/fsaf/fsaf_r101_fpn_1x_coco/fsaf_r101_fpn_1x_coco_20200428_160348.log.json) |
| X-101 | 0.2-0.2 | N | 1x | 9.38 | 1.23 | 5.6 | 42.4 (41.0) | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/fsaf/fsaf_x101_64x4d_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/fsaf/fsaf_x101_64x4d_fpn_1x_coco/fsaf_x101_64x4d_fpn_1x_coco-e3f6e6fd.pth) \| [log](http://download.openmmlab.com/mmdetection/v2.0/fsaf/fsaf_x101_64x4d_fpn_1x_coco/fsaf_x101_64x4d_fpn_1x_coco_20200428_160424.log.json) |

**Notes:**

- *1x means the model is trained for 12 epochs.*
- *AP values in the brackets represent those reported in the original paper.*
- *All results are obtained with a single model and single-scale test.*
- *X-101 backbone represents ResNext-101-64x4d.*
- *All pretrained backbones use pytorch style.*
- *All models are trained on 8 Titan-XP gpus and tested on a single gpu.*
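For reference, here is a minimal inference sketch using MMDetection's high-level API, assuming `mmdet` is installed and you have downloaded the R-50 (0.2-0.2) config and checkpoint linked in the table above; the local paths are placeholders:

```python
from mmdet.apis import init_detector, inference_detector

# Placeholder local paths -- use the config and checkpoint linked in the table.
config_file = 'configs/fsaf/fsaf_r50_fpn_1x_coco.py'
checkpoint_file = 'checkpoints/fsaf_r50_fpn_1x_coco-94ccc51f.pth'

# Build the FSAF detector and load its weights.
model = init_detector(config_file, checkpoint_file, device='cuda:0')

# Run single-image inference; the result holds per-class bounding boxes.
result = inference_detector(model, 'demo/demo.jpg')
```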
## Citations

BibTeX reference is as follows.

```latex
@inproceedings{zhu2019feature,
  title={Feature Selective Anchor-Free Module for Single-Shot Object Detection},
  author={Zhu, Chenchen and He, Yihui and Savvides, Marios},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={840--849},
  year={2019}
}
```
diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/iou_calculators/__init__.py b/spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/iou_calculators/__init__.py
deleted file mode 100644
index e71369a58a05fa25e6a754300875fdbb87cb26a5..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/iou_calculators/__init__.py
+++ /dev/null
@@ -1,4 +0,0 @@
from .builder import build_iou_calculator
from .iou2d_calculator import BboxOverlaps2D, bbox_overlaps

__all__ = ['build_iou_calculator', 'BboxOverlaps2D', 'bbox_overlaps']
diff --git a/spaces/ArkanDash/rvc-models/vc_infer_pipeline.py b/spaces/ArkanDash/rvc-models/vc_infer_pipeline.py
deleted file mode 100644
index c26d45068f9b6bf2b194b13c3c89f8a06347c124..0000000000000000000000000000000000000000
--- a/spaces/ArkanDash/rvc-models/vc_infer_pipeline.py
+++ /dev/null
@@ -1,306 +0,0 @@
import numpy as np, parselmouth, torch
from time import time as ttime
import torch.nn.functional as F
from config import x_pad, x_query, x_center, x_max
import pyworld, os, traceback, faiss
from scipy import signal

bh, ah = signal.butter(N=5, Wn=48, btype="high", fs=16000)


class VC(object):
    def __init__(self, tgt_sr, device, is_half):
        self.sr = 16000  # HuBERT input sample rate
        self.window = 160  # samples per frame
        self.t_pad = self.sr * x_pad  # padding (in samples) added before and after each chunk
        self.t_pad_tgt = tgt_sr * x_pad
        self.t_pad2 = self.t_pad * 2
        self.t_query = self.sr * x_query  # search window around each candidate cut point
        self.t_center = self.sr * x_center  # spacing between candidate cut points
        self.t_max = self.sr * x_max  # below this duration, no cut-point search is needed
        self.device = device
        self.is_half = is_half

    def get_f0(self, x, p_len, f0_up_key, f0_method, inp_f0=None):
        time_step = self.window / self.sr * 1000
        f0_min = 50
        f0_max = 1100
        f0_mel_min = 1127 * np.log(1 + f0_min / 700)
        f0_mel_max = 1127 * np.log(1 + f0_max / 700)
        if f0_method == "pm":
            f0 = (
                parselmouth.Sound(x, self.sr)
                .to_pitch_ac(
                    time_step=time_step / 1000,
                    voicing_threshold=0.6,
                    pitch_floor=f0_min,
                    pitch_ceiling=f0_max,
                )
                .selected_array["frequency"]
            )
            pad_size = (p_len - len(f0) + 1) // 2
            if pad_size > 0 or p_len - len(f0) - pad_size > 0:
                f0 = np.pad(
                    f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant"
                )
        elif f0_method == "harvest":
            f0, t = pyworld.harvest(
                x.astype(np.double),
                fs=self.sr,
                f0_ceil=f0_max,
                f0_floor=f0_min,
                frame_period=10,
            )
            f0 = pyworld.stonemask(x.astype(np.double), f0, t, self.sr)
            f0 = signal.medfilt(f0, 3)
        f0 *= pow(2, f0_up_key / 12)
        # with open("test.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()]))
        tf0 = self.sr // self.window  # f0 points per second
        if inp_f0 is not None:
            delta_t = np.round(
                (inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1
            ).astype("int16")
            replace_f0 = np.interp(
                list(range(delta_t)), inp_f0[:, 0] * 100, inp_f0[:, 1]
            )
            shape = f0[x_pad * tf0 : x_pad * tf0 + len(replace_f0)].shape[0]
            f0[x_pad * tf0 : x_pad * tf0 + len(replace_f0)] = replace_f0[:shape]
        # with open("test_opt.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()]))
        f0bak = f0.copy()
        f0_mel = 1127 * np.log(1 + f0 / 700)
        f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (
            f0_mel_max - f0_mel_min
        ) + 1
        f0_mel[f0_mel <= 1] = 1
        f0_mel[f0_mel > 255] = 255
        # np.int was removed in NumPy 1.24; the builtin int is the safe equivalent
        f0_coarse = np.rint(f0_mel).astype(int)
        return f0_coarse, f0bak

    def vc(
        self,
        model,
        net_g,
        sid,
        audio0,
        pitch,
        pitchf,
        times,
        index,
        big_npy,
        index_rate,
    ):
        feats = torch.from_numpy(audio0)
        if self.is_half:
            feats = feats.half()
        else:
            feats = feats.float()
        if feats.dim() == 2:  # double channels
            feats = feats.mean(-1)
        assert feats.dim() == 1, feats.dim()
        feats = feats.view(1, -1)
        padding_mask = torch.BoolTensor(feats.shape).to(self.device).fill_(False)

        inputs = {
            "source": feats.to(self.device),
            "padding_mask": padding_mask,
            "output_layer": 9,  # layer 9
        }
        t0 = ttime()
        with torch.no_grad():
            logits = model.extract_features(**inputs)
            feats = model.final_proj(logits[0])

        if index is not None and big_npy is not None and index_rate != 0:
            npy = feats[0].cpu().numpy()
            if self.is_half:
                npy = npy.astype("float32")
            _, I = index.search(npy, 1)
            npy = big_npy[I.squeeze()]
            if self.is_half:
                npy = npy.astype("float16")
            feats = (
                torch.from_numpy(npy).unsqueeze(0).to(self.device) * index_rate
                + (1 - index_rate) * feats
            )

        feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1)
        t1 = ttime()
        p_len = audio0.shape[0] // self.window
        if feats.shape[1] < p_len:
            p_len = feats.shape[1]
            if pitch is not None and pitchf is not None:
                pitch = pitch[:, :p_len]
                pitchf = pitchf[:, :p_len]
        p_len = torch.tensor([p_len], device=self.device).long()
        with torch.no_grad():
            if pitch is not None and pitchf is not None:
                audio1 = (
                    (net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0] * 32768)
                    .data.cpu()
                    .float()
                    .numpy()
                    .astype(np.int16)
                )
            else:
                audio1 = (
                    (net_g.infer(feats, p_len, sid)[0][0, 0] * 32768)
                    .data.cpu()
                    .float()
                    .numpy()
                    .astype(np.int16)
                )
        del feats, p_len, padding_mask
        if torch.cuda.is_available():
            torch.cuda.empty_cache()
        t2 = ttime()
        times[0] += t1 - t0
        times[2] += t2 - t1
        return audio1

    def pipeline(
        self,
        model,
        net_g,
        sid,
        audio,
        times,
        f0_up_key,
        f0_method,
        file_index,
        file_big_npy,
        index_rate,
        if_f0,
        f0_file=None,
    ):
        if (
            file_big_npy != ""
            and file_index != ""
            and os.path.exists(file_big_npy)
            and os.path.exists(file_index)
            and index_rate != 0
        ):
            try:
                index = faiss.read_index(file_index)
                big_npy = np.load(file_big_npy)
            except:
                traceback.print_exc()
                index = big_npy = None
        else:
            index = big_npy = None
            print("Feature retrieval library doesn't exist or ratio is 0")
        audio = signal.filtfilt(bh, ah, audio)
        audio_pad = np.pad(audio, (self.window // 2, self.window // 2), mode="reflect")
        opt_ts = []
        if audio_pad.shape[0] > self.t_max:
            audio_sum = np.zeros_like(audio)
            for i in range(self.window):
                audio_sum += audio_pad[i : i - self.window]
            for t in range(self.t_center, audio.shape[0], self.t_center):
                opt_ts.append(
                    t
                    - self.t_query
                    + np.where(
                        np.abs(audio_sum[t - self.t_query : t + self.t_query])
                        == np.abs(audio_sum[t - self.t_query : t + self.t_query]).min()
                    )[0][0]
                )
        s = 0
        audio_opt = []
        t = None
        t1 = ttime()
        audio_pad = np.pad(audio, (self.t_pad, self.t_pad), mode="reflect")
        p_len = audio_pad.shape[0] // self.window
        inp_f0 = None
        if hasattr(f0_file, "name"):
            try:
                with open(f0_file.name, "r") as f:
                    lines =
f.read().strip("\n").split("\n") - inp_f0 = [] - for line in lines: - inp_f0.append([float(i) for i in line.split(",")]) - inp_f0 = np.array(inp_f0, dtype="float32") - except: - traceback.print_exc() - sid = torch.tensor(sid, device=self.device).unsqueeze(0).long() - pitch, pitchf = None, None - if if_f0 == 1: - pitch, pitchf = self.get_f0(audio_pad, p_len, f0_up_key, f0_method, inp_f0) - pitch = pitch[:p_len] - pitchf = pitchf[:p_len] - pitch = torch.tensor(pitch, device=self.device).unsqueeze(0).long() - pitchf = torch.tensor(pitchf, device=self.device).unsqueeze(0).float() - t2 = ttime() - times[1] += t2 - t1 - for t in opt_ts: - t = t // self.window * self.window - if if_f0 == 1: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[s : t + self.t_pad2 + self.window], - pitch[:, s // self.window : (t + self.t_pad2) // self.window], - pitchf[:, s // self.window : (t + self.t_pad2) // self.window], - times, - index, - big_npy, - index_rate, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - else: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[s : t + self.t_pad2 + self.window], - None, - None, - times, - index, - big_npy, - index_rate, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - s = t - if if_f0 == 1: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[t:], - pitch[:, t // self.window :] if t is not None else pitch, - pitchf[:, t // self.window :] if t is not None else pitchf, - times, - index, - big_npy, - index_rate, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - else: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[t:], - None, - None, - times, - index, - big_npy, - index_rate, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - audio_opt = np.concatenate(audio_opt) - del pitch, pitchf, sid - if torch.cuda.is_available(): - torch.cuda.empty_cache() - return audio_opt diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/models/installation_report.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/models/installation_report.py deleted file mode 100644 index fef3757f222b67fc1f4de52d260c49d64b6a4e16..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/models/installation_report.py +++ /dev/null @@ -1,53 +0,0 @@ -from typing import Any, Dict, Sequence - -from pip._vendor.packaging.markers import default_environment - -from pip import __version__ -from pip._internal.req.req_install import InstallRequirement - - -class InstallationReport: - def __init__(self, install_requirements: Sequence[InstallRequirement]): - self._install_requirements = install_requirements - - @classmethod - def _install_req_to_dict(cls, ireq: InstallRequirement) -> Dict[str, Any]: - assert ireq.download_info, f"No download_info for {ireq}" - res = { - # PEP 610 json for the download URL. download_info.archive_info.hashes may - # be absent when the requirement was installed from the wheel cache - # and the cache entry was populated by an older pip version that did not - # record origin.json. - "download_info": ireq.download_info.to_dict(), - # is_direct is true if the requirement was a direct URL reference (which - # includes editable requirements), and false if the requirement was - # downloaded from a PEP 503 index or --find-links. 
- "is_direct": bool(ireq.original_link), - # requested is true if the requirement was specified by the user (aka - # top level requirement), and false if it was installed as a dependency of a - # requirement. https://peps.python.org/pep-0376/#requested - "requested": ireq.user_supplied, - # PEP 566 json encoding for metadata - # https://www.python.org/dev/peps/pep-0566/#json-compatible-metadata - "metadata": ireq.get_dist().metadata_dict, - } - if ireq.user_supplied and ireq.extras: - # For top level requirements, the list of requested extras, if any. - res["requested_extras"] = list(sorted(ireq.extras)) - return res - - def to_dict(self) -> Dict[str, Any]: - return { - "version": "1", - "pip_version": __version__, - "install": [ - self._install_req_to_dict(ireq) for ireq in self._install_requirements - ], - # https://peps.python.org/pep-0508/#environment-markers - # TODO: currently, the resolver uses the default environment to evaluate - # environment markers, so that is what we report here. In the future, it - # should also take into account options such as --python-version or - # --platform, perhaps under the form of an environment_override field? - # https://github.com/pypa/pip/issues/11198 - "environment": default_environment(), - } diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/sysconfig.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/sysconfig.py deleted file mode 100644 index 6a979f8c91fce3c8239b36ddb8764dc85dea41f2..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/sysconfig.py +++ /dev/null @@ -1,558 +0,0 @@ -"""Provide access to Python's configuration information. The specific -configuration variables available depend heavily on the platform and -configuration. The values may be retrieved using -get_config_var(name), and the list of variables is available via -get_config_vars().keys(). Additional convenience functions are also -available. - -Written by: Fred L. Drake, Jr. -Email: -""" - -import os -import re -import sys -import sysconfig -import pathlib - -from .errors import DistutilsPlatformError -from . import py39compat -from ._functools import pass_none - -IS_PYPY = '__pypy__' in sys.builtin_module_names - -# These are needed in a couple of spots, so just compute them once. -PREFIX = os.path.normpath(sys.prefix) -EXEC_PREFIX = os.path.normpath(sys.exec_prefix) -BASE_PREFIX = os.path.normpath(sys.base_prefix) -BASE_EXEC_PREFIX = os.path.normpath(sys.base_exec_prefix) - -# Path to the base directory of the project. On Windows the binary may -# live in project/PCbuild/win32 or project/PCbuild/amd64. -# set for cross builds -if "_PYTHON_PROJECT_BASE" in os.environ: - project_base = os.path.abspath(os.environ["_PYTHON_PROJECT_BASE"]) -else: - if sys.executable: - project_base = os.path.dirname(os.path.abspath(sys.executable)) - else: - # sys.executable can be empty if argv[0] has been changed and Python is - # unable to retrieve the real program name - project_base = os.getcwd() - - -def _is_python_source_dir(d): - """ - Return True if the target directory appears to point to an - un-installed Python. - """ - modules = pathlib.Path(d).joinpath('Modules') - return any(modules.joinpath(fn).is_file() for fn in ('Setup', 'Setup.local')) - - -_sys_home = getattr(sys, '_home', None) - - -def _is_parent(dir_a, dir_b): - """ - Return True if a is a parent of b. 
- """ - return os.path.normcase(dir_a).startswith(os.path.normcase(dir_b)) - - -if os.name == 'nt': - - @pass_none - def _fix_pcbuild(d): - # In a venv, sys._home will be inside BASE_PREFIX rather than PREFIX. - prefixes = PREFIX, BASE_PREFIX - matched = ( - prefix - for prefix in prefixes - if _is_parent(d, os.path.join(prefix, "PCbuild")) - ) - return next(matched, d) - - project_base = _fix_pcbuild(project_base) - _sys_home = _fix_pcbuild(_sys_home) - - -def _python_build(): - if _sys_home: - return _is_python_source_dir(_sys_home) - return _is_python_source_dir(project_base) - - -python_build = _python_build() - - -# Calculate the build qualifier flags if they are defined. Adding the flags -# to the include and lib directories only makes sense for an installation, not -# an in-source build. -build_flags = '' -try: - if not python_build: - build_flags = sys.abiflags -except AttributeError: - # It's not a configure-based build, so the sys module doesn't have - # this attribute, which is fine. - pass - - -def get_python_version(): - """Return a string containing the major and minor Python version, - leaving off the patchlevel. Sample return values could be '1.5' - or '2.2'. - """ - return '%d.%d' % sys.version_info[:2] - - -def get_python_inc(plat_specific=0, prefix=None): - """Return the directory containing installed Python header files. - - If 'plat_specific' is false (the default), this is the path to the - non-platform-specific header files, i.e. Python.h and so on; - otherwise, this is the path to platform-specific header files - (namely pyconfig.h). - - If 'prefix' is supplied, use it instead of sys.base_prefix or - sys.base_exec_prefix -- i.e., ignore 'plat_specific'. - """ - default_prefix = BASE_EXEC_PREFIX if plat_specific else BASE_PREFIX - resolved_prefix = prefix if prefix is not None else default_prefix - try: - getter = globals()[f'_get_python_inc_{os.name}'] - except KeyError: - raise DistutilsPlatformError( - "I don't know where Python installs its C header files " - "on platform '%s'" % os.name - ) - return getter(resolved_prefix, prefix, plat_specific) - - -def _get_python_inc_posix(prefix, spec_prefix, plat_specific): - if IS_PYPY and sys.version_info < (3, 8): - return os.path.join(prefix, 'include') - return ( - _get_python_inc_posix_python(plat_specific) - or _get_python_inc_from_config(plat_specific, spec_prefix) - or _get_python_inc_posix_prefix(prefix) - ) - - -def _get_python_inc_posix_python(plat_specific): - """ - Assume the executable is in the build directory. The - pyconfig.h file should be in the same directory. Since - the build directory may not be the source directory, - use "srcdir" from the makefile to find the "Include" - directory. - """ - if not python_build: - return - if plat_specific: - return _sys_home or project_base - incdir = os.path.join(get_config_var('srcdir'), 'Include') - return os.path.normpath(incdir) - - -def _get_python_inc_from_config(plat_specific, spec_prefix): - """ - If no prefix was explicitly specified, provide the include - directory from the config vars. Useful when - cross-compiling, since the config vars may come from - the host - platform Python installation, while the current Python - executable is from the build platform installation. 
- - >>> monkeypatch = getfixture('monkeypatch') - >>> gpifc = _get_python_inc_from_config - >>> monkeypatch.setitem(gpifc.__globals__, 'get_config_var', str.lower) - >>> gpifc(False, '/usr/bin/') - >>> gpifc(False, '') - >>> gpifc(False, None) - 'includepy' - >>> gpifc(True, None) - 'confincludepy' - """ - if spec_prefix is None: - return get_config_var('CONF' * plat_specific + 'INCLUDEPY') - - -def _get_python_inc_posix_prefix(prefix): - implementation = 'pypy' if IS_PYPY else 'python' - python_dir = implementation + get_python_version() + build_flags - return os.path.join(prefix, "include", python_dir) - - -def _get_python_inc_nt(prefix, spec_prefix, plat_specific): - if python_build: - # Include both the include and PC dir to ensure we can find - # pyconfig.h - return ( - os.path.join(prefix, "include") - + os.path.pathsep - + os.path.join(prefix, "PC") - ) - return os.path.join(prefix, "include") - - -# allow this behavior to be monkey-patched. Ref pypa/distutils#2. -def _posix_lib(standard_lib, libpython, early_prefix, prefix): - if standard_lib: - return libpython - else: - return os.path.join(libpython, "site-packages") - - -def get_python_lib(plat_specific=0, standard_lib=0, prefix=None): - """Return the directory containing the Python library (standard or - site additions). - - If 'plat_specific' is true, return the directory containing - platform-specific modules, i.e. any module from a non-pure-Python - module distribution; otherwise, return the platform-shared library - directory. If 'standard_lib' is true, return the directory - containing standard Python library modules; otherwise, return the - directory for site-specific modules. - - If 'prefix' is supplied, use it instead of sys.base_prefix or - sys.base_exec_prefix -- i.e., ignore 'plat_specific'. - """ - - if IS_PYPY and sys.version_info < (3, 8): - # PyPy-specific schema - if prefix is None: - prefix = PREFIX - if standard_lib: - return os.path.join(prefix, "lib-python", sys.version[0]) - return os.path.join(prefix, 'site-packages') - - early_prefix = prefix - - if prefix is None: - if standard_lib: - prefix = plat_specific and BASE_EXEC_PREFIX or BASE_PREFIX - else: - prefix = plat_specific and EXEC_PREFIX or PREFIX - - if os.name == "posix": - if plat_specific or standard_lib: - # Platform-specific modules (any module from a non-pure-Python - # module distribution) or standard Python library modules. - libdir = getattr(sys, "platlibdir", "lib") - else: - # Pure Python - libdir = "lib" - implementation = 'pypy' if IS_PYPY else 'python' - libpython = os.path.join(prefix, libdir, implementation + get_python_version()) - return _posix_lib(standard_lib, libpython, early_prefix, prefix) - elif os.name == "nt": - if standard_lib: - return os.path.join(prefix, "Lib") - else: - return os.path.join(prefix, "Lib", "site-packages") - else: - raise DistutilsPlatformError( - "I don't know where Python installs its library " - "on platform '%s'" % os.name - ) - - -def customize_compiler(compiler): # noqa: C901 - """Do any platform-specific customization of a CCompiler instance. - - Mainly needed on Unix, so we can plug in the information that - varies across Unices and is stored in Python's Makefile. - """ - if compiler.compiler_type == "unix": - if sys.platform == "darwin": - # Perform first-time customization of compiler-related - # config vars on OS X now that we know we need a compiler. - # This is primarily to support Pythons from binary - # installers. 
The kind and paths to build tools on - # the user system may vary significantly from the system - # that Python itself was built on. Also the user OS - # version and build tools may not support the same set - # of CPU architectures for universal builds. - global _config_vars - # Use get_config_var() to ensure _config_vars is initialized. - if not get_config_var('CUSTOMIZED_OSX_COMPILER'): - import _osx_support - - _osx_support.customize_compiler(_config_vars) - _config_vars['CUSTOMIZED_OSX_COMPILER'] = 'True' - - ( - cc, - cxx, - cflags, - ccshared, - ldshared, - shlib_suffix, - ar, - ar_flags, - ) = get_config_vars( - 'CC', - 'CXX', - 'CFLAGS', - 'CCSHARED', - 'LDSHARED', - 'SHLIB_SUFFIX', - 'AR', - 'ARFLAGS', - ) - - if 'CC' in os.environ: - newcc = os.environ['CC'] - if 'LDSHARED' not in os.environ and ldshared.startswith(cc): - # If CC is overridden, use that as the default - # command for LDSHARED as well - ldshared = newcc + ldshared[len(cc) :] - cc = newcc - if 'CXX' in os.environ: - cxx = os.environ['CXX'] - if 'LDSHARED' in os.environ: - ldshared = os.environ['LDSHARED'] - if 'CPP' in os.environ: - cpp = os.environ['CPP'] - else: - cpp = cc + " -E" # not always - if 'LDFLAGS' in os.environ: - ldshared = ldshared + ' ' + os.environ['LDFLAGS'] - if 'CFLAGS' in os.environ: - cflags = cflags + ' ' + os.environ['CFLAGS'] - ldshared = ldshared + ' ' + os.environ['CFLAGS'] - if 'CPPFLAGS' in os.environ: - cpp = cpp + ' ' + os.environ['CPPFLAGS'] - cflags = cflags + ' ' + os.environ['CPPFLAGS'] - ldshared = ldshared + ' ' + os.environ['CPPFLAGS'] - if 'AR' in os.environ: - ar = os.environ['AR'] - if 'ARFLAGS' in os.environ: - archiver = ar + ' ' + os.environ['ARFLAGS'] - else: - archiver = ar + ' ' + ar_flags - - cc_cmd = cc + ' ' + cflags - compiler.set_executables( - preprocessor=cpp, - compiler=cc_cmd, - compiler_so=cc_cmd + ' ' + ccshared, - compiler_cxx=cxx, - linker_so=ldshared, - linker_exe=cc, - archiver=archiver, - ) - - if 'RANLIB' in os.environ and compiler.executables.get('ranlib', None): - compiler.set_executables(ranlib=os.environ['RANLIB']) - - compiler.shared_lib_extension = shlib_suffix - - -def get_config_h_filename(): - """Return full pathname of installed pyconfig.h file.""" - if python_build: - if os.name == "nt": - inc_dir = os.path.join(_sys_home or project_base, "PC") - else: - inc_dir = _sys_home or project_base - return os.path.join(inc_dir, 'pyconfig.h') - else: - return sysconfig.get_config_h_filename() - - -def get_makefile_filename(): - """Return full pathname of installed Makefile from the Python build.""" - return sysconfig.get_makefile_filename() - - -def parse_config_h(fp, g=None): - """Parse a config.h-style file. - - A dictionary containing name/value pairs is returned. If an - optional dictionary is passed in as the second argument, it is - used instead of a new dictionary. - """ - return sysconfig.parse_config_h(fp, vars=g) - - -# Regexes needed for parsing Makefile (and similar syntaxes, -# like old-style Setup files). -_variable_rx = re.compile(r"([a-zA-Z][a-zA-Z0-9_]+)\s*=\s*(.*)") -_findvar1_rx = re.compile(r"\$\(([A-Za-z][A-Za-z0-9_]*)\)") -_findvar2_rx = re.compile(r"\${([A-Za-z][A-Za-z0-9_]*)}") - - -def parse_makefile(fn, g=None): # noqa: C901 - """Parse a Makefile-style file. - - A dictionary containing name/value pairs is returned. If an - optional dictionary is passed in as the second argument, it is - used instead of a new dictionary. 
- """ - from distutils.text_file import TextFile - - fp = TextFile( - fn, strip_comments=1, skip_blanks=1, join_lines=1, errors="surrogateescape" - ) - - if g is None: - g = {} - done = {} - notdone = {} - - while True: - line = fp.readline() - if line is None: # eof - break - m = _variable_rx.match(line) - if m: - n, v = m.group(1, 2) - v = v.strip() - # `$$' is a literal `$' in make - tmpv = v.replace('$$', '') - - if "$" in tmpv: - notdone[n] = v - else: - try: - v = int(v) - except ValueError: - # insert literal `$' - done[n] = v.replace('$$', '$') - else: - done[n] = v - - # Variables with a 'PY_' prefix in the makefile. These need to - # be made available without that prefix through sysconfig. - # Special care is needed to ensure that variable expansion works, even - # if the expansion uses the name without a prefix. - renamed_variables = ('CFLAGS', 'LDFLAGS', 'CPPFLAGS') - - # do variable interpolation here - while notdone: - for name in list(notdone): - value = notdone[name] - m = _findvar1_rx.search(value) or _findvar2_rx.search(value) - if m: - n = m.group(1) - found = True - if n in done: - item = str(done[n]) - elif n in notdone: - # get it on a subsequent round - found = False - elif n in os.environ: - # do it like make: fall back to environment - item = os.environ[n] - - elif n in renamed_variables: - if name.startswith('PY_') and name[3:] in renamed_variables: - item = "" - - elif 'PY_' + n in notdone: - found = False - - else: - item = str(done['PY_' + n]) - else: - done[n] = item = "" - if found: - after = value[m.end() :] - value = value[: m.start()] + item + after - if "$" in after: - notdone[name] = value - else: - try: - value = int(value) - except ValueError: - done[name] = value.strip() - else: - done[name] = value - del notdone[name] - - if name.startswith('PY_') and name[3:] in renamed_variables: - - name = name[3:] - if name not in done: - done[name] = value - else: - # bogus variable reference; just drop it since we can't deal - del notdone[name] - - fp.close() - - # strip spurious spaces - for k, v in done.items(): - if isinstance(v, str): - done[k] = v.strip() - - # save the results in the global dictionary - g.update(done) - return g - - -def expand_makefile_vars(s, vars): - """Expand Makefile-style variables -- "${foo}" or "$(foo)" -- in - 'string' according to 'vars' (a dictionary mapping variable names to - values). Variables not present in 'vars' are silently expanded to the - empty string. The variable values in 'vars' should not contain further - variable expansions; if 'vars' is the output of 'parse_makefile()', - you're fine. Returns a variable-expanded version of 's'. - """ - - # This algorithm does multiple expansion, so if vars['foo'] contains - # "${bar}", it will expand ${foo} to ${bar}, and then expand - # ${bar}... and so forth. This is fine as long as 'vars' comes from - # 'parse_makefile()', which takes care of such expansions eagerly, - # according to make's variable expansion semantics. - - while True: - m = _findvar1_rx.search(s) or _findvar2_rx.search(s) - if m: - (beg, end) = m.span() - s = s[0:beg] + vars.get(m.group(1)) + s[end:] - else: - break - return s - - -_config_vars = None - - -def get_config_vars(*args): - """With no arguments, return a dictionary of all configuration - variables relevant for the current platform. Generally this includes - everything needed to build extensions and install both pure modules and - extensions. 
On Unix, this means every variable defined in Python's - installed Makefile; on Windows it's a much smaller set. - - With arguments, return a list of values that result from looking up - each argument in the configuration variable dictionary. - """ - global _config_vars - if _config_vars is None: - _config_vars = sysconfig.get_config_vars().copy() - py39compat.add_ext_suffix(_config_vars) - - if args: - vals = [] - for name in args: - vals.append(_config_vars.get(name)) - return vals - else: - return _config_vars - - -def get_config_var(name): - """Return the value of a single variable using the dictionary - returned by 'get_config_vars()'. Equivalent to - get_config_vars().get(name) - """ - if name == 'SO': - import warnings - - warnings.warn('SO is deprecated, use EXT_SUFFIX', DeprecationWarning, 2) - return get_config_vars().get(name) diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/modeling/test_time_augmentation.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/modeling/test_time_augmentation.py deleted file mode 100644 index 373e6bf00a39c040ff1da49d6dcd39a54a0b69a7..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/modeling/test_time_augmentation.py +++ /dev/null @@ -1,307 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import copy -import numpy as np -from contextlib import contextmanager -from itertools import count -from typing import List -import torch -from fvcore.transforms import HFlipTransform, NoOpTransform -from torch import nn -from torch.nn.parallel import DistributedDataParallel - -from detectron2.config import configurable -from detectron2.data.detection_utils import read_image -from detectron2.data.transforms import ( - RandomFlip, - ResizeShortestEdge, - ResizeTransform, - apply_augmentations, -) -from detectron2.structures import Boxes, Instances - -from .meta_arch import GeneralizedRCNN -from .postprocessing import detector_postprocess -from .roi_heads.fast_rcnn import fast_rcnn_inference_single_image - -__all__ = ["DatasetMapperTTA", "GeneralizedRCNNWithTTA"] - - -class DatasetMapperTTA: - """ - Implement test-time augmentation for detection data. - It is a callable which takes a dataset dict from a detection dataset, - and returns a list of dataset dicts where the images - are augmented from the input image by the transformations defined in the config. - This is used for test-time augmentation. - """ - - @configurable - def __init__(self, min_sizes: List[int], max_size: int, flip: bool): - """ - Args: - min_sizes: list of short-edge size to resize the image to - max_size: maximum height or width of resized images - flip: whether to apply flipping augmentation - """ - self.min_sizes = min_sizes - self.max_size = max_size - self.flip = flip - - @classmethod - def from_config(cls, cfg): - return { - "min_sizes": cfg.TEST.AUG.MIN_SIZES, - "max_size": cfg.TEST.AUG.MAX_SIZE, - "flip": cfg.TEST.AUG.FLIP, - } - - def __call__(self, dataset_dict): - """ - Args: - dict: a dict in standard model input format. See tutorials for details. - - Returns: - list[dict]: - a list of dicts, which contain augmented version of the input image. - The total number of dicts is ``len(min_sizes) * (2 if flip else 1)``. - Each dict has field "transforms" which is a TransformList, - containing the transforms that are used to generate this image. 
- """ - numpy_image = dataset_dict["image"].permute(1, 2, 0).numpy() - shape = numpy_image.shape - orig_shape = (dataset_dict["height"], dataset_dict["width"]) - if shape[:2] != orig_shape: - # It transforms the "original" image in the dataset to the input image - pre_tfm = ResizeTransform(orig_shape[0], orig_shape[1], shape[0], shape[1]) - else: - pre_tfm = NoOpTransform() - - # Create all combinations of augmentations to use - aug_candidates = [] # each element is a list[Augmentation] - for min_size in self.min_sizes: - resize = ResizeShortestEdge(min_size, self.max_size) - aug_candidates.append([resize]) # resize only - if self.flip: - flip = RandomFlip(prob=1.0) - aug_candidates.append([resize, flip]) # resize + flip - - # Apply all the augmentations - ret = [] - for aug in aug_candidates: - new_image, tfms = apply_augmentations(aug, np.copy(numpy_image)) - torch_image = torch.from_numpy(np.ascontiguousarray(new_image.transpose(2, 0, 1))) - - dic = copy.deepcopy(dataset_dict) - dic["transforms"] = pre_tfm + tfms - dic["image"] = torch_image - ret.append(dic) - return ret - - -class GeneralizedRCNNWithTTA(nn.Module): - """ - A GeneralizedRCNN with test-time augmentation enabled. - Its :meth:`__call__` method has the same interface as :meth:`GeneralizedRCNN.forward`. - """ - - def __init__(self, cfg, model, tta_mapper=None, batch_size=3): - """ - Args: - cfg (CfgNode): - model (GeneralizedRCNN): a GeneralizedRCNN to apply TTA on. - tta_mapper (callable): takes a dataset dict and returns a list of - augmented versions of the dataset dict. Defaults to - `DatasetMapperTTA(cfg)`. - batch_size (int): batch the augmented images into this batch size for inference. - """ - super().__init__() - if isinstance(model, DistributedDataParallel): - model = model.module - assert isinstance( - model, GeneralizedRCNN - ), "TTA is only supported on GeneralizedRCNN. Got a model of type {}".format(type(model)) - self.cfg = cfg.clone() - assert not self.cfg.MODEL.KEYPOINT_ON, "TTA for keypoint is not supported yet" - assert ( - not self.cfg.MODEL.LOAD_PROPOSALS - ), "TTA for pre-computed proposals is not supported yet" - - self.model = model - - if tta_mapper is None: - tta_mapper = DatasetMapperTTA(cfg) - self.tta_mapper = tta_mapper - self.batch_size = batch_size - - @contextmanager - def _turn_off_roi_heads(self, attrs): - """ - Open a context where some heads in `model.roi_heads` are temporarily turned off. - Args: - attr (list[str]): the attribute in `model.roi_heads` which can be used - to turn off a specific head, e.g., "mask_on", "keypoint_on". - """ - roi_heads = self.model.roi_heads - old = {} - for attr in attrs: - try: - old[attr] = getattr(roi_heads, attr) - except AttributeError: - # The head may not be implemented in certain ROIHeads - pass - - if len(old.keys()) == 0: - yield - else: - for attr in old.keys(): - setattr(roi_heads, attr, False) - yield - for attr in old.keys(): - setattr(roi_heads, attr, old[attr]) - - def _batch_inference(self, batched_inputs, detected_instances=None): - """ - Execute inference on a list of inputs, - using batch size = self.batch_size, instead of the length of the list. 
- - Inputs & outputs have the same format as :meth:`GeneralizedRCNN.inference` - """ - if detected_instances is None: - detected_instances = [None] * len(batched_inputs) - - outputs = [] - inputs, instances = [], [] - for idx, input, instance in zip(count(), batched_inputs, detected_instances): - inputs.append(input) - instances.append(instance) - if len(inputs) == self.batch_size or idx == len(batched_inputs) - 1: - outputs.extend( - self.model.inference( - inputs, - instances if instances[0] is not None else None, - do_postprocess=False, - ) - ) - inputs, instances = [], [] - return outputs - - def __call__(self, batched_inputs): - """ - Same input/output format as :meth:`GeneralizedRCNN.forward` - """ - - def _maybe_read_image(dataset_dict): - ret = copy.copy(dataset_dict) - if "image" not in ret: - image = read_image(ret.pop("file_name"), self.model.input_format) - image = torch.from_numpy(np.ascontiguousarray(image.transpose(2, 0, 1))) # CHW - ret["image"] = image - if "height" not in ret and "width" not in ret: - ret["height"] = image.shape[1] - ret["width"] = image.shape[2] - return ret - - return [self._inference_one_image(_maybe_read_image(x)) for x in batched_inputs] - - def _inference_one_image(self, input): - """ - Args: - input (dict): one dataset dict with "image" field being a CHW tensor - - Returns: - dict: one output dict - """ - orig_shape = (input["height"], input["width"]) - augmented_inputs, tfms = self._get_augmented_inputs(input) - # Detect boxes from all augmented versions - with self._turn_off_roi_heads(["mask_on", "keypoint_on"]): - # temporarily disable roi heads - all_boxes, all_scores, all_classes = self._get_augmented_boxes(augmented_inputs, tfms) - # merge all detected boxes to obtain final predictions for boxes - merged_instances = self._merge_detections(all_boxes, all_scores, all_classes, orig_shape) - - if self.cfg.MODEL.MASK_ON: - # Use the detected boxes to obtain masks - augmented_instances = self._rescale_detected_boxes( - augmented_inputs, merged_instances, tfms - ) - # run forward on the detected boxes - outputs = self._batch_inference(augmented_inputs, augmented_instances) - # Delete now useless variables to avoid being out of memory - del augmented_inputs, augmented_instances - # average the predictions - merged_instances.pred_masks = self._reduce_pred_masks(outputs, tfms) - merged_instances = detector_postprocess(merged_instances, *orig_shape) - return {"instances": merged_instances} - else: - return {"instances": merged_instances} - - def _get_augmented_inputs(self, input): - augmented_inputs = self.tta_mapper(input) - tfms = [x.pop("transforms") for x in augmented_inputs] - return augmented_inputs, tfms - - def _get_augmented_boxes(self, augmented_inputs, tfms): - # 1: forward with all augmented images - outputs = self._batch_inference(augmented_inputs) - # 2: union the results - all_boxes = [] - all_scores = [] - all_classes = [] - for output, tfm in zip(outputs, tfms): - # Need to inverse the transforms on boxes, to obtain results on original image - pred_boxes = output.pred_boxes.tensor - original_pred_boxes = tfm.inverse().apply_box(pred_boxes.cpu().numpy()) - all_boxes.append(torch.from_numpy(original_pred_boxes).to(pred_boxes.device)) - - all_scores.extend(output.scores) - all_classes.extend(output.pred_classes) - all_boxes = torch.cat(all_boxes, dim=0) - return all_boxes, all_scores, all_classes - - def _merge_detections(self, all_boxes, all_scores, all_classes, shape_hw): - # select from the union of all results - num_boxes = 
len(all_boxes) - num_classes = self.cfg.MODEL.ROI_HEADS.NUM_CLASSES - # +1 because fast_rcnn_inference expects background scores as well - all_scores_2d = torch.zeros(num_boxes, num_classes + 1, device=all_boxes.device) - for idx, cls, score in zip(count(), all_classes, all_scores): - all_scores_2d[idx, cls] = score - - merged_instances, _ = fast_rcnn_inference_single_image( - all_boxes, - all_scores_2d, - shape_hw, - 1e-8, - self.cfg.MODEL.ROI_HEADS.NMS_THRESH_TEST, - self.cfg.TEST.DETECTIONS_PER_IMAGE, - ) - - return merged_instances - - def _rescale_detected_boxes(self, augmented_inputs, merged_instances, tfms): - augmented_instances = [] - for input, tfm in zip(augmented_inputs, tfms): - # Transform the target box to the augmented image's coordinate space - pred_boxes = merged_instances.pred_boxes.tensor.cpu().numpy() - pred_boxes = torch.from_numpy(tfm.apply_box(pred_boxes)) - - aug_instances = Instances( - image_size=input["image"].shape[1:3], - pred_boxes=Boxes(pred_boxes), - pred_classes=merged_instances.pred_classes, - scores=merged_instances.scores, - ) - augmented_instances.append(aug_instances) - return augmented_instances - - def _reduce_pred_masks(self, outputs, tfms): - # Should apply inverse transforms on masks. - # We assume only resize & flip are used. pred_masks is a scale-invariant - # representation, so we handle flip specially - for output, tfm in zip(outputs, tfms): - if any(isinstance(t, HFlipTransform) for t in tfm.transforms): - output.pred_masks = output.pred_masks.flip(dims=[3]) - all_pred_masks = torch.stack([o.pred_masks for o in outputs], dim=0) - avg_pred_masks = torch.mean(all_pred_masks, dim=0) - return avg_pred_masks diff --git a/spaces/Banbri/zcvzcv/src/app/interface/page/index.tsx b/spaces/Banbri/zcvzcv/src/app/interface/page/index.tsx deleted file mode 100644 index 545ecb4af98a3f4bac9b964f1d4bae32bd62294a..0000000000000000000000000000000000000000 --- a/spaces/Banbri/zcvzcv/src/app/interface/page/index.tsx +++ /dev/null @@ -1,55 +0,0 @@ -import { allLayoutAspectRatios, allLayouts } from "@/app/layouts" -import { useStore } from "@/app/store" -import { cn } from "@/lib/utils" -import { useEffect, useRef } from "react" - -export function Page({ page }: { page: number }) { - const zoomLevel = useStore(state => state.zoomLevel) - const layouts = useStore(state => state.layouts) - // const prompt = useStore(state => state.prompt) - - const LayoutElement = (allLayouts as any)[layouts[page]] - const aspectRatio = ((allLayoutAspectRatios as any)[layouts[page]] as string) || "aspect-[250/297]" - /* - const [canLoad, setCanLoad] = useState(false) - useEffect(() => { - if (prompt?.length) { - setCanLoad(false) - setTimeout(() => { - setCanLoad(true) - }, page * 4000) - } - }, [prompt]) - */ - - const setPage = useStore(state => state.setPage) - const pageRef = useRef(null) - - useEffect(() => { - const element = pageRef.current - if (!element) { return } - setPage(element) - }, [pageRef.current]) - - return ( -
    100 ? `100`}` - }} - > - -
    - ) -} \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Descargar Apk X Aire Behringer.md b/spaces/Benson/text-generation/Examples/Descargar Apk X Aire Behringer.md deleted file mode 100644 index 3f3545766742f899fbddc5b5af47bd1e516ee6ec..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Descargar Apk X Aire Behringer.md +++ /dev/null @@ -1,61 +0,0 @@ - -

How to Download APK X AIR Behringer for Android Devices

If you are a musician, sound engineer, or live performer who uses a BEHRINGER X AIR digital mixer, you may want to download APK X AIR Behringer for your Android device. This app lets you control all of your mixer's mixing, processing, and effects functions from your tablet or smartphone. In this article, I will show you how to download, install, and use this app, along with some tips and tricks to get the most out of it.

Benefits of APK X AIR Behringer

APK X AIR Behringer is a free app that offers complete control for the X18, XR18, XR16, and XR12 mixers. The user interface is configurable for simplified access or expert-level editing (S/E), for mixing 18 input channels to 12 buses. Control is also provided for the four internal stereo effects processors, all of which feature the acclaimed BEHRINGER X32 audio processing engine.

descargar apk x aire behringer

Download: https://bltlly.com/2v6LG8

The app gives you the mobility to go wherever you need to get the most out of your system, letting you adjust the house mix from any seat or fine-tune monitor mixes from the stage. Since all BEHRINGER X AIR mixers feature internal access points, setting up the app could not be simpler: just select the X AIR network and connect your Android device to it. When you open the app, your X AIR mixer will show up as a controllable device, and you can even lock your Android device to that specific X AIR mixer. You can also run the app in demo mode without connecting to your mixer.

No additional hardware is required, so the app is the ideal solution for hassle-free remote mixing. Whether you use it for live shows, studio recordings, rehearsals, podcasts, or webinars, APK X AIR Behringer can help you achieve professional sound quality with ease and convenience.

Requirements for APK X AIR Behringer

To use APK X AIR Behringer, you need the following:

• An Android tablet or smartphone
• A BEHRINGER X AIR digital mixer (X18, XR18, XR16, or XR12) with firmware version 1.15 or higher
• A Wi-Fi network connecting your device and your mixer
• An Internet connection to download the app

Steps to Download APK X AIR Behringer

Here are the steps to download APK X AIR Behringer for your Android device:

Step 1: Find the official download link for APK X AIR Behringer

The app is not available on the Google Play Store, so you need to find the official download link on the BEHRINGER website. You can scan the QR code on the product page or go to this URL: [https://www.behringer.com/behringer/product?modelCode=P0BI8]

Step 2: Enable unknown sources in your device settings

Since you are downloading the app from a third-party source, you need to enable unknown sources in your device settings. This lets you install apps that do not come from the Google Play Store. To do this, go to Settings > Security > Unknown sources and turn it on. You may see a warning saying that installing from unknown sources can harm your device, but you can ignore it as long as you trust the source of the app.

Step 3: Download and install the APK file

Once you have enabled unknown sources, you can download the APK file from the link you found in step 1. The file is about 5.6 MB and should take a few seconds to download, depending on your Internet speed. When the download finishes, open the file and follow the instructions to install the app on your device. You may need to grant the app some permissions, such as access to your Wi-Fi network and storage.
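If you prefer to fetch the file on a computer first (for example, to archive it or copy it to several devices), the download step can also be scripted. This is a hypothetical sketch, not from the original article: the URL and output file name are placeholders, and the size check simply reflects the roughly 5.6 MB figure quoted above:

```python
import requests

# Placeholder URL -- use the official link from step 1 instead.
APK_URL = "https://example.com/x-air-for-android.apk"

response = requests.get(APK_URL, timeout=60)
response.raise_for_status()

with open("x-air-for-android.apk", "wb") as f:
    f.write(response.content)

# Sanity check against the ~5.6 MB size quoted in the article.
size_mb = len(response.content) / (1024 * 1024)
print(f"Downloaded {size_mb:.1f} MB")
```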


Step 4: Connect your device to your X AIR mixer via Wi-Fi

Your X AIR mixer broadcasts its own Wi-Fi access point. Open the Wi-Fi settings on your Android device, select the network whose name matches your mixer, and connect to it, as described in the benefits section above.

Step 5: Launch the app and enjoy its features

Now that you have installed and connected the app, you can launch it and start controlling your mixer remotely. You will see a list of available devices on the app's home screen. Tap the one that matches your mixer's network name and model number. You will then see a confirmation message that says "Connected". You can now access all of your mixer's mixing, processing, and effects functions from your device. You can also switch between S/E mode, the RTA overlay, single bus send mode, the AutoMixing function, and internal snapshots from the app menu.
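Under the hood, X AIR mixers are controlled over the network via OSC (Open Sound Control), a protocol that is also open to third-party tools. The following is a minimal illustrative sketch, not from the original article: it assumes the community-documented X AIR OSC conventions (UDP port 10024, fader values as floats between 0.0 and 1.0), uses the third-party python-osc package, and the mixer IP address shown is a placeholder:

```python
from pythonosc.udp_client import SimpleUDPClient

# Placeholder address -- check your unit's actual IP. Port 10024 is the
# X AIR OSC port per the community protocol documentation.
client = SimpleUDPClient("192.168.1.1", 10024)

# Set the channel 1 fader (floats in 0.0..1.0).
client.send_message("/ch/01/mix/fader", 0.7)

# Mute channel 2 (0 = muted, 1 = on, per the documented /mix/on semantics).
client.send_message("/ch/02/mix/on", 0)
```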


Tips and Tricks for Using APK X AIR Behringer

To get the most out of APK X AIR Behringer, here are some tips and tricks you can try:

Tip 1: Use S/E mode to switch between simplified and expert-level editing

The app has two operating modes: simplified (S) and expert (E). S mode provides a streamlined interface that lets you adjust only the most essential parameters of each channel, such as gain, mute, solo, pan, EQ, dynamics, and send levels. E mode provides a full-featured interface that gives you access to every parameter of each channel, such as preamp settings, gate settings, compressor settings, limiter settings, delay settings, and so on. You can switch between S and E modes by tapping the S/E button in the top left corner of the app.

Tip 2: Use the RTA overlay to fine-tune EQ settings

The app can overlay a real-time analyzer (RTA) on a channel's EQ display, so you can see the signal's frequency content while you adjust the EQ bands and quickly spot problem frequencies.

Tip 3: Use single bus send mode for personal monitoring

The app has a single bus send mode that lets you control only one bus send level per channel at a time. This is useful for personal monitoring, where each musician or performer wants to adjust their own monitor mix without affecting the others. To use this mode, tap the single bus send button in the top right corner of the app and select a bus from the list. You will then see a blue fader representing that bus's send level for each channel. You can drag it up or down to adjust the send level, and you can tap the mute or solo buttons to mute or solo the bus.

Tip 4: Use the AutoMixing function for conferences or panel discussions

The app has an AutoMixing function that automatically adjusts the gain of multiple microphones in real time to reduce background noise and feedback. This is useful for conferences or panel discussions where several speakers are talking at the same time. To use this function, tap the AutoMixing button in the top right corner of the app and select an input channel from the list. You will then see a green indicator showing that channel's AutoMixing status. You can also adjust the threshold, weight, and target parameters of the AutoMixing algorithm.

Tip 5: Use internal snapshots to save and recall settings

The app has an internal snapshot feature that lets you save and recall your mixer settings at any time. This is useful for switching between different scenes or presets quickly and easily. To use this feature, tap the snapshot button in the top right corner of the app and select a snapshot slot from the list. You can then name, save, load, or delete your snapshot. You can also use the lock function to prevent accidental changes to your snapshot.

Conclusion

APK X AIR Behringer turns your Android tablet or smartphone into a full remote control for your X AIR digital mixer, covering everything from downloading and installing the app to mixing, monitoring, and saving your settings.

If you have any questions or comments about APK X AIR Behringer, feel free to contact the BEHRINGER customer support team or visit their website for more information. You can also check out their YouTube channel for tutorials and demonstrations of their products.

Thank you for reading this article; I hope you found it helpful. If you did, please share it with friends and colleagues who might be interested in APK X AIR Behringer. And don't forget to download the app and try it for yourself!

Frequently Asked Questions

Here are some common questions and answers about APK X AIR Behringer:

1. Is APK X AIR Behringer compatible with other BEHRINGER products?

APK X AIR Behringer is designed specifically for the X18, XR18, XR16, and XR12 mixers. It is not compatible with other BEHRINGER products, such as the X32 or X AIR EDIT.

2. Can I use APK X AIR Behringer with multiple devices at the same time?

Yes, you can use APK X AIR Behringer with multiple devices at the same time, as long as they are connected to the same Wi-Fi network as your mixer. However, be careful not to make conflicting changes to the mixer settings from different devices, as this can cause unexpected results.

3. Can I use APK X AIR Behringer offline?

No, you cannot use APK X AIR Behringer offline. You need an Internet connection to download the app and a Wi-Fi connection to connect to your mixer.

4. How do I update APK X AIR Behringer?

To update APK X AIR Behringer, you need to check the BEHRINGER website for new versions and download them manually. The app does not have an automatic update feature.

5. How do I uninstall APK X AIR Behringer?

To uninstall APK X AIR Behringer, go to Settings > Apps on your device and find the app in the list. Then tap it and select uninstall.
    \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Descargar Azulejos De Piano 2 Apk Mod.md b/spaces/Benson/text-generation/Examples/Descargar Azulejos De Piano 2 Apk Mod.md deleted file mode 100644 index 7962394020c1ee673035918e15831c8d87570abb..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Descargar Azulejos De Piano 2 Apk Mod.md +++ /dev/null @@ -1,50 +0,0 @@ - -

      Download Piano Tiles 2 APK Mod: A Fun and Challenging Music Game
      

    -

      Do you love music and rhythm games? Do you want to test your reflexes and coordination skills? If so, you should try Piano Tiles 2, one of the most popular and addictive music games in the world. And if you want to enjoy the game with more features and benefits, you should download Piano Tiles 2 APK Mod, a modified version of the game that gives you unlimited access to all the songs, coins, diamonds, and more. In this article, we will tell you everything you need to know about Piano Tiles 2 and how to download and install Piano Tiles 2 APK Mod on your Android device.
      

    -

      What Is Piano Tiles 2?
      

    -

      Piano Tiles 2 is the sequel to the original Piano Tiles game, also known as Don't Tap the White Tile. It is a simple but challenging game where you have to tap the black tiles that appear on the screen in sync with the music. The game has hundreds of songs from different genres, such as classical, pop, rock, jazz, and more. You can also compete with other players from around the world and see who can score highest on the leaderboard.
      

    -

      download piano tiles 2 apk mod
      


    DOWNLOADhttps://bltlly.com/2v6LLo



    -

      Features of Piano Tiles 2
      

    -

      Piano Tiles 2 has many features that make it a fun and exciting game to play. Some of them are:
      

    -
      -
      • High-quality sound and graphics: The game has stunning graphics and smooth animations that create a realistic piano-playing experience. The sound quality is also excellent, with clear, crisp notes that match the songs perfectly.
      
    • -
      • Various songs and levels: The game has a large collection of songs from different genres and eras, such as Mozart, Beethoven, Chopin, Taylor Swift, Ed Sheeran, Bruno Mars, and more. You can choose between different difficulty levels, ranging from easy to master.
      
    • - -
      • Achievements and rewards: The game has many achievements you can unlock by completing certain tasks or reaching certain milestones. You can also earn coins and diamonds by playing or watching ads. You can use these coins to buy new songs, skins, boosters, and more.
      
    • -
    -

      How to Play Piano Tiles 2
      

    -

      The gameplay of Piano Tiles 2 is very simple and intuitive. All you have to do is tap the black tiles that appear on the screen in sync with the music. You have to avoid tapping the white tiles or missing the black tiles, otherwise you will lose the game. The faster you tap, the higher your score will be. You can also use boosters such as double coins, auto-play, or revive to help you in difficult situations.
      

    -

      Why Download Piano Tiles 2 APK Mod?
      

    -

      Piano Tiles 2 APK Mod is a modified version of the original game that gives you unlimited access to all of the game's features and benefits. Some of the advantages of downloading Piano Tiles 2 APK Mod are:
      

    -
      -
      • All songs unlocked: You can play any song you want without having to spend coins or diamonds or wait for them to be unlocked.
      
    • -
      • Unlimited coins and diamonds: you never run out of currency for songs, skins, or boosters. It also helps to pick songs you know and are familiar with, as this will help you tap the tiles more accurately and enjoy the music more.
      

      -

      Use Boosters Wisely
      

      -

      The game has several boosters that can help you in different ways. Some of them are:
      

      -
        -
      • Double coins: This booster doubles the number of coins you earn in a game. You can use them to buy more songs, skins, or other boosters.
      
      • -
      • Auto-play: This booster makes the game play itself for a few seconds. You can use it to rest your fingers or get past difficult tiles.
      
      • -
      • Revive: This booster lets you continue the game after making a mistake. You can use it to save your progress or improve your score.
      
      • -
      - -

      Practice and Improve Your Skills
      

      -

      The best way to get better at Piano Tiles 2 is to practice and improve your skills. You should play the game regularly and try different songs and levels. Pay attention to the rhythm and timing of the tiles, as well as the speed and direction of the sliding tiles. You should also try to tap the tiles with both hands, as this will improve your efficiency and coordination. The more you play, the more you will learn and master the game.
      

      -

      -

      Conclusion
      

      -

      Piano Tiles 2 is a fun and challenging music game that will test your reflexes and coordination skills. It has hundreds of songs from different genres and difficulty levels, as well as high-quality sound and graphics. You can also compete with other players from around the world and see who can play faster and better. If you want to enjoy the game with more features and benefits, you should download Piano Tiles 2 APK Mod, a modified version of the game that gives you unlimited access to all the songs, coins, diamonds, and more. You can download and install Piano Tiles 2 APK Mod on your Android device by following the simple steps we have provided in this article. We hope you have fun playing Piano Tiles 2!
      

      -

      Frequently Asked Questions
      

      -

      Here are some frequently asked questions about Piano Tiles 2 and Piano Tiles 2 APK Mod:
      

      -
        -
      1. Is Piano Tiles 2 free to play?
        -Yes, Piano Tiles 2 is free to play, but it has some in-app purchases that require real money. You can also watch ads to earn coins or diamonds.
      -
      2. Is Piano Tiles 2 APK Mod safe to use?
        -Yes, Piano Tiles 2 APK Mod is safe to use, as long as you download it from a trusted source. We have tested it on our devices and found no viruses or malware.
      -
      3. Can I play Piano Tiles 2 offline?
        - -
      4. Can I update Piano Tiles 2 APK Mod?
        -Yes, you can update Piano Tiles 2 APK Mod, but you may lose some of the modded features if you do. We recommend checking for updates from the same source where you downloaded the APK file.
      -
      5. Can I sync my progress between devices?
        -Yes, you can sync your progress between devices by signing in with your Facebook account. However, this may not work with Piano Tiles 2 APK Mod, as it can conflict with the original game data.
      -
      

      64aa2da5cf
      -
      -
      \ No newline at end of file diff --git a/spaces/BetterAPI/BetterChat_new/src/app.html b/spaces/BetterAPI/BetterChat_new/src/app.html deleted file mode 100644 index cbee75a1325edc1e113cb99a35bf491d216bb8a1..0000000000000000000000000000000000000000 --- a/spaces/BetterAPI/BetterChat_new/src/app.html +++ /dev/null @@ -1,45 +0,0 @@ - - - - - - - HuggingChat - - %sveltekit.head% - - -
      %sveltekit.body%
      - - - - - diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/boto3/resources/params.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/boto3/resources/params.py deleted file mode 100644 index 3c5c74b3c7c0521a12ce2911635985fe0ed64798..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/boto3/resources/params.py +++ /dev/null @@ -1,167 +0,0 @@ -# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"). You -# may not use this file except in compliance with the License. A copy of -# the License is located at -# -# https://aws.amazon.com/apache2.0/ -# -# or in the "license" file accompanying this file. This file is -# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF -# ANY KIND, either express or implied. See the License for the specific -# language governing permissions and limitations under the License. - -import re - -import jmespath -from botocore import xform_name - -from ..exceptions import ResourceLoadException - -INDEX_RE = re.compile(r'\[(.*)\]$') - - -def get_data_member(parent, path): - """ - Get a data member from a parent using a JMESPath search query, - loading the parent if required. If the parent cannot be loaded - and no data is present then an exception is raised. - - :type parent: ServiceResource - :param parent: The resource instance to which contains data we - are interested in. - :type path: string - :param path: The JMESPath expression to query - :raises ResourceLoadException: When no data is present and the - resource cannot be loaded. - :returns: The queried data or ``None``. - """ - # Ensure the parent has its data loaded, if possible. - if parent.meta.data is None: - if hasattr(parent, 'load'): - parent.load() - else: - raise ResourceLoadException( - f'{parent.__class__.__name__} has no load method!' - ) - - return jmespath.search(path, parent.meta.data) - - -def create_request_parameters(parent, request_model, params=None, index=None): - """ - Handle request parameters that can be filled in from identifiers, - resource data members or constants. - - By passing ``params``, you can invoke this method multiple times and - build up a parameter dict over time, which is particularly useful - for reverse JMESPath expressions that append to lists. - - :type parent: ServiceResource - :param parent: The resource instance to which this action is attached. - :type request_model: :py:class:`~boto3.resources.model.Request` - :param request_model: The action request model. - :type params: dict - :param params: If set, then add to this existing dict. It is both - edited in-place and returned. - :type index: int - :param index: The position of an item within a list - :rtype: dict - :return: Pre-filled parameters to be sent to the request operation. - """ - if params is None: - params = {} - - for param in request_model.params: - source = param.source - target = param.target - - if source == 'identifier': - # Resource identifier, e.g. queue.url - value = getattr(parent, xform_name(param.name)) - elif source == 'data': - # If this is a data member then it may incur a load - # action before returning the value. 
- value = get_data_member(parent, param.path) - elif source in ['string', 'integer', 'boolean']: - # These are hard-coded values in the definition - value = param.value - elif source == 'input': - # This is provided by the user, so ignore it here - continue - else: - raise NotImplementedError(f'Unsupported source type: {source}') - - build_param_structure(params, target, value, index) - - return params - - -def build_param_structure(params, target, value, index=None): - """ - This method provides a basic reverse JMESPath implementation that - lets you go from a JMESPath-like string to a possibly deeply nested - object. The ``params`` are mutated in-place, so subsequent calls - can modify the same element by its index. - - >>> build_param_structure(params, 'test[0]', 1) - >>> print(params) - {'test': [1]} - - >>> build_param_structure(params, 'foo.bar[0].baz', 'hello world') - >>> print(params) - {'test': [1], 'foo': {'bar': [{'baz': 'hello, world'}]}} - - """ - pos = params - parts = target.split('.') - - # First, split into parts like 'foo', 'bar[0]', 'baz' and process - # each piece. It can either be a list or a dict, depending on if - # an index like `[0]` is present. We detect this via a regular - # expression, and keep track of where we are in params via the - # pos variable, walking down to the last item. Once there, we - # set the value. - for i, part in enumerate(parts): - # Is it indexing an array? - result = INDEX_RE.search(part) - if result: - if result.group(1): - if result.group(1) == '*': - part = part[:-3] - else: - # We have an explicit index - index = int(result.group(1)) - part = part[: -len(str(index) + '[]')] - else: - # Index will be set after we know the proper part - # name and that it's a list instance. - index = None - part = part[:-2] - - if part not in pos or not isinstance(pos[part], list): - pos[part] = [] - - # This means we should append, e.g. 'foo[]' - if index is None: - index = len(pos[part]) - - while len(pos[part]) <= index: - # Assume it's a dict until we set the final value below - pos[part].append({}) - - # Last item? Set the value, otherwise set the new position - if i == len(parts) - 1: - pos[part][index] = value - else: - # The new pos is the *item* in the array, not the array! - pos = pos[part][index] - else: - if part not in pos: - pos[part] = {} - - # Last item? Set the value, otherwise set the new position - if i == len(parts) - 1: - pos[part] = value - else: - pos = pos[part] diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/jpcntx.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/jpcntx.py deleted file mode 100644 index 2f53bdda09e92da38e31cac1a6d415f4670137f7..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/jpcntx.py +++ /dev/null @@ -1,238 +0,0 @@ -######################## BEGIN LICENSE BLOCK ######################## -# The Original Code is Mozilla Communicator client code. -# -# The Initial Developer of the Original Code is -# Netscape Communications Corporation. -# Portions created by the Initial Developer are Copyright (C) 1998 -# the Initial Developer. All Rights Reserved. -# -# Contributor(s): -# Mark Pilgrim - port to Python -# -# This library is free software; you can redistribute it and/or -# modify it under the terms of the GNU Lesser General Public -# License as published by the Free Software Foundation; either -# version 2.1 of the License, or (at your option) any later version. 
-# -# This library is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -# Lesser General Public License for more details. -# -# You should have received a copy of the GNU Lesser General Public -# License along with this library; if not, write to the Free Software -# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA -# 02110-1301 USA -######################### END LICENSE BLOCK ######################### - -from typing import List, Tuple, Union - -# This is hiragana 2-char sequence table, the number in each cell represents its frequency category -# fmt: off -jp2_char_context = ( - (0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1), - (2, 4, 0, 4, 0, 3, 0, 4, 0, 3, 4, 4, 4, 2, 4, 3, 3, 4, 3, 2, 3, 3, 4, 2, 3, 3, 3, 2, 4, 1, 4, 3, 3, 1, 5, 4, 3, 4, 3, 4, 3, 5, 3, 0, 3, 5, 4, 2, 0, 3, 1, 0, 3, 3, 0, 3, 3, 0, 1, 1, 0, 4, 3, 0, 3, 3, 0, 4, 0, 2, 0, 3, 5, 5, 5, 5, 4, 0, 4, 1, 0, 3, 4), - (0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2), - (0, 4, 0, 5, 0, 5, 0, 4, 0, 4, 5, 4, 4, 3, 5, 3, 5, 1, 5, 3, 4, 3, 4, 4, 3, 4, 3, 3, 4, 3, 5, 4, 4, 3, 5, 5, 3, 5, 5, 5, 3, 5, 5, 3, 4, 5, 5, 3, 1, 3, 2, 0, 3, 4, 0, 4, 2, 0, 4, 2, 1, 5, 3, 2, 3, 5, 0, 4, 0, 2, 0, 5, 4, 4, 5, 4, 5, 0, 4, 0, 0, 4, 4), - (0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), - (0, 3, 0, 4, 0, 3, 0, 3, 0, 4, 5, 4, 3, 3, 3, 3, 4, 3, 5, 4, 4, 3, 5, 4, 4, 3, 4, 3, 4, 4, 4, 4, 5, 3, 4, 4, 3, 4, 5, 5, 4, 5, 5, 1, 4, 5, 4, 3, 0, 3, 3, 1, 3, 3, 0, 4, 4, 0, 3, 3, 1, 5, 3, 3, 3, 5, 0, 4, 0, 3, 0, 4, 4, 3, 4, 3, 3, 0, 4, 1, 1, 3, 4), - (0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), - (0, 4, 0, 3, 0, 3, 0, 4, 0, 3, 4, 4, 3, 2, 2, 1, 2, 1, 3, 1, 3, 3, 3, 3, 3, 4, 3, 1, 3, 3, 5, 3, 3, 0, 4, 3, 0, 5, 4, 3, 3, 5, 4, 4, 3, 4, 4, 5, 0, 1, 2, 0, 1, 2, 0, 2, 2, 0, 1, 0, 0, 5, 2, 2, 1, 4, 0, 3, 0, 1, 0, 4, 4, 3, 5, 4, 3, 0, 2, 1, 0, 4, 3), - (0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), - (0, 3, 0, 5, 0, 4, 0, 2, 1, 4, 4, 2, 4, 1, 4, 2, 4, 2, 4, 3, 3, 3, 4, 3, 3, 3, 3, 1, 4, 2, 3, 3, 3, 1, 4, 4, 1, 1, 1, 4, 3, 3, 2, 0, 2, 4, 3, 2, 0, 3, 3, 0, 3, 1, 1, 0, 0, 0, 3, 3, 0, 4, 2, 2, 3, 4, 0, 4, 0, 3, 0, 4, 4, 5, 3, 4, 4, 0, 3, 0, 0, 1, 4), - (1, 4, 0, 4, 0, 4, 0, 4, 0, 3, 5, 4, 4, 3, 4, 3, 5, 4, 3, 3, 4, 3, 5, 4, 4, 4, 4, 3, 4, 2, 4, 3, 3, 1, 5, 4, 3, 2, 4, 5, 4, 5, 5, 4, 4, 5, 4, 4, 0, 3, 2, 2, 3, 3, 0, 4, 3, 1, 3, 2, 1, 4, 3, 3, 4, 5, 0, 3, 0, 2, 0, 4, 5, 5, 4, 5, 4, 0, 4, 0, 0, 5, 4), - (0, 5, 0, 5, 0, 4, 0, 3, 0, 4, 4, 3, 4, 3, 
3, 3, 4, 0, 4, 4, 4, 3, 4, 3, 4, 3, 3, 1, 4, 2, 4, 3, 4, 0, 5, 4, 1, 4, 5, 4, 4, 5, 3, 2, 4, 3, 4, 3, 2, 4, 1, 3, 3, 3, 2, 3, 2, 0, 4, 3, 3, 4, 3, 3, 3, 4, 0, 4, 0, 3, 0, 4, 5, 4, 4, 4, 3, 0, 4, 1, 0, 1, 3), - (0, 3, 1, 4, 0, 3, 0, 2, 0, 3, 4, 4, 3, 1, 4, 2, 3, 3, 4, 3, 4, 3, 4, 3, 4, 4, 3, 2, 3, 1, 5, 4, 4, 1, 4, 4, 3, 5, 4, 4, 3, 5, 5, 4, 3, 4, 4, 3, 1, 2, 3, 1, 2, 2, 0, 3, 2, 0, 3, 1, 0, 5, 3, 3, 3, 4, 3, 3, 3, 3, 4, 4, 4, 4, 5, 4, 2, 0, 3, 3, 2, 4, 3), - (0, 2, 0, 3, 0, 1, 0, 1, 0, 0, 3, 2, 0, 0, 2, 0, 1, 0, 2, 1, 3, 3, 3, 1, 2, 3, 1, 0, 1, 0, 4, 2, 1, 1, 3, 3, 0, 4, 3, 3, 1, 4, 3, 3, 0, 3, 3, 2, 0, 0, 0, 0, 1, 0, 0, 2, 0, 0, 0, 0, 0, 4, 1, 0, 2, 3, 2, 2, 2, 1, 3, 3, 3, 4, 4, 3, 2, 0, 3, 1, 0, 3, 3), - (0, 4, 0, 4, 0, 3, 0, 3, 0, 4, 4, 4, 3, 3, 3, 3, 3, 3, 4, 3, 4, 2, 4, 3, 4, 3, 3, 2, 4, 3, 4, 5, 4, 1, 4, 5, 3, 5, 4, 5, 3, 5, 4, 0, 3, 5, 5, 3, 1, 3, 3, 2, 2, 3, 0, 3, 4, 1, 3, 3, 2, 4, 3, 3, 3, 4, 0, 4, 0, 3, 0, 4, 5, 4, 4, 5, 3, 0, 4, 1, 0, 3, 4), - (0, 2, 0, 3, 0, 3, 0, 0, 0, 2, 2, 2, 1, 0, 1, 0, 0, 0, 3, 0, 3, 0, 3, 0, 1, 3, 1, 0, 3, 1, 3, 3, 3, 1, 3, 3, 3, 0, 1, 3, 1, 3, 4, 0, 0, 3, 1, 1, 0, 3, 2, 0, 0, 0, 0, 1, 3, 0, 1, 0, 0, 3, 3, 2, 0, 3, 0, 0, 0, 0, 0, 3, 4, 3, 4, 3, 3, 0, 3, 0, 0, 2, 3), - (2, 3, 0, 3, 0, 2, 0, 1, 0, 3, 3, 4, 3, 1, 3, 1, 1, 1, 3, 1, 4, 3, 4, 3, 3, 3, 0, 0, 3, 1, 5, 4, 3, 1, 4, 3, 2, 5, 5, 4, 4, 4, 4, 3, 3, 4, 4, 4, 0, 2, 1, 1, 3, 2, 0, 1, 2, 0, 0, 1, 0, 4, 1, 3, 3, 3, 0, 3, 0, 1, 0, 4, 4, 4, 5, 5, 3, 0, 2, 0, 0, 4, 4), - (0, 2, 0, 1, 0, 3, 1, 3, 0, 2, 3, 3, 3, 0, 3, 1, 0, 0, 3, 0, 3, 2, 3, 1, 3, 2, 1, 1, 0, 0, 4, 2, 1, 0, 2, 3, 1, 4, 3, 2, 0, 4, 4, 3, 1, 3, 1, 3, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 4, 1, 1, 1, 2, 0, 3, 0, 0, 0, 3, 4, 2, 4, 3, 2, 0, 1, 0, 0, 3, 3), - (0, 1, 0, 4, 0, 5, 0, 4, 0, 2, 4, 4, 2, 3, 3, 2, 3, 3, 5, 3, 3, 3, 4, 3, 4, 2, 3, 0, 4, 3, 3, 3, 4, 1, 4, 3, 2, 1, 5, 5, 3, 4, 5, 1, 3, 5, 4, 2, 0, 3, 3, 0, 1, 3, 0, 4, 2, 0, 1, 3, 1, 4, 3, 3, 3, 3, 0, 3, 0, 1, 0, 3, 4, 4, 4, 5, 5, 0, 3, 0, 1, 4, 5), - (0, 2, 0, 3, 0, 3, 0, 0, 0, 2, 3, 1, 3, 0, 4, 0, 1, 1, 3, 0, 3, 4, 3, 2, 3, 1, 0, 3, 3, 2, 3, 1, 3, 0, 2, 3, 0, 2, 1, 4, 1, 2, 2, 0, 0, 3, 3, 0, 0, 2, 0, 0, 0, 1, 0, 0, 0, 0, 2, 2, 0, 3, 2, 1, 3, 3, 0, 2, 0, 2, 0, 0, 3, 3, 1, 2, 4, 0, 3, 0, 2, 2, 3), - (2, 4, 0, 5, 0, 4, 0, 4, 0, 2, 4, 4, 4, 3, 4, 3, 3, 3, 1, 2, 4, 3, 4, 3, 4, 4, 5, 0, 3, 3, 3, 3, 2, 0, 4, 3, 1, 4, 3, 4, 1, 4, 4, 3, 3, 4, 4, 3, 1, 2, 3, 0, 4, 2, 0, 4, 1, 0, 3, 3, 0, 4, 3, 3, 3, 4, 0, 4, 0, 2, 0, 3, 5, 3, 4, 5, 2, 0, 3, 0, 0, 4, 5), - (0, 3, 0, 4, 0, 1, 0, 1, 0, 1, 3, 2, 2, 1, 3, 0, 3, 0, 2, 0, 2, 0, 3, 0, 2, 0, 0, 0, 1, 0, 1, 1, 0, 0, 3, 1, 0, 0, 0, 4, 0, 3, 1, 0, 2, 1, 3, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 4, 2, 2, 3, 1, 0, 3, 0, 0, 0, 1, 4, 4, 4, 3, 0, 0, 4, 0, 0, 1, 4), - (1, 4, 1, 5, 0, 3, 0, 3, 0, 4, 5, 4, 4, 3, 5, 3, 3, 4, 4, 3, 4, 1, 3, 3, 3, 3, 2, 1, 4, 1, 5, 4, 3, 1, 4, 4, 3, 5, 4, 4, 3, 5, 4, 3, 3, 4, 4, 4, 0, 3, 3, 1, 2, 3, 0, 3, 1, 0, 3, 3, 0, 5, 4, 4, 4, 4, 4, 4, 3, 3, 5, 4, 4, 3, 3, 5, 4, 0, 3, 2, 0, 4, 4), - (0, 2, 0, 3, 0, 1, 0, 0, 0, 1, 3, 3, 3, 2, 4, 1, 3, 0, 3, 1, 3, 0, 2, 2, 1, 1, 0, 0, 2, 0, 4, 3, 1, 0, 4, 3, 0, 4, 4, 4, 1, 4, 3, 1, 1, 3, 3, 1, 0, 2, 0, 0, 1, 3, 0, 0, 0, 0, 2, 0, 0, 4, 3, 2, 4, 3, 5, 4, 3, 3, 3, 4, 3, 3, 4, 3, 3, 0, 2, 1, 0, 3, 3), - (0, 2, 0, 4, 0, 3, 0, 2, 0, 2, 5, 5, 3, 4, 4, 4, 4, 1, 4, 3, 3, 0, 4, 3, 4, 3, 1, 3, 3, 2, 4, 3, 0, 3, 4, 3, 0, 3, 4, 4, 2, 4, 4, 0, 4, 5, 3, 3, 2, 2, 1, 1, 1, 2, 0, 1, 5, 0, 3, 3, 2, 4, 3, 3, 3, 4, 0, 3, 0, 2, 0, 4, 4, 3, 5, 5, 0, 0, 3, 0, 2, 3, 3), - (0, 3, 0, 4, 0, 3, 0, 1, 0, 3, 4, 3, 3, 1, 3, 3, 3, 0, 
3, 1, 3, 0, 4, 3, 3, 1, 1, 0, 3, 0, 3, 3, 0, 0, 4, 4, 0, 1, 5, 4, 3, 3, 5, 0, 3, 3, 4, 3, 0, 2, 0, 1, 1, 1, 0, 1, 3, 0, 1, 2, 1, 3, 3, 2, 3, 3, 0, 3, 0, 1, 0, 1, 3, 3, 4, 4, 1, 0, 1, 2, 2, 1, 3), - (0, 1, 0, 4, 0, 4, 0, 3, 0, 1, 3, 3, 3, 2, 3, 1, 1, 0, 3, 0, 3, 3, 4, 3, 2, 4, 2, 0, 1, 0, 4, 3, 2, 0, 4, 3, 0, 5, 3, 3, 2, 4, 4, 4, 3, 3, 3, 4, 0, 1, 3, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 4, 2, 3, 3, 3, 0, 3, 0, 0, 0, 4, 4, 4, 5, 3, 2, 0, 3, 3, 0, 3, 5), - (0, 2, 0, 3, 0, 0, 0, 3, 0, 1, 3, 0, 2, 0, 0, 0, 1, 0, 3, 1, 1, 3, 3, 0, 0, 3, 0, 0, 3, 0, 2, 3, 1, 0, 3, 1, 0, 3, 3, 2, 0, 4, 2, 2, 0, 2, 0, 0, 0, 4, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 1, 2, 0, 1, 0, 1, 0, 0, 0, 1, 3, 1, 2, 0, 0, 0, 1, 0, 0, 1, 4), - (0, 3, 0, 3, 0, 5, 0, 1, 0, 2, 4, 3, 1, 3, 3, 2, 1, 1, 5, 2, 1, 0, 5, 1, 2, 0, 0, 0, 3, 3, 2, 2, 3, 2, 4, 3, 0, 0, 3, 3, 1, 3, 3, 0, 2, 5, 3, 4, 0, 3, 3, 0, 1, 2, 0, 2, 2, 0, 3, 2, 0, 2, 2, 3, 3, 3, 0, 2, 0, 1, 0, 3, 4, 4, 2, 5, 4, 0, 3, 0, 0, 3, 5), - (0, 3, 0, 3, 0, 3, 0, 1, 0, 3, 3, 3, 3, 0, 3, 0, 2, 0, 2, 1, 1, 0, 2, 0, 1, 0, 0, 0, 2, 1, 0, 0, 1, 0, 3, 2, 0, 0, 3, 3, 1, 2, 3, 1, 0, 3, 3, 0, 0, 1, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 2, 3, 1, 2, 3, 0, 3, 0, 1, 0, 3, 2, 1, 0, 4, 3, 0, 1, 1, 0, 3, 3), - (0, 4, 0, 5, 0, 3, 0, 3, 0, 4, 5, 5, 4, 3, 5, 3, 4, 3, 5, 3, 3, 2, 5, 3, 4, 4, 4, 3, 4, 3, 4, 5, 5, 3, 4, 4, 3, 4, 4, 5, 4, 4, 4, 3, 4, 5, 5, 4, 2, 3, 4, 2, 3, 4, 0, 3, 3, 1, 4, 3, 2, 4, 3, 3, 5, 5, 0, 3, 0, 3, 0, 5, 5, 5, 5, 4, 4, 0, 4, 0, 1, 4, 4), - (0, 4, 0, 4, 0, 3, 0, 3, 0, 3, 5, 4, 4, 2, 3, 2, 5, 1, 3, 2, 5, 1, 4, 2, 3, 2, 3, 3, 4, 3, 3, 3, 3, 2, 5, 4, 1, 3, 3, 5, 3, 4, 4, 0, 4, 4, 3, 1, 1, 3, 1, 0, 2, 3, 0, 2, 3, 0, 3, 0, 0, 4, 3, 1, 3, 4, 0, 3, 0, 2, 0, 4, 4, 4, 3, 4, 5, 0, 4, 0, 0, 3, 4), - (0, 3, 0, 3, 0, 3, 1, 2, 0, 3, 4, 4, 3, 3, 3, 0, 2, 2, 4, 3, 3, 1, 3, 3, 3, 1, 1, 0, 3, 1, 4, 3, 2, 3, 4, 4, 2, 4, 4, 4, 3, 4, 4, 3, 2, 4, 4, 3, 1, 3, 3, 1, 3, 3, 0, 4, 1, 0, 2, 2, 1, 4, 3, 2, 3, 3, 5, 4, 3, 3, 5, 4, 4, 3, 3, 0, 4, 0, 3, 2, 2, 4, 4), - (0, 2, 0, 1, 0, 0, 0, 0, 0, 1, 2, 1, 3, 0, 0, 0, 0, 0, 2, 0, 1, 2, 1, 0, 0, 1, 0, 0, 0, 0, 3, 0, 0, 1, 0, 1, 1, 3, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 2, 2, 0, 3, 4, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1), - (0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 4, 0, 4, 1, 4, 0, 3, 0, 4, 0, 3, 0, 4, 0, 3, 0, 3, 0, 4, 1, 5, 1, 4, 0, 0, 3, 0, 5, 0, 5, 2, 0, 1, 0, 0, 0, 2, 1, 4, 0, 1, 3, 0, 0, 3, 0, 0, 3, 1, 1, 4, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0), - (1, 4, 0, 5, 0, 3, 0, 2, 0, 3, 5, 4, 4, 3, 4, 3, 5, 3, 4, 3, 3, 0, 4, 3, 3, 3, 3, 3, 3, 2, 4, 4, 3, 1, 3, 4, 4, 5, 4, 4, 3, 4, 4, 1, 3, 5, 4, 3, 3, 3, 1, 2, 2, 3, 3, 1, 3, 1, 3, 3, 3, 5, 3, 3, 4, 5, 0, 3, 0, 3, 0, 3, 4, 3, 4, 4, 3, 0, 3, 0, 2, 4, 3), - (0, 1, 0, 4, 0, 0, 0, 0, 0, 1, 4, 0, 4, 1, 4, 2, 4, 0, 3, 0, 1, 0, 1, 0, 0, 0, 0, 0, 2, 0, 3, 1, 1, 1, 0, 3, 0, 0, 0, 1, 2, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 3, 0, 0, 0, 0, 3, 2, 0, 2, 2, 0, 1, 0, 0, 0, 2, 3, 2, 3, 3, 0, 0, 0, 0, 2, 1, 0), - (0, 5, 1, 5, 0, 3, 0, 3, 0, 5, 4, 4, 5, 1, 5, 3, 3, 0, 4, 3, 4, 3, 5, 3, 4, 3, 3, 2, 4, 3, 4, 3, 3, 0, 3, 3, 1, 4, 4, 3, 4, 4, 4, 3, 4, 5, 5, 3, 2, 3, 1, 1, 3, 3, 1, 3, 1, 1, 3, 3, 2, 4, 5, 3, 3, 5, 0, 4, 0, 3, 0, 4, 4, 3, 5, 3, 3, 0, 3, 4, 0, 4, 3), - (0, 5, 0, 5, 0, 3, 0, 2, 0, 4, 4, 3, 5, 2, 4, 3, 3, 3, 4, 4, 4, 3, 5, 3, 5, 3, 3, 1, 4, 0, 4, 3, 3, 0, 3, 3, 0, 4, 4, 4, 4, 5, 4, 3, 3, 5, 5, 3, 2, 3, 1, 2, 3, 2, 0, 1, 0, 0, 3, 2, 2, 4, 4, 3, 1, 5, 0, 4, 0, 3, 0, 4, 3, 1, 3, 2, 1, 0, 3, 3, 0, 3, 3), - (0, 4, 0, 5, 0, 5, 0, 4, 0, 4, 5, 5, 5, 3, 4, 3, 3, 2, 5, 4, 4, 3, 
5, 3, 5, 3, 4, 0, 4, 3, 4, 4, 3, 2, 4, 4, 3, 4, 5, 4, 4, 5, 5, 0, 3, 5, 5, 4, 1, 3, 3, 2, 3, 3, 1, 3, 1, 0, 4, 3, 1, 4, 4, 3, 4, 5, 0, 4, 0, 2, 0, 4, 3, 4, 4, 3, 3, 0, 4, 0, 0, 5, 5), - (0, 4, 0, 4, 0, 5, 0, 1, 1, 3, 3, 4, 4, 3, 4, 1, 3, 0, 5, 1, 3, 0, 3, 1, 3, 1, 1, 0, 3, 0, 3, 3, 4, 0, 4, 3, 0, 4, 4, 4, 3, 4, 4, 0, 3, 5, 4, 1, 0, 3, 0, 0, 2, 3, 0, 3, 1, 0, 3, 1, 0, 3, 2, 1, 3, 5, 0, 3, 0, 1, 0, 3, 2, 3, 3, 4, 4, 0, 2, 2, 0, 4, 4), - (2, 4, 0, 5, 0, 4, 0, 3, 0, 4, 5, 5, 4, 3, 5, 3, 5, 3, 5, 3, 5, 2, 5, 3, 4, 3, 3, 4, 3, 4, 5, 3, 2, 1, 5, 4, 3, 2, 3, 4, 5, 3, 4, 1, 2, 5, 4, 3, 0, 3, 3, 0, 3, 2, 0, 2, 3, 0, 4, 1, 0, 3, 4, 3, 3, 5, 0, 3, 0, 1, 0, 4, 5, 5, 5, 4, 3, 0, 4, 2, 0, 3, 5), - (0, 5, 0, 4, 0, 4, 0, 2, 0, 5, 4, 3, 4, 3, 4, 3, 3, 3, 4, 3, 4, 2, 5, 3, 5, 3, 4, 1, 4, 3, 4, 4, 4, 0, 3, 5, 0, 4, 4, 4, 4, 5, 3, 1, 3, 4, 5, 3, 3, 3, 3, 3, 3, 3, 0, 2, 2, 0, 3, 3, 2, 4, 3, 3, 3, 5, 3, 4, 1, 3, 3, 5, 3, 2, 0, 0, 0, 0, 4, 3, 1, 3, 3), - (0, 1, 0, 3, 0, 3, 0, 1, 0, 1, 3, 3, 3, 2, 3, 3, 3, 0, 3, 0, 0, 0, 3, 1, 3, 0, 0, 0, 2, 2, 2, 3, 0, 0, 3, 2, 0, 1, 2, 4, 1, 3, 3, 0, 0, 3, 3, 3, 0, 1, 0, 0, 2, 1, 0, 0, 3, 0, 3, 1, 0, 3, 0, 0, 1, 3, 0, 2, 0, 1, 0, 3, 3, 1, 3, 3, 0, 0, 1, 1, 0, 3, 3), - (0, 2, 0, 3, 0, 2, 1, 4, 0, 2, 2, 3, 1, 1, 3, 1, 1, 0, 2, 0, 3, 1, 2, 3, 1, 3, 0, 0, 1, 0, 4, 3, 2, 3, 3, 3, 1, 4, 2, 3, 3, 3, 3, 1, 0, 3, 1, 4, 0, 1, 1, 0, 1, 2, 0, 1, 1, 0, 1, 1, 0, 3, 1, 3, 2, 2, 0, 1, 0, 0, 0, 2, 3, 3, 3, 1, 0, 0, 0, 0, 0, 2, 3), - (0, 5, 0, 4, 0, 5, 0, 2, 0, 4, 5, 5, 3, 3, 4, 3, 3, 1, 5, 4, 4, 2, 4, 4, 4, 3, 4, 2, 4, 3, 5, 5, 4, 3, 3, 4, 3, 3, 5, 5, 4, 5, 5, 1, 3, 4, 5, 3, 1, 4, 3, 1, 3, 3, 0, 3, 3, 1, 4, 3, 1, 4, 5, 3, 3, 5, 0, 4, 0, 3, 0, 5, 3, 3, 1, 4, 3, 0, 4, 0, 1, 5, 3), - (0, 5, 0, 5, 0, 4, 0, 2, 0, 4, 4, 3, 4, 3, 3, 3, 3, 3, 5, 4, 4, 4, 4, 4, 4, 5, 3, 3, 5, 2, 4, 4, 4, 3, 4, 4, 3, 3, 4, 4, 5, 5, 3, 3, 4, 3, 4, 3, 3, 4, 3, 3, 3, 3, 1, 2, 2, 1, 4, 3, 3, 5, 4, 4, 3, 4, 0, 4, 0, 3, 0, 4, 4, 4, 4, 4, 1, 0, 4, 2, 0, 2, 4), - (0, 4, 0, 4, 0, 3, 0, 1, 0, 3, 5, 2, 3, 0, 3, 0, 2, 1, 4, 2, 3, 3, 4, 1, 4, 3, 3, 2, 4, 1, 3, 3, 3, 0, 3, 3, 0, 0, 3, 3, 3, 5, 3, 3, 3, 3, 3, 2, 0, 2, 0, 0, 2, 0, 0, 2, 0, 0, 1, 0, 0, 3, 1, 2, 2, 3, 0, 3, 0, 2, 0, 4, 4, 3, 3, 4, 1, 0, 3, 0, 0, 2, 4), - (0, 0, 0, 4, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 2, 0, 0, 0, 0, 0, 1, 0, 2, 0, 1, 0, 0, 0, 0, 0, 3, 1, 3, 0, 3, 2, 0, 0, 0, 1, 0, 3, 2, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 4, 0, 2, 0, 0, 0, 0, 0, 0, 2), - (0, 2, 1, 3, 0, 2, 0, 2, 0, 3, 3, 3, 3, 1, 3, 1, 3, 3, 3, 3, 3, 3, 4, 2, 2, 1, 2, 1, 4, 0, 4, 3, 1, 3, 3, 3, 2, 4, 3, 5, 4, 3, 3, 3, 3, 3, 3, 3, 0, 1, 3, 0, 2, 0, 0, 1, 0, 0, 1, 0, 0, 4, 2, 0, 2, 3, 0, 3, 3, 0, 3, 3, 4, 2, 3, 1, 4, 0, 1, 2, 0, 2, 3), - (0, 3, 0, 3, 0, 1, 0, 3, 0, 2, 3, 3, 3, 0, 3, 1, 2, 0, 3, 3, 2, 3, 3, 2, 3, 2, 3, 1, 3, 0, 4, 3, 2, 0, 3, 3, 1, 4, 3, 3, 2, 3, 4, 3, 1, 3, 3, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 4, 1, 1, 0, 3, 0, 3, 1, 0, 2, 3, 3, 3, 3, 3, 1, 0, 0, 2, 0, 3, 3), - (0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 0, 2, 0, 3, 0, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 3, 0, 3, 0, 3, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 0, 2, 0, 2, 3, 0, 0, 0, 0, 0, 0, 0, 0, 3), - (0, 2, 0, 3, 1, 3, 0, 3, 0, 2, 3, 3, 3, 1, 3, 1, 3, 1, 3, 1, 3, 3, 3, 1, 3, 0, 2, 3, 1, 1, 4, 3, 3, 2, 3, 3, 1, 2, 2, 4, 1, 3, 3, 0, 1, 4, 2, 3, 0, 1, 3, 0, 3, 0, 0, 1, 3, 0, 2, 0, 0, 3, 3, 2, 1, 3, 0, 3, 0, 2, 0, 3, 4, 4, 4, 3, 1, 0, 3, 0, 0, 3, 3), - (0, 2, 0, 1, 0, 2, 0, 0, 0, 1, 3, 2, 2, 1, 3, 0, 1, 1, 3, 0, 3, 2, 3, 1, 2, 0, 
2, 0, 1, 1, 3, 3, 3, 0, 3, 3, 1, 1, 2, 3, 2, 3, 3, 1, 2, 3, 2, 0, 0, 1, 0, 0, 0, 0, 0, 0, 3, 0, 1, 0, 0, 2, 1, 2, 1, 3, 0, 3, 0, 0, 0, 3, 4, 4, 4, 3, 2, 0, 2, 0, 0, 2, 4), - (0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 2, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 3, 1, 0, 0, 0, 0, 0, 0, 0, 3), - (0, 3, 0, 3, 0, 2, 0, 3, 0, 3, 3, 3, 2, 3, 2, 2, 2, 0, 3, 1, 3, 3, 3, 2, 3, 3, 0, 0, 3, 0, 3, 2, 2, 0, 2, 3, 1, 4, 3, 4, 3, 3, 2, 3, 1, 5, 4, 4, 0, 3, 1, 2, 1, 3, 0, 3, 1, 1, 2, 0, 2, 3, 1, 3, 1, 3, 0, 3, 0, 1, 0, 3, 3, 4, 4, 2, 1, 0, 2, 1, 0, 2, 4), - (0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 4, 2, 5, 1, 4, 0, 2, 0, 2, 1, 3, 1, 4, 0, 2, 1, 0, 0, 2, 1, 4, 1, 1, 0, 3, 3, 0, 5, 1, 3, 2, 3, 3, 1, 0, 3, 2, 3, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 4, 0, 1, 0, 3, 0, 2, 0, 1, 0, 3, 3, 3, 4, 3, 3, 0, 0, 0, 0, 2, 3), - (0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 2, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 1, 0, 0, 1, 0, 0, 0, 0, 0, 3), - (0, 1, 0, 3, 0, 4, 0, 3, 0, 2, 4, 3, 1, 0, 3, 2, 2, 1, 3, 1, 2, 2, 3, 1, 1, 1, 2, 1, 3, 0, 1, 2, 0, 1, 3, 2, 1, 3, 0, 5, 5, 1, 0, 0, 1, 3, 2, 1, 0, 3, 0, 0, 1, 0, 0, 0, 0, 0, 3, 4, 0, 1, 1, 1, 3, 2, 0, 2, 0, 1, 0, 2, 3, 3, 1, 2, 3, 0, 1, 0, 1, 0, 4), - (0, 0, 0, 1, 0, 3, 0, 3, 0, 2, 2, 1, 0, 0, 4, 0, 3, 0, 3, 1, 3, 0, 3, 0, 3, 0, 1, 0, 3, 0, 3, 1, 3, 0, 3, 3, 0, 0, 1, 2, 1, 1, 1, 0, 1, 2, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 2, 2, 1, 2, 0, 0, 2, 0, 0, 0, 0, 2, 3, 3, 3, 3, 0, 0, 0, 0, 1, 4), - (0, 0, 0, 3, 0, 3, 0, 0, 0, 0, 3, 1, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 3, 0, 2, 0, 2, 3, 0, 0, 2, 2, 3, 1, 2, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 0, 0, 2, 0, 0, 0, 0, 2, 3), - (2, 4, 0, 5, 0, 5, 0, 4, 0, 3, 4, 3, 3, 3, 4, 3, 3, 3, 4, 3, 4, 4, 5, 4, 5, 5, 5, 2, 3, 0, 5, 5, 4, 1, 5, 4, 3, 1, 5, 4, 3, 4, 4, 3, 3, 4, 3, 3, 0, 3, 2, 0, 2, 3, 0, 3, 0, 0, 3, 3, 0, 5, 3, 2, 3, 3, 0, 3, 0, 3, 0, 3, 4, 5, 4, 5, 3, 0, 4, 3, 0, 3, 4), - (0, 3, 0, 3, 0, 3, 0, 3, 0, 3, 3, 4, 3, 2, 3, 2, 3, 0, 4, 3, 3, 3, 3, 3, 3, 3, 3, 0, 3, 2, 4, 3, 3, 1, 3, 4, 3, 4, 4, 4, 3, 4, 4, 3, 2, 4, 4, 1, 0, 2, 0, 0, 1, 1, 0, 2, 0, 0, 3, 1, 0, 5, 3, 2, 1, 3, 0, 3, 0, 1, 2, 4, 3, 2, 4, 3, 3, 0, 3, 2, 0, 4, 4), - (0, 3, 0, 3, 0, 1, 0, 0, 0, 1, 4, 3, 3, 2, 3, 1, 3, 1, 4, 2, 3, 2, 4, 2, 3, 4, 3, 0, 2, 2, 3, 3, 3, 0, 3, 3, 3, 0, 3, 4, 1, 3, 3, 0, 3, 4, 3, 3, 0, 1, 1, 0, 1, 0, 0, 0, 4, 0, 3, 0, 0, 3, 1, 2, 1, 3, 0, 4, 0, 1, 0, 4, 3, 3, 4, 3, 3, 0, 2, 0, 0, 3, 3), - (0, 3, 0, 4, 0, 1, 0, 3, 0, 3, 4, 3, 3, 0, 3, 3, 3, 1, 3, 1, 3, 3, 4, 3, 3, 3, 0, 0, 3, 1, 5, 3, 3, 1, 3, 3, 2, 5, 4, 3, 3, 4, 5, 3, 2, 5, 3, 4, 0, 1, 0, 0, 0, 0, 0, 2, 0, 0, 1, 1, 0, 4, 2, 2, 1, 3, 0, 3, 0, 2, 0, 4, 4, 3, 5, 3, 2, 0, 1, 1, 0, 3, 4), - (0, 5, 0, 4, 0, 5, 0, 2, 0, 4, 4, 3, 3, 2, 3, 3, 3, 1, 4, 3, 4, 1, 5, 3, 4, 3, 4, 0, 4, 2, 4, 3, 4, 1, 5, 4, 0, 4, 4, 4, 4, 5, 4, 1, 3, 5, 4, 2, 1, 4, 1, 1, 3, 2, 0, 3, 1, 0, 3, 2, 1, 4, 3, 3, 3, 4, 0, 4, 0, 3, 0, 4, 4, 4, 3, 3, 3, 0, 4, 2, 0, 3, 4), - (1, 4, 0, 4, 0, 3, 0, 1, 0, 3, 3, 3, 1, 1, 3, 3, 2, 2, 3, 3, 1, 0, 3, 2, 2, 1, 2, 0, 3, 1, 2, 1, 2, 0, 3, 2, 0, 2, 2, 3, 3, 4, 3, 0, 3, 3, 1, 2, 0, 1, 1, 3, 1, 2, 0, 0, 3, 0, 1, 1, 0, 3, 2, 2, 3, 3, 0, 3, 0, 0, 0, 2, 3, 3, 4, 3, 3, 0, 1, 0, 0, 1, 4), - (0, 4, 0, 4, 0, 4, 0, 0, 0, 3, 4, 4, 3, 1, 4, 2, 3, 2, 3, 3, 3, 1, 4, 3, 4, 0, 3, 0, 4, 2, 
3, 3, 2, 2, 5, 4, 2, 1, 3, 4, 3, 4, 3, 1, 3, 3, 4, 2, 0, 2, 1, 0, 3, 3, 0, 0, 2, 0, 3, 1, 0, 4, 4, 3, 4, 3, 0, 4, 0, 1, 0, 2, 4, 4, 4, 4, 4, 0, 3, 2, 0, 3, 3), - (0, 0, 0, 1, 0, 4, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 3, 2, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 2), - (0, 2, 0, 3, 0, 4, 0, 4, 0, 1, 3, 3, 3, 0, 4, 0, 2, 1, 2, 1, 1, 1, 2, 0, 3, 1, 1, 0, 1, 0, 3, 1, 0, 0, 3, 3, 2, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 2, 0, 2, 2, 0, 3, 1, 0, 0, 1, 0, 1, 1, 0, 1, 2, 0, 3, 0, 0, 0, 0, 1, 0, 0, 3, 3, 4, 3, 1, 0, 1, 0, 3, 0, 2), - (0, 0, 0, 3, 0, 5, 0, 0, 0, 0, 1, 0, 2, 0, 3, 1, 0, 1, 3, 0, 0, 0, 2, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 4, 0, 0, 0, 2, 3, 0, 1, 4, 1, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 3, 0, 0, 0, 0, 0, 3), - (0, 2, 0, 5, 0, 5, 0, 1, 0, 2, 4, 3, 3, 2, 5, 1, 3, 2, 3, 3, 3, 0, 4, 1, 2, 0, 3, 0, 4, 0, 2, 2, 1, 1, 5, 3, 0, 0, 1, 4, 2, 3, 2, 0, 3, 3, 3, 2, 0, 2, 4, 1, 1, 2, 0, 1, 1, 0, 3, 1, 0, 1, 3, 1, 2, 3, 0, 2, 0, 0, 0, 1, 3, 5, 4, 4, 4, 0, 3, 0, 0, 1, 3), - (0, 4, 0, 5, 0, 4, 0, 4, 0, 4, 5, 4, 3, 3, 4, 3, 3, 3, 4, 3, 4, 4, 5, 3, 4, 5, 4, 2, 4, 2, 3, 4, 3, 1, 4, 4, 1, 3, 5, 4, 4, 5, 5, 4, 4, 5, 5, 5, 2, 3, 3, 1, 4, 3, 1, 3, 3, 0, 3, 3, 1, 4, 3, 4, 4, 4, 0, 3, 0, 4, 0, 3, 3, 4, 4, 5, 0, 0, 4, 3, 0, 4, 5), - (0, 4, 0, 4, 0, 3, 0, 3, 0, 3, 4, 4, 4, 3, 3, 2, 4, 3, 4, 3, 4, 3, 5, 3, 4, 3, 2, 1, 4, 2, 4, 4, 3, 1, 3, 4, 2, 4, 5, 5, 3, 4, 5, 4, 1, 5, 4, 3, 0, 3, 2, 2, 3, 2, 1, 3, 1, 0, 3, 3, 3, 5, 3, 3, 3, 5, 4, 4, 2, 3, 3, 4, 3, 3, 3, 2, 1, 0, 3, 2, 1, 4, 3), - (0, 4, 0, 5, 0, 4, 0, 3, 0, 3, 5, 5, 3, 2, 4, 3, 4, 0, 5, 4, 4, 1, 4, 4, 4, 3, 3, 3, 4, 3, 5, 5, 2, 3, 3, 4, 1, 2, 5, 5, 3, 5, 5, 2, 3, 5, 5, 4, 0, 3, 2, 0, 3, 3, 1, 1, 5, 1, 4, 1, 0, 4, 3, 2, 3, 5, 0, 4, 0, 3, 0, 5, 4, 3, 4, 3, 0, 0, 4, 1, 0, 4, 4), - (1, 3, 0, 4, 0, 2, 0, 2, 0, 2, 5, 5, 3, 3, 3, 3, 3, 0, 4, 2, 3, 4, 4, 4, 3, 4, 0, 0, 3, 4, 5, 4, 3, 3, 3, 3, 2, 5, 5, 4, 5, 5, 5, 4, 3, 5, 5, 5, 1, 3, 1, 0, 1, 0, 0, 3, 2, 0, 4, 2, 0, 5, 2, 3, 2, 4, 1, 3, 0, 3, 0, 4, 5, 4, 5, 4, 3, 0, 4, 2, 0, 5, 4), - (0, 3, 0, 4, 0, 5, 0, 3, 0, 3, 4, 4, 3, 2, 3, 2, 3, 3, 3, 3, 3, 2, 4, 3, 3, 2, 2, 0, 3, 3, 3, 3, 3, 1, 3, 3, 3, 0, 4, 4, 3, 4, 4, 1, 1, 4, 4, 2, 0, 3, 1, 0, 1, 1, 0, 4, 1, 0, 2, 3, 1, 3, 3, 1, 3, 4, 0, 3, 0, 1, 0, 3, 1, 3, 0, 0, 1, 0, 2, 0, 0, 4, 4), - (0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), - (0, 3, 0, 3, 0, 2, 0, 3, 0, 1, 5, 4, 3, 3, 3, 1, 4, 2, 1, 2, 3, 4, 4, 2, 4, 4, 5, 0, 3, 1, 4, 3, 4, 0, 4, 3, 3, 3, 2, 3, 2, 5, 3, 4, 3, 2, 2, 3, 0, 0, 3, 0, 2, 1, 0, 1, 2, 0, 0, 0, 0, 2, 1, 1, 3, 1, 0, 2, 0, 4, 0, 3, 4, 4, 4, 5, 2, 0, 2, 0, 0, 1, 3), - (0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 4, 2, 1, 1, 0, 1, 0, 3, 2, 0, 0, 3, 1, 1, 1, 2, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 0, 1, 0, 0, 0, 2, 0, 0, 0, 1, 4, 0, 4, 2, 1, 0, 0, 0, 0, 0, 1), - (0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 3, 1, 0, 0, 0, 2, 0, 2, 1, 0, 0, 1, 2, 1, 0, 1, 1, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 3, 1, 0, 0, 0, 0, 0, 1, 0, 0, 2, 1, 0, 0, 0, 0, 0, 0, 0, 0, 2), - (0, 4, 0, 4, 0, 4, 0, 3, 0, 4, 4, 3, 4, 2, 4, 3, 2, 0, 4, 4, 4, 3, 5, 3, 5, 3, 3, 2, 4, 2, 4, 3, 4, 3, 
1, 4, 0, 2, 3, 4, 4, 4, 3, 3, 3, 4, 4, 4, 3, 4, 1, 3, 4, 3, 2, 1, 2, 1, 3, 3, 3, 4, 4, 3, 3, 5, 0, 4, 0, 3, 0, 4, 3, 3, 3, 2, 1, 0, 3, 0, 0, 3, 3), - (0, 4, 0, 3, 0, 3, 0, 3, 0, 3, 5, 5, 3, 3, 3, 3, 4, 3, 4, 3, 3, 3, 4, 4, 4, 3, 3, 3, 3, 4, 3, 5, 3, 3, 1, 3, 2, 4, 5, 5, 5, 5, 4, 3, 4, 5, 5, 3, 2, 2, 3, 3, 3, 3, 2, 3, 3, 1, 2, 3, 2, 4, 3, 3, 3, 4, 0, 4, 0, 2, 0, 4, 3, 2, 2, 1, 2, 0, 3, 0, 0, 4, 1), -) -# fmt: on - - -class JapaneseContextAnalysis: - NUM_OF_CATEGORY = 6 - DONT_KNOW = -1 - ENOUGH_REL_THRESHOLD = 100 - MAX_REL_THRESHOLD = 1000 - MINIMUM_DATA_THRESHOLD = 4 - - def __init__(self) -> None: - self._total_rel = 0 - self._rel_sample: List[int] = [] - self._need_to_skip_char_num = 0 - self._last_char_order = -1 - self._done = False - self.reset() - - def reset(self) -> None: - self._total_rel = 0 # total sequence received - # category counters, each integer counts sequence in its category - self._rel_sample = [0] * self.NUM_OF_CATEGORY - # if last byte in current buffer is not the last byte of a character, - # we need to know how many bytes to skip in next buffer - self._need_to_skip_char_num = 0 - self._last_char_order = -1 # The order of previous char - # If this flag is set to True, detection is done and conclusion has - # been made - self._done = False - - def feed(self, byte_str: Union[bytes, bytearray], num_bytes: int) -> None: - if self._done: - return - - # The buffer we got is byte oriented, and a character may span in more than one - # buffers. In case the last one or two byte in last buffer is not - # complete, we record how many byte needed to complete that character - # and skip these bytes here. We can choose to record those bytes as - # well and analyse the character once it is complete, but since a - # character will not make much difference, by simply skipping - # this character will simply our logic and improve performance. - i = self._need_to_skip_char_num - while i < num_bytes: - order, char_len = self.get_order(byte_str[i : i + 2]) - i += char_len - if i > num_bytes: - self._need_to_skip_char_num = i - num_bytes - self._last_char_order = -1 - else: - if (order != -1) and (self._last_char_order != -1): - self._total_rel += 1 - if self._total_rel > self.MAX_REL_THRESHOLD: - self._done = True - break - self._rel_sample[ - jp2_char_context[self._last_char_order][order] - ] += 1 - self._last_char_order = order - - def got_enough_data(self) -> bool: - return self._total_rel > self.ENOUGH_REL_THRESHOLD - - def get_confidence(self) -> float: - # This is just one way to calculate confidence. It works well for me. 
- if self._total_rel > self.MINIMUM_DATA_THRESHOLD: - return (self._total_rel - self._rel_sample[0]) / self._total_rel - return self.DONT_KNOW - - def get_order(self, _: Union[bytes, bytearray]) -> Tuple[int, int]: - return -1, 1 - - -class SJISContextAnalysis(JapaneseContextAnalysis): - def __init__(self) -> None: - super().__init__() - self._charset_name = "SHIFT_JIS" - - @property - def charset_name(self) -> str: - return self._charset_name - - def get_order(self, byte_str: Union[bytes, bytearray]) -> Tuple[int, int]: - if not byte_str: - return -1, 1 - # find out current char's byte length - first_char = byte_str[0] - if (0x81 <= first_char <= 0x9F) or (0xE0 <= first_char <= 0xFC): - char_len = 2 - if (first_char == 0x87) or (0xFA <= first_char <= 0xFC): - self._charset_name = "CP932" - else: - char_len = 1 - - # return its order if it is hiragana - if len(byte_str) > 1: - second_char = byte_str[1] - if (first_char == 202) and (0x9F <= second_char <= 0xF1): - return second_char - 0x9F, char_len - - return -1, char_len - - -class EUCJPContextAnalysis(JapaneseContextAnalysis): - def get_order(self, byte_str: Union[bytes, bytearray]) -> Tuple[int, int]: - if not byte_str: - return -1, 1 - # find out current char's byte length - first_char = byte_str[0] - if (first_char == 0x8E) or (0xA1 <= first_char <= 0xFE): - char_len = 2 - elif first_char == 0x8F: - char_len = 3 - else: - char_len = 1 - - # return its order if it is hiragana - if len(byte_str) > 1: - second_char = byte_str[1] - if (first_char == 0xA4) and (0xA1 <= second_char <= 0xF3): - return second_char - 0xA1, char_len - - return -1, char_len diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/TensorMask/README.md b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/TensorMask/README.md deleted file mode 100644 index ffc5282094aa71b112974587e57e24c0cf9922a7..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/TensorMask/README.md +++ /dev/null @@ -1,63 +0,0 @@ - -# TensorMask in Detectron2 -**A Foundation for Dense Object Segmentation** - -Xinlei Chen, Ross Girshick, Kaiming He, Piotr Dollár - -[[`arXiv`](https://arxiv.org/abs/1903.12174)] [[`BibTeX`](#CitingTensorMask)] - -
      - -
      - -In this repository, we release code for TensorMask in Detectron2. -TensorMask is a dense sliding-window instance segmentation framework that, for the first time, achieves results close to the well-developed Mask R-CNN framework -- both qualitatively and quantitatively. It establishes a conceptually complementary direction for object instance segmentation research. - -## Installation -First install Detectron 2 following [INSTALL.md](https://github.com/facebookresearch/detectron2/blob/master/INSTALL.md). Then compile the TensorMask-specific op (`swap_align2nat`): -```bash -cd /path/to/detectron2/projects/TensorMask -python setup.py build develop -``` - -## Training - -To train a model, run: -```bash -python /path/to/detectron2/projects/TensorMask/train_net.py --config-file -``` - -For example, to launch TensorMask BiPyramid training (1x schedule) with ResNet-50 backbone on 8 GPUs, -one should execute: -```bash -python /path/to/detectron2/projects/TensorMask/train_net.py --config-file configs/tensormask_R_50_FPN_1x.yaml --num-gpus 8 -``` - -## Evaluation - -Model evaluation can be done similarly (6x schedule with scale augmentation): -```bash -python /path/to/detectron2/projects/TensorMask/train_net.py --config-file configs/tensormask_R_50_FPN_6x.yaml --eval-only MODEL.WEIGHTS /path/to/model_checkpoint -``` - -# Pretrained Models - -| Backbone | lr sched | AP box | AP mask | download | -| -------- | -------- | -- | --- | -------- | -| R50 | 1x | 37.6 | 32.4 | model \|  metrics | -| R50 | 6x | 41.4 | 35.8 | model \|  metrics | - - -## Citing TensorMask - -If you use TensorMask, please use the following BibTeX entry. - -``` -@InProceedings{chen2019tensormask, - title={Tensormask: A Foundation for Dense Object Segmentation}, - author={Chen, Xinlei and Girshick, Ross and He, Kaiming and Doll{\'a}r, Piotr}, - journal={The International Conference on Computer Vision (ICCV)}, - year={2019} -} -``` - diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/docs/_source/conf.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/docs/_source/conf.py deleted file mode 100644 index 4ec8c847cf1e74fc312952617bb7c42c6d757b7e..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/docs/_source/conf.py +++ /dev/null @@ -1,108 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding: utf-8 -*- -# -# Configuration file for the Sphinx documentation builder. -# -# This file only contains a selection of the most common options. For a full -# list see the documentation: -# http://www.sphinx-doc.org/en/master/config - -# -- Path setup -------------------------------------------------------------- - -# If extensions (or modules to document with autodoc) are in another directory, -# add these directories to sys.path here. If the directory is relative to the -# documentation root, use os.path.abspath to make it absolute, like shown here. -# -import os -import sys -sys.path.insert(0, os.path.abspath('../..')) - -RELEASE = os.environ.get('RELEASE', False) - -# -- Project information ----------------------------------------------------- - -project = u'OpenVQA' -copyright = u'2019, MILVLG' -author = u'MILVLG' - -# The version info for the project you're documenting, acts as replacement for -# |version| and |release|, also used in various other places throughout the -# built documents. -# -# The short X.Y version. -# version = '1.0' -# The full version, including alpha/beta/rc tags. 
-# release = '0.0' - - -# -- General configuration --------------------------------------------------- - -master_doc = 'index' - -# The suffix(es) of source filenames. -# You can specify multiple suffix as a list of string: -# -source_suffix = { - '.rst': 'restructuredtext', - '.txt': 'markdown', - '.md': 'markdown', -} - -# Add any Sphinx extension module names here, as strings. They can be -# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom -# ones. -extensions = [ - 'sphinx.ext.autodoc', - 'sphinx.ext.autosummary', - 'sphinx.ext.doctest', - 'sphinx.ext.intersphinx', - 'sphinx.ext.todo', - 'sphinx.ext.coverage', - 'sphinx.ext.napoleon', - 'sphinx.ext.viewcode', - 'sphinx_markdown_tables', - 'recommonmark', -] - -# Add any paths that contain templates here, relative to this directory. -templates_path = ['_templates'] - -# List of patterns, relative to source directory, that match files and -# directories to ignore when looking for source files. -# This pattern also affects html_static_path and html_extra_path. -exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store'] - - -# -- Options for HTML output ------------------------------------------------- - -# The theme to use for HTML and HTML Help pages. See the documentation for -# a list of builtin themes. -# -html_theme = 'sphinx_rtd_theme' - -# Add any paths that contain custom static files (such as style sheets) here, -# relative to this directory. They are copied after the builtin static files, -# so a file named "default.css" will overwrite the builtin "default.css". -html_static_path = ['_static'] - -# Add cusotm css overrides -def setup(app): - app.add_stylesheet( "custom.css" ) - -# Add any paths that contain templates here, relative to this directory. -templates_path = ['_templates'] -if RELEASE: - templates_path = ['_templates-stable'] - -# The name of the Pygments (syntax highlighting) style to use. -pygments_style = 'sphinx' - -# Disable docstring inheritance -autodoc_inherit_docstrings = False - - -# -- Other Options ------------------------------------------------------------ - -# intersphinx_mapping = { -# 'python': ('https://docs.python.org/3', None) -# } diff --git a/spaces/CVPR/LIVE/pybind11/tests/cross_module_gil_utils.cpp b/spaces/CVPR/LIVE/pybind11/tests/cross_module_gil_utils.cpp deleted file mode 100644 index 07db9f6e48a10dfd2d4370c3daff6e793d6675d2..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/pybind11/tests/cross_module_gil_utils.cpp +++ /dev/null @@ -1,73 +0,0 @@ -/* - tests/cross_module_gil_utils.cpp -- tools for acquiring GIL from a different module - - Copyright (c) 2019 Google LLC - - All rights reserved. Use of this source code is governed by a - BSD-style license that can be found in the LICENSE file. -*/ -#include -#include - -// This file mimics a DSO that makes pybind11 calls but does not define a -// PYBIND11_MODULE. The purpose is to test that such a DSO can create a -// py::gil_scoped_acquire when the running thread is in a GIL-released state. -// -// Note that we define a Python module here for convenience, but in general -// this need not be the case. The typical scenario would be a DSO that implements -// shared logic used internally by multiple pybind11 modules. 
- -namespace { - -namespace py = pybind11; -void gil_acquire() { py::gil_scoped_acquire gil; } - -constexpr char kModuleName[] = "cross_module_gil_utils"; - -#if PY_MAJOR_VERSION >= 3 -struct PyModuleDef moduledef = { - PyModuleDef_HEAD_INIT, - kModuleName, - NULL, - 0, - NULL, - NULL, - NULL, - NULL, - NULL -}; -#else -PyMethodDef module_methods[] = { - {NULL, NULL, 0, NULL} -}; -#endif - -} // namespace - -extern "C" PYBIND11_EXPORT -#if PY_MAJOR_VERSION >= 3 -PyObject* PyInit_cross_module_gil_utils() -#else -void initcross_module_gil_utils() -#endif -{ - - PyObject* m = -#if PY_MAJOR_VERSION >= 3 - PyModule_Create(&moduledef); -#else - Py_InitModule(kModuleName, module_methods); -#endif - - if (m != NULL) { - static_assert( - sizeof(&gil_acquire) == sizeof(void*), - "Function pointer must have the same size as void*"); - PyModule_AddObject(m, "gil_acquire_funcaddr", - PyLong_FromVoidPtr(reinterpret_cast(&gil_acquire))); - } - -#if PY_MAJOR_VERSION >= 3 - return m; -#endif -} diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/count.h b/spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/count.h deleted file mode 100644 index fde1728b77261d75c561b9042ec365281d78cee9..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/count.h +++ /dev/null @@ -1,23 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -#pragma once - -#include - -// this system inherits count -#include - diff --git a/spaces/CikeyQI/Yunzai/Yunzai/plugins/system/add.js b/spaces/CikeyQI/Yunzai/Yunzai/plugins/system/add.js deleted file mode 100644 index eacf1ac98268bd8dc9e89ddd044047dfe21c4121..0000000000000000000000000000000000000000 --- a/spaces/CikeyQI/Yunzai/Yunzai/plugins/system/add.js +++ /dev/null @@ -1,446 +0,0 @@ -import cfg from "../../lib/config/config.js" -import plugin from "../../lib/plugins/plugin.js" -import common from "../../lib/common/common.js" -import fs from "node:fs" -import path from "node:path" -import lodash from "lodash" -import fetch from "node-fetch" -import { fileTypeFromBuffer } from "file-type" - -let messageMap = {} - -export class add extends plugin { - constructor() { - super({ - name: "添加消息", - dsc: "添加消息", - event: "message", - priority: 50000, - rule: [ - { - reg: "^#(全局)?添加", - fnc: "add" - }, - { - reg: "^#(全局)?删除", - fnc: "del" - }, - { - reg: "", - fnc: "getMessage", - log: false - }, - { - reg: "^#(全局)?(消息|词条)", - fnc: "list" - } - ] - }) - - this.path = "data/messageJson/" - } - - async init() { - common.mkdirs(this.path) - } - - /** 群号key */ - get grpKey() { - return `Yz:group_id:${this.e.user_id}` - } - - /** #添加 */ - async add() { - this.isGlobal = Boolean(this.e.msg.match(/^#全局/)) - await this.getGroupId() - - if (!this.group_id) { - await this.reply("请先在群内触发消息,确定添加的群") - return - } - - this.initMessageMap() - - if (!this.checkAuth()) return false - /** 获取关键词 */ - this.getKeyWord() - if (!this.e.keyWord) { - await this.reply("添加错误:没有关键词") - return - } - - this.e.message = [] - this.setContext("addContext") - - return this.reply("请发送添加内容,完成后发送#结束添加", true, { at: true }) - } - - /** 获取群号 */ - async getGroupId() { - /** 添加全局消息,存入到机器人文件中 */ - if (this.isGlobal) { - this.group_id = "global" - return this.group_id - } - - if (this.e.isGroup) { - this.group_id = this.e.group_id - redis.setEx(this.grpKey, 3600 * 24 * 30, String(this.group_id)) - return this.group_id - } - - // redis获取 - let groupId = await redis.get(this.grpKey) - if (groupId) { - this.group_id = groupId - return this.group_id - } - - return false - } - - checkAuth() { - if (this.e.isMaster) return true - - const groupCfg = cfg.getGroup(this.e.self_id, this.group_id) - if (groupCfg.addLimit == 2) { - this.reply("暂无权限,只有主人才能操作") - return false - } - if (groupCfg.addLimit == 1) { - if (!this.e.member.is_admin) { - this.reply("暂无权限,只有管理员才能操作") - return false - } - } - - if (groupCfg.addPrivate != 1 && !this.e.isGroup) { - this.reply("禁止私聊添加") - return false - } - - return true - } - - /** 获取添加关键词 */ - getKeyWord() { - this.e.isGlobal = Boolean(this.e.msg.match(/^#全局/)) - this.keyWord = this.e.raw_message.replace(/#(全局)?(添加|删除)/, "").trim() - this.e.keyWord = this.trimAlias(this.keyWord) - } - - /** 过滤别名 */ - trimAlias(msg) { - const groupCfg = cfg.getGroup(this.e.self_id, this.group_id) - let alias = groupCfg.botAlias - if (!Array.isArray(alias)) - alias = [alias] - - for (const name of alias) - if (msg.startsWith(name)) - msg = lodash.trimStart(msg, name).trim() - - return msg - } - - /** 添加内容 */ - async addContext() { - const context = this.getContext()?.addContext - this.isGlobal = context.isGlobal - await this.getGroupId() - /** 关键词 */ - this.keyWord = context.keyWord - - if (!this.e.msg?.includes("#结束添加")) { - /** 添加内容 */ - for (const i of this.e.message) { - if (i.url) i.file = await this.saveFile(i) - if (i.type == "at" && i.qq == this.e.self_id) continue - context.message.push(i) - } - return - } - - 
this.finish("addContext") - if (!context.message?.length) { - this.reply("添加错误:没有添加内容") - return - } - - if (!messageMap[this.group_id]) - messageMap[this.group_id] = new Map() - - /** 支持单个关键词添加多个 */ - let message = messageMap[this.group_id].get(this.keyWord) - if (Array.isArray(message)) - message.push(context.message) - else - message = [context.message] - messageMap[this.group_id].set(this.keyWord, message) - - if (message.length > 1) - this.keyWord += String(message.length) - - this.saveJson() - return this.reply(`添加成功:${this.keyWord}`) - } - - saveJson() { - let obj = {} - for (let [k, v] of messageMap[this.group_id]) - obj[k] = v - - fs.writeFileSync(`${this.path}${this.group_id}.json`, JSON.stringify(obj, "", "\t")) - } - - async makeBuffer(file) { - if (file.match(/^base64:\/\//)) - return Buffer.from(file.replace(/^base64:\/\//, ""), "base64") - else if (file.match(/^https?:\/\//)) - return Buffer.from(await (await fetch(file)).arrayBuffer()) - else if (fs.existsSync(file)) - return Buffer.from(fs.readFileSync(file)) - return file - } - - async fileType(data) { - const file = { name: `${this.group_id}/${data.type}/${Date.now()}` } - try { - file.url = data.url.replace(/^base64:\/\/.*/, "base64://...") - file.buffer = await this.makeBuffer(data.url) - file.type = await fileTypeFromBuffer(file.buffer) - file.name = `${file.name}.${file.type.ext}` - } catch (err) { - logger.error(`文件类型检测错误:${logger.red(err)}`) - file.name = `${file.name}-${path.basename(data.file || data.url)}` - } - return file - } - - async saveFile(data) { - const file = await this.fileType(data) - if (file.name && Buffer.isBuffer(file.buffer) && common.mkdirs(path.dirname(`${this.path}${file.name}`))) { - fs.writeFileSync(`${this.path}${file.name}`, file.buffer) - return file.name - } - return data.url - } - - async getMessage() { - if (!this.e.raw_message) return false - this.isGlobal = false - - await this.getGroupId() - if (!this.group_id) return false - - this.initMessageMap() - this.initGlobalMessageMap() - - this.keyWord = this.trimAlias(this.e.raw_message.trim()) - let keyWord = this.keyWord - - let num = 0 - if (isNaN(keyWord)) { - num = keyWord.charAt(keyWord.length-1) - - if (!isNaN(num) && !messageMap[this.group_id].has(keyWord) && !messageMap.global.has(keyWord)) { - keyWord = lodash.trimEnd(keyWord, num).trim() - num-- - } - } - - let msg = [ - ...messageMap[this.group_id].get(keyWord) || [], - ...messageMap.global.get(keyWord) || [], - ] - if (lodash.isEmpty(msg)) return false - - if (!msg[num]) - num = lodash.random(0, msg.length-1) - - msg = [...msg[num]] - for (const i in msg) - if (msg[i].file && fs.existsSync(`${this.path}${msg[i].file}`)) - msg[i] = { ...msg[i], file: `base64://${fs.readFileSync(`${this.path}${msg[i].file}`).toString("base64")}` } - - logger.mark(`[发送消息]${this.e.logText} ${this.keyWord}`) - const groupCfg = cfg.getGroup(this.e.self_id, this.group_id) - return this.reply(msg, Boolean(groupCfg.addReply), { - at: Boolean(groupCfg.addAt), - recallMsg: groupCfg.addRecall, - }) - } - - /** 初始化已添加内容 */ - initMessageMap() { - if (messageMap[this.group_id]) return - messageMap[this.group_id] = new Map() - - const path = `${this.path}${this.group_id}.json` - if (!fs.existsSync(path)) return - - try { - const message = JSON.parse(fs.readFileSync(path, "utf8")) - for (const i in message) - messageMap[this.group_id].set(i, message[i]) - } catch (err) { - logger.error(`JSON 格式错误:${path} ${err}`) - } - } - - /** 初始化全局已添加内容 */ - initGlobalMessageMap() { - if (messageMap.global) return - 
messageMap.global = new Map() - - const globalPath = `${this.path}global.json` - if (!fs.existsSync(globalPath)) return - - try { - const message = JSON.parse(fs.readFileSync(globalPath, "utf8")) - for (const i in message) - messageMap.global.set(i, message[i]) - } catch (err) { - logger.error(`JSON 格式错误:${globalPath} ${err}`) - } - } - - async del() { - this.isGlobal = this.e.msg.includes("全局") - await this.getGroupId() - if (!(this.group_id && this.checkAuth())) return false - - this.initMessageMap() - - this.getKeyWord() - if (!this.keyWord) { - await this.reply("删除错误:没有关键词") - return false - } - - this.keyWord = this.trimAlias(this.keyWord) - let keyWord = this.keyWord - - let num = false - let index = 0 - if (isNaN(keyWord)) { - num = keyWord.charAt(keyWord.length-1) - - if (!isNaN(num) && !messageMap[this.group_id].has(keyWord)) { - keyWord = lodash.trimEnd(keyWord, num).trim() - index = num-1 - } else { - num = false - } - } - - let arr = messageMap[this.group_id].get(keyWord) - if (!arr) { - // await this.reply(`暂无此消息:${keyWord}`) - return false - } - - let tmp = [] - if (num) { - if (!arr[index]) { - // await this.reply(`暂无此消息:${keyWord}${num}`) - return false - } - - tmp = arr[index] - arr.splice(index, 1) - - if (arr.length <= 0) { - messageMap[this.group_id].delete(keyWord) - } else { - messageMap[this.group_id].set(keyWord, arr) - } - } else { - if (this.e.msg.includes("删除全部")) { - tmp = arr - arr = [] - } else { - tmp = arr.pop() - } - - if (arr.length <= 0) { - messageMap[this.group_id].delete(keyWord) - } else { - messageMap[this.group_id].set(keyWord, arr) - } - } - - this.saveJson() - return this.reply(`删除成功:${this.keyWord}`) - } - - async list() { - this.isGlobal = Boolean(this.e.msg.match(/^#全局/)) - - let page = 1 - let pageSize = 100 - let type = "list" - - await this.getGroupId() - if (!this.group_id) return false - - this.initMessageMap() - - const search = this.e.msg.replace(/^#(全局)?(消息|词条)/, "").trim() - if (search.match(/^列表/)) - page = search.replace(/^列表/, "") || 1 - else - type = "search" - - let list = messageMap[this.group_id] - - if (lodash.isEmpty(list)) { - await this.reply("暂无消息") - return - } - - let arr = [] - if (type == "list") - for (let [k, v] of messageMap[this.group_id]) - arr.push({ key: k, val: v, num: arr.length+1 }) - else - for (let [k, v] of messageMap[this.group_id]) - if (k.includes(search)) - arr.push({ key: k, val: v, num: arr.length+1 }) - - let count = arr.length - arr = arr.reverse() - - if (type == "list") - arr = this.pagination(page, pageSize, arr) - if (lodash.isEmpty(arr)) return false - - let msg = [] - let num = 0 - for (const i of arr) { - if (num >= page * pageSize) break - - let keyWord = i.key - if (!keyWord) continue - - msg.push(`${i.num}. ${keyWord}(${i.val.length})`) - num++ - } - msg = [msg.join("\n")] - - if (type == "list" && count > 100) - msg.push(`更多内容请翻页查看\n如:#消息列表${Number(page)+1}`) - - let title = `消息列表:第${page}页,共${count}条` - if (type == "search") - title = `消息${search}:共${count}条` - - return this.reply(await common.makeForwardMsg(this.e, msg, title)) - } - - /** 分页 */ - pagination(pageNo, pageSize, array) { - let offset = (pageNo-1) * pageSize - return offset+pageSize >= array.length ? 
array.slice(offset, array.length) : array.slice(offset, offset+pageSize) - } -} \ No newline at end of file diff --git a/spaces/Cyril666/my_abi/dataset.py b/spaces/Cyril666/my_abi/dataset.py deleted file mode 100644 index e424cb2134ba0d992515b2446302e1a758a3db66..0000000000000000000000000000000000000000 --- a/spaces/Cyril666/my_abi/dataset.py +++ /dev/null @@ -1,278 +0,0 @@ -import logging -import re - -import cv2 -import lmdb -import six -from fastai.vision import * -from torchvision import transforms - -from transforms import CVColorJitter, CVDeterioration, CVGeometry -from utils import CharsetMapper, onehot - - -class ImageDataset(Dataset): - "`ImageDataset` read data from LMDB database." - - def __init__(self, - path:PathOrStr, - is_training:bool=True, - img_h:int=32, - img_w:int=100, - max_length:int=25, - check_length:bool=True, - case_sensitive:bool=False, - charset_path:str='data/charset_36.txt', - convert_mode:str='RGB', - data_aug:bool=True, - deteriorate_ratio:float=0., - multiscales:bool=True, - one_hot_y:bool=True, - return_idx:bool=False, - return_raw:bool=False, - **kwargs): - self.path, self.name = Path(path), Path(path).name - assert self.path.is_dir() and self.path.exists(), f"{path} is not a valid directory." - self.convert_mode, self.check_length = convert_mode, check_length - self.img_h, self.img_w = img_h, img_w - self.max_length, self.one_hot_y = max_length, one_hot_y - self.return_idx, self.return_raw = return_idx, return_raw - self.case_sensitive, self.is_training = case_sensitive, is_training - self.data_aug, self.multiscales = data_aug, multiscales - self.charset = CharsetMapper(charset_path, max_length=max_length+1) - self.c = self.charset.num_classes - - self.env = lmdb.open(str(path), readonly=True, lock=False, readahead=False, meminit=False) - assert self.env, f'Cannot open LMDB dataset from {path}.' 
- with self.env.begin(write=False) as txn: - self.length = int(txn.get('num-samples'.encode())) - - if self.is_training and self.data_aug: - self.augment_tfs = transforms.Compose([ - CVGeometry(degrees=45, translate=(0.0, 0.0), scale=(0.5, 2.), shear=(45, 15), distortion=0.5, p=0.5), - CVDeterioration(var=20, degrees=6, factor=4, p=0.25), - CVColorJitter(brightness=0.5, contrast=0.5, saturation=0.5, hue=0.1, p=0.25) - ]) - self.totensor = transforms.ToTensor() - - def __len__(self): return self.length - - def _next_image(self, index): - next_index = random.randint(0, len(self) - 1) - return self.get(next_index) - - def _check_image(self, x, pixels=6): - if x.size[0] <= pixels or x.size[1] <= pixels: return False - else: return True - - def resize_multiscales(self, img, borderType=cv2.BORDER_CONSTANT): - def _resize_ratio(img, ratio, fix_h=True): - if ratio * self.img_w < self.img_h: - if fix_h: trg_h = self.img_h - else: trg_h = int(ratio * self.img_w) - trg_w = self.img_w - else: trg_h, trg_w = self.img_h, int(self.img_h / ratio) - img = cv2.resize(img, (trg_w, trg_h)) - pad_h, pad_w = (self.img_h - trg_h) / 2, (self.img_w - trg_w) / 2 - top, bottom = math.ceil(pad_h), math.floor(pad_h) - left, right = math.ceil(pad_w), math.floor(pad_w) - img = cv2.copyMakeBorder(img, top, bottom, left, right, borderType) - return img - - if self.is_training: - if random.random() < 0.5: - base, maxh, maxw = self.img_h, self.img_h, self.img_w - h, w = random.randint(base, maxh), random.randint(base, maxw) - return _resize_ratio(img, h/w) - else: return _resize_ratio(img, img.shape[0] / img.shape[1]) # keep aspect ratio - else: return _resize_ratio(img, img.shape[0] / img.shape[1]) # keep aspect ratio - - def resize(self, img): - if self.multiscales: return self.resize_multiscales(img, cv2.BORDER_REPLICATE) - else: return cv2.resize(img, (self.img_w, self.img_h)) - - def get(self, idx): - with self.env.begin(write=False) as txn: - image_key, label_key = f'image-{idx+1:09d}', f'label-{idx+1:09d}' - try: - label = str(txn.get(label_key.encode()), 'utf-8') # label - label = re.sub('[^0-9a-zA-Z]+', '', label) - if self.check_length and self.max_length > 0: - if len(label) > self.max_length or len(label) <= 0: - #logging.info(f'Long or short text image is found: {self.name}, {idx}, {label}, {len(label)}') - return self._next_image(idx) - label = label[:self.max_length] - - imgbuf = txn.get(image_key.encode()) # image - buf = six.BytesIO() - buf.write(imgbuf) - buf.seek(0) - with warnings.catch_warnings(): - warnings.simplefilter("ignore", UserWarning) # EXIF warning from TiffPlugin - image = PIL.Image.open(buf).convert(self.convert_mode) - if self.is_training and not self._check_image(image): - #logging.info(f'Invalid image is found: {self.name}, {idx}, {label}, {len(label)}') - return self._next_image(idx) - except: - import traceback - traceback.print_exc() - logging.info(f'Corrupted image is found: {self.name}, {idx}, {label}, {len(label)}') - return self._next_image(idx) - return image, label, idx - - def _process_training(self, image): - if self.data_aug: image = self.augment_tfs(image) - image = self.resize(np.array(image)) - return image - - def _process_test(self, image): - return self.resize(np.array(image)) # TODO:move is_training to here - - def __getitem__(self, idx): - image, text, idx_new = self.get(idx) - if not self.is_training: assert idx == idx_new, f'idx {idx} != idx_new {idx_new} during testing.' 
- - if self.is_training: image = self._process_training(image) - else: image = self._process_test(image) - if self.return_raw: return image, text - image = self.totensor(image) - - length = tensor(len(text) + 1).to(dtype=torch.long) # one for end token - label = self.charset.get_labels(text, case_sensitive=self.case_sensitive) - label = tensor(label).to(dtype=torch.long) - if self.one_hot_y: label = onehot(label, self.charset.num_classes) - - if self.return_idx: y = [label, length, idx_new] - else: y = [label, length] - return image, y - - -class TextDataset(Dataset): - def __init__(self, - path:PathOrStr, - delimiter:str='\t', - max_length:int=25, - charset_path:str='data/charset_36.txt', - case_sensitive=False, - one_hot_x=True, - one_hot_y=True, - is_training=True, - smooth_label=False, - smooth_factor=0.2, - use_sm=False, - **kwargs): - self.path = Path(path) - self.case_sensitive, self.use_sm = case_sensitive, use_sm - self.smooth_factor, self.smooth_label = smooth_factor, smooth_label - self.charset = CharsetMapper(charset_path, max_length=max_length+1) - self.one_hot_x, self.one_hot_y, self.is_training = one_hot_x, one_hot_y, is_training - if self.is_training and self.use_sm: self.sm = SpellingMutation(charset=self.charset) - - dtype = {'inp': str, 'gt': str} - self.df = pd.read_csv(self.path, dtype=dtype, delimiter=delimiter, na_filter=False) - self.inp_col, self.gt_col = 0, 1 - - def __len__(self): return len(self.df) - - def __getitem__(self, idx): - text_x = self.df.iloc[idx, self.inp_col] - text_x = re.sub('[^0-9a-zA-Z]+', '', text_x) - if not self.case_sensitive: text_x = text_x.lower() - if self.is_training and self.use_sm: text_x = self.sm(text_x) - - length_x = tensor(len(text_x) + 1).to(dtype=torch.long) # one for end token - label_x = self.charset.get_labels(text_x, case_sensitive=self.case_sensitive) - label_x = tensor(label_x) - if self.one_hot_x: - label_x = onehot(label_x, self.charset.num_classes) - if self.is_training and self.smooth_label: - label_x = torch.stack([self.prob_smooth_label(l) for l in label_x]) - x = [label_x, length_x] - - text_y = self.df.iloc[idx, self.gt_col] - text_y = re.sub('[^0-9a-zA-Z]+', '', text_y) - if not self.case_sensitive: text_y = text_y.lower() - length_y = tensor(len(text_y) + 1).to(dtype=torch.long) # one for end token - label_y = self.charset.get_labels(text_y, case_sensitive=self.case_sensitive) - label_y = tensor(label_y) - if self.one_hot_y: label_y = onehot(label_y, self.charset.num_classes) - y = [label_y, length_y] - - return x, y - - def prob_smooth_label(self, one_hot): - one_hot = one_hot.float() - delta = torch.rand([]) * self.smooth_factor - num_classes = len(one_hot) - noise = torch.rand(num_classes) - noise = noise / noise.sum() * delta - one_hot = one_hot * (1 - delta) + noise - return one_hot - - -class SpellingMutation(object): - def __init__(self, pn0=0.7, pn1=0.85, pn2=0.95, pt0=0.7, pt1=0.85, charset=None): - """ - Args: - pn0: the prob of not modifying characters is (pn0) - pn1: the prob of modifying one characters is (pn1 - pn0) - pn2: the prob of modifying two characters is (pn2 - pn1), - and three (1 - pn2) - pt0: the prob of replacing operation is pt0. 
- pt1: the prob of inserting operation is (pt1 - pt0), - and deleting operation is (1 - pt1) - """ - super().__init__() - self.pn0, self.pn1, self.pn2 = pn0, pn1, pn2 - self.pt0, self.pt1 = pt0, pt1 - self.charset = charset - logging.info(f'the probs: pn0={self.pn0}, pn1={self.pn1} ' + - f'pn2={self.pn2}, pt0={self.pt0}, pt1={self.pt1}') - - def is_digit(self, text, ratio=0.5): - length = max(len(text), 1) - digit_num = sum([t in self.charset.digits for t in text]) - if digit_num / length < ratio: return False - return True - - def is_unk_char(self, char): - # return char == self.charset.unk_char - return (char not in self.charset.digits) and (char not in self.charset.alphabets) - - def get_num_to_modify(self, length): - prob = random.random() - if prob < self.pn0: num_to_modify = 0 - elif prob < self.pn1: num_to_modify = 1 - elif prob < self.pn2: num_to_modify = 2 - else: num_to_modify = 3 - - if length <= 1: num_to_modify = 0 - elif length >= 2 and length <= 4: num_to_modify = min(num_to_modify, 1) - else: num_to_modify = min(num_to_modify, length // 2) # smaller than length // 2 - return num_to_modify - - def __call__(self, text, debug=False): - if self.is_digit(text): return text - length = len(text) - num_to_modify = self.get_num_to_modify(length) - if num_to_modify <= 0: return text - - chars = [] - index = np.arange(0, length) - random.shuffle(index) - index = index[: num_to_modify] - if debug: self.index = index - for i, t in enumerate(text): - if i not in index: chars.append(t) - elif self.is_unk_char(t): chars.append(t) - else: - prob = random.random() - if prob < self.pt0: # replace - chars.append(random.choice(self.charset.alphabets)) - elif prob < self.pt1: # insert - chars.append(random.choice(self.charset.alphabets)) - chars.append(t) - else: # delete - continue - new_text = ''.join(chars[: self.charset.max_length-1]) - return new_text if len(new_text) >= 1 else text \ No newline at end of file diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/mix.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/mix.py deleted file mode 100644 index caf2c68b835101c4f3d18d3d53fbb1b8494b3dba..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/mix.py +++ /dev/null @@ -1,129 +0,0 @@ -""" -Ways to transform interfaces to produce new interfaces -""" -import asyncio -import warnings - -from gradio_client.documentation import document, set_documentation_group - -import gradio - -set_documentation_group("mix_interface") - - -@document() -class Parallel(gradio.Interface): - """ - Creates a new Interface consisting of multiple Interfaces in parallel (comparing their outputs). - The Interfaces to put in Parallel must share the same input components (but can have different output components). - - Demos: interface_parallel, interface_parallel_load - Guides: advanced-interface-features - """ - - def __init__(self, *interfaces: gradio.Interface, **options): - """ - Parameters: - interfaces: any number of Interface objects that are to be compared in parallel - options: additional kwargs that are passed into the new Interface object to customize it - Returns: - an Interface object comparing the given models - """ - outputs = [] - - for interface in interfaces: - if not (isinstance(interface, gradio.Interface)): - warnings.warn( - "Parallel requires all inputs to be of type Interface. " - "May not work as expected." 
- ) - outputs.extend(interface.output_components) - - async def parallel_fn(*args): - return_values_with_durations = await asyncio.gather( - *[interface.call_function(0, list(args)) for interface in interfaces] - ) - return_values = [rv["prediction"] for rv in return_values_with_durations] - combined_list = [] - for interface, return_value in zip(interfaces, return_values): - if len(interface.output_components) == 1: - combined_list.append(return_value) - else: - combined_list.extend(return_value) - if len(outputs) == 1: - return combined_list[0] - return combined_list - - parallel_fn.__name__ = " | ".join([io.__name__ for io in interfaces]) - - kwargs = { - "fn": parallel_fn, - "inputs": interfaces[0].input_components, - "outputs": outputs, - } - kwargs.update(options) - super().__init__(**kwargs) - - -@document() -class Series(gradio.Interface): - """ - Creates a new Interface from multiple Interfaces in series (the output of one is fed as the input to the next, - and so the input and output components must agree between the interfaces). - - Demos: interface_series, interface_series_load - Guides: advanced-interface-features - """ - - def __init__(self, *interfaces: gradio.Interface, **options): - """ - Parameters: - interfaces: any number of Interface objects that are to be connected in series - options: additional kwargs that are passed into the new Interface object to customize it - Returns: - an Interface object connecting the given models - """ - - async def connected_fn(*data): - for idx, interface in enumerate(interfaces): - # skip preprocessing for first interface since the Series interface will include it - if idx > 0 and not (interface.api_mode): - data = [ - input_component.preprocess(data[i]) - for i, input_component in enumerate(interface.input_components) - ] - - # run all of predictions sequentially - data = (await interface.call_function(0, list(data)))["prediction"] - if len(interface.output_components) == 1: - data = [data] - - # skip postprocessing for final interface since the Series interface will include it - if idx < len(interfaces) - 1 and not (interface.api_mode): - data = [ - output_component.postprocess(data[i]) - for i, output_component in enumerate( - interface.output_components - ) - ] - - if len(interface.output_components) == 1: # type: ignore - return data[0] - return data - - for interface in interfaces: - if not (isinstance(interface, gradio.Interface)): - warnings.warn( - "Series requires all inputs to be of type Interface. May " - "not work as expected." 
- ) - connected_fn.__name__ = " => ".join([io.__name__ for io in interfaces]) - - kwargs = { - "fn": connected_fn, - "inputs": interfaces[0].input_components, - "outputs": interfaces[-1].output_components, - "_api_mode": interfaces[0].api_mode, # TODO: set api_mode per-interface - } - kwargs.update(options) - super().__init__(**kwargs) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-1e03cd90.css b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-1e03cd90.css deleted file mode 100644 index 6692555db405e6eb83d0671b1ef9922ee30770d3..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-1e03cd90.css +++ /dev/null @@ -1 +0,0 @@ -.preview.svelte-w0jac3.svelte-w0jac3{display:flex;position:absolute;inset:0;flex-direction:column;z-index:var(--layer-2);backdrop-filter:blur(8px);background:var(--background-fill-primary);height:var(--size-full)}.fixed-height.svelte-w0jac3.svelte-w0jac3{min-height:var(--size-80);max-height:55vh}@media (min-width: 1280px){.fixed-height.svelte-w0jac3.svelte-w0jac3{min-height:450px}}.preview.svelte-w0jac3 img.svelte-w0jac3{width:var(--size-full);height:calc(var(--size-full) - 60px);object-fit:contain}.preview.svelte-w0jac3 img.with-caption.svelte-w0jac3{height:calc(var(--size-full) - 80px)}.caption.svelte-w0jac3.svelte-w0jac3{padding:var(--size-2) var(--size-3);overflow:hidden;color:var(--block-label-text-color);font-weight:var(--weight-semibold);text-align:center;text-overflow:ellipsis;white-space:nowrap}.thumbnails.svelte-w0jac3.svelte-w0jac3{display:flex;position:absolute;bottom:0;justify-content:center;align-items:center;gap:var(--spacing-lg);width:var(--size-full);height:var(--size-14);overflow-x:scroll}.thumbnail-item.svelte-w0jac3.svelte-w0jac3{--ring-color:transparent;position:relative;box-shadow:0 0 0 2px var(--ring-color),var(--shadow-drop);border:1px solid var(--border-color-primary);border-radius:var(--button-small-radius);background:var(--background-fill-secondary);aspect-ratio:var(--ratio-square);width:var(--size-full);height:var(--size-full);overflow:clip}.thumbnail-item.svelte-w0jac3.svelte-w0jac3:hover{--ring-color:var(--color-accent);filter:brightness(1.1)}.thumbnail-item.selected.svelte-w0jac3.svelte-w0jac3{--ring-color:var(--color-accent)}.thumbnail-small.svelte-w0jac3.svelte-w0jac3{flex:none;transform:scale(.9);transition:75ms;width:var(--size-9);height:var(--size-9)}.thumbnail-small.selected.svelte-w0jac3.svelte-w0jac3{--ring-color:var(--color-accent);transform:scale(1);border-color:var(--color-accent)}.thumbnail-small.svelte-w0jac3>img.svelte-w0jac3{width:var(--size-full);height:var(--size-full);overflow:hidden;object-fit:var(--object-fit)}.grid-wrap.svelte-w0jac3.svelte-w0jac3{position:relative;padding:var(--size-2);height:var(--size-full);overflow-y:auto}.grid-container.svelte-w0jac3.svelte-w0jac3{display:grid;position:relative;grid-template-rows:var(--grid-rows);grid-template-columns:var(--grid-cols);gap:var(--spacing-lg)}@media (min-width: 640px){.grid-container.svelte-w0jac3.svelte-w0jac3{grid-template-columns:var(--sm-grid-cols)}}@media (min-width: 768px){.grid-container.svelte-w0jac3.svelte-w0jac3{grid-template-columns:var(--md-grid-cols)}}@media (min-width: 1024px){.grid-container.svelte-w0jac3.svelte-w0jac3{grid-template-columns:var(--lg-grid-cols)}}@media (min-width: 
1280px){.grid-container.svelte-w0jac3.svelte-w0jac3{grid-template-columns:var(--xl-grid-cols)}}@media (min-width: 1536px){.grid-container.svelte-w0jac3.svelte-w0jac3{grid-template-columns:var(--2xl-grid-cols)}}.thumbnail-lg.svelte-w0jac3>img.svelte-w0jac3{width:var(--size-full);height:var(--size-full);overflow:hidden;object-fit:var(--object-fit)}.thumbnail-lg.svelte-w0jac3:hover .caption-label.svelte-w0jac3{opacity:.5}.caption-label.svelte-w0jac3.svelte-w0jac3{position:absolute;right:var(--block-label-margin);bottom:var(--block-label-margin);z-index:var(--layer-1);border-top:1px solid var(--border-color-primary);border-left:1px solid var(--border-color-primary);border-radius:var(--block-label-radius);background:var(--background-fill-secondary);padding:var(--block-label-padding);max-width:80%;overflow:hidden;font-size:var(--block-label-text-size);text-align:left;text-overflow:ellipsis;white-space:nowrap}.icon-button.svelte-w0jac3.svelte-w0jac3{position:absolute;top:0;right:0;z-index:var(--layer-1)} diff --git a/spaces/DaleChen/AutoGPT/autogpt/json_utils/__init__.py b/spaces/DaleChen/AutoGPT/autogpt/json_utils/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Datasculptor/DescriptionGPT/detic/modeling/roi_heads/detic_fast_rcnn.py b/spaces/Datasculptor/DescriptionGPT/detic/modeling/roi_heads/detic_fast_rcnn.py deleted file mode 100644 index 186822dd8f67ef9d991ee79101b3bf1243a722a5..0000000000000000000000000000000000000000 --- a/spaces/Datasculptor/DescriptionGPT/detic/modeling/roi_heads/detic_fast_rcnn.py +++ /dev/null @@ -1,595 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import logging -import math -import json -import numpy as np -from typing import Dict, Union -import torch -from fvcore.nn import giou_loss, smooth_l1_loss -from torch import nn -from torch.nn import functional as F -import fvcore.nn.weight_init as weight_init -import detectron2.utils.comm as comm -from detectron2.config import configurable -from detectron2.layers import ShapeSpec, batched_nms, cat, cross_entropy, nonzero_tuple -from detectron2.structures import Boxes, Instances -from detectron2.utils.events import get_event_storage -from detectron2.modeling.box_regression import Box2BoxTransform -from detectron2.modeling.roi_heads.fast_rcnn import FastRCNNOutputLayers -from detectron2.modeling.roi_heads.fast_rcnn import fast_rcnn_inference -from detectron2.modeling.roi_heads.fast_rcnn import _log_classification_stats - -from torch.cuda.amp import autocast -from ..utils import load_class_freq, get_fed_loss_inds -from .zero_shot_classifier import ZeroShotClassifier - -__all__ = ["DeticFastRCNNOutputLayers"] - - -class DeticFastRCNNOutputLayers(FastRCNNOutputLayers): - @configurable - def __init__( - self, - input_shape: ShapeSpec, - *, - mult_proposal_score=False, - cls_score=None, - sync_caption_batch = False, - use_sigmoid_ce = False, - use_fed_loss = False, - ignore_zero_cats = False, - fed_loss_num_cat = 50, - dynamic_classifier = False, - image_label_loss = '', - use_zeroshot_cls = False, - image_loss_weight = 0.1, - with_softmax_prop = False, - caption_weight = 1.0, - neg_cap_weight = 1.0, - add_image_box = False, - debug = False, - prior_prob = 0.01, - cat_freq_path = '', - fed_loss_freq_weight = 0.5, - softmax_weak_loss = False, - **kwargs, - ): - super().__init__( - input_shape=input_shape, - **kwargs, - ) - self.mult_proposal_score = mult_proposal_score - self.sync_caption_batch = sync_caption_batch - 
self.use_sigmoid_ce = use_sigmoid_ce - self.use_fed_loss = use_fed_loss - self.ignore_zero_cats = ignore_zero_cats - self.fed_loss_num_cat = fed_loss_num_cat - self.dynamic_classifier = dynamic_classifier - self.image_label_loss = image_label_loss - self.use_zeroshot_cls = use_zeroshot_cls - self.image_loss_weight = image_loss_weight - self.with_softmax_prop = with_softmax_prop - self.caption_weight = caption_weight - self.neg_cap_weight = neg_cap_weight - self.add_image_box = add_image_box - self.softmax_weak_loss = softmax_weak_loss - self.debug = debug - - if softmax_weak_loss: - assert image_label_loss in ['max_size'] - - if self.use_sigmoid_ce: - bias_value = -math.log((1 - prior_prob) / prior_prob) - nn.init.constant_(self.cls_score.bias, bias_value) - - if self.use_fed_loss or self.ignore_zero_cats: - freq_weight = load_class_freq(cat_freq_path, fed_loss_freq_weight) - self.register_buffer('freq_weight', freq_weight) - else: - self.freq_weight = None - - if self.use_fed_loss and len(self.freq_weight) < self.num_classes: - # assert self.num_classes == 11493 - print('Extending federated loss weight') - self.freq_weight = torch.cat( - [self.freq_weight, - self.freq_weight.new_zeros( - self.num_classes - len(self.freq_weight))] - ) - - assert (not self.dynamic_classifier) or (not self.use_fed_loss) - input_size = input_shape.channels * \ - (input_shape.width or 1) * (input_shape.height or 1) - - if self.use_zeroshot_cls: - del self.cls_score - del self.bbox_pred - assert cls_score is not None - self.cls_score = cls_score - self.bbox_pred = nn.Sequential( - nn.Linear(input_size, input_size), - nn.ReLU(inplace=True), - nn.Linear(input_size, 4) - ) - weight_init.c2_xavier_fill(self.bbox_pred[0]) - nn.init.normal_(self.bbox_pred[-1].weight, std=0.001) - nn.init.constant_(self.bbox_pred[-1].bias, 0) - - if self.with_softmax_prop: - self.prop_score = nn.Sequential( - nn.Linear(input_size, input_size), - nn.ReLU(inplace=True), - nn.Linear(input_size, self.num_classes + 1), - ) - weight_init.c2_xavier_fill(self.prop_score[0]) - nn.init.normal_(self.prop_score[-1].weight, mean=0, std=0.001) - nn.init.constant_(self.prop_score[-1].bias, 0) - - - @classmethod - def from_config(cls, cfg, input_shape): - ret = super().from_config(cfg, input_shape) - ret.update({ - 'mult_proposal_score': cfg.MODEL.ROI_BOX_HEAD.MULT_PROPOSAL_SCORE, - 'sync_caption_batch': cfg.MODEL.SYNC_CAPTION_BATCH, - 'use_sigmoid_ce': cfg.MODEL.ROI_BOX_HEAD.USE_SIGMOID_CE, - 'use_fed_loss': cfg.MODEL.ROI_BOX_HEAD.USE_FED_LOSS, - 'ignore_zero_cats': cfg.MODEL.ROI_BOX_HEAD.IGNORE_ZERO_CATS, - 'fed_loss_num_cat': cfg.MODEL.ROI_BOX_HEAD.FED_LOSS_NUM_CAT, - 'dynamic_classifier': cfg.MODEL.DYNAMIC_CLASSIFIER, - 'image_label_loss': cfg.MODEL.ROI_BOX_HEAD.IMAGE_LABEL_LOSS, - 'use_zeroshot_cls': cfg.MODEL.ROI_BOX_HEAD.USE_ZEROSHOT_CLS, - 'image_loss_weight': cfg.MODEL.ROI_BOX_HEAD.IMAGE_LOSS_WEIGHT, - 'with_softmax_prop': cfg.MODEL.ROI_BOX_HEAD.WITH_SOFTMAX_PROP, - 'caption_weight': cfg.MODEL.ROI_BOX_HEAD.CAPTION_WEIGHT, - 'neg_cap_weight': cfg.MODEL.ROI_BOX_HEAD.NEG_CAP_WEIGHT, - 'add_image_box': cfg.MODEL.ROI_BOX_HEAD.ADD_IMAGE_BOX, - 'debug': cfg.DEBUG or cfg.SAVE_DEBUG or cfg.IS_DEBUG, - 'prior_prob': cfg.MODEL.ROI_BOX_HEAD.PRIOR_PROB, - 'cat_freq_path': cfg.MODEL.ROI_BOX_HEAD.CAT_FREQ_PATH, - 'fed_loss_freq_weight': cfg.MODEL.ROI_BOX_HEAD.FED_LOSS_FREQ_WEIGHT, - 'softmax_weak_loss': cfg.MODEL.ROI_BOX_HEAD.SOFTMAX_WEAK_LOSS, - }) - if ret['use_zeroshot_cls']: - ret['cls_score'] = ZeroShotClassifier(cfg, input_shape) - return ret - - def 
losses(self, predictions, proposals, \ - use_advanced_loss=True, - classifier_info=(None,None,None)): - """ - enable advanced loss - """ - scores, proposal_deltas = predictions - gt_classes = ( - cat([p.gt_classes for p in proposals], dim=0) if len(proposals) else torch.empty(0) - ) - num_classes = self.num_classes - if self.dynamic_classifier: - _, cls_id_map = classifier_info[1] - gt_classes = cls_id_map[gt_classes] - num_classes = scores.shape[1] - 1 - assert cls_id_map[self.num_classes] == num_classes - _log_classification_stats(scores, gt_classes) - - if len(proposals): - proposal_boxes = cat([p.proposal_boxes.tensor for p in proposals], dim=0) # Nx4 - assert not proposal_boxes.requires_grad, "Proposals should not require gradients!" - gt_boxes = cat( - [(p.gt_boxes if p.has("gt_boxes") else p.proposal_boxes).tensor for p in proposals], - dim=0, - ) - else: - proposal_boxes = gt_boxes = torch.empty((0, 4), device=proposal_deltas.device) - - if self.use_sigmoid_ce: - loss_cls = self.sigmoid_cross_entropy_loss(scores, gt_classes) - else: - loss_cls = self.softmax_cross_entropy_loss(scores, gt_classes) - return { - "loss_cls": loss_cls, - "loss_box_reg": self.box_reg_loss( - proposal_boxes, gt_boxes, proposal_deltas, gt_classes, - num_classes=num_classes) - } - - - def sigmoid_cross_entropy_loss(self, pred_class_logits, gt_classes): - if pred_class_logits.numel() == 0: - return pred_class_logits.new_zeros([1])[0] # This is more robust than .sum() * 0. - - B = pred_class_logits.shape[0] - C = pred_class_logits.shape[1] - 1 - - target = pred_class_logits.new_zeros(B, C + 1) - target[range(len(gt_classes)), gt_classes] = 1 # B x (C + 1) - target = target[:, :C] # B x C - - weight = 1 - - if self.use_fed_loss and (self.freq_weight is not None): # fedloss - appeared = get_fed_loss_inds( - gt_classes, - num_sample_cats=self.fed_loss_num_cat, - C=C, - weight=self.freq_weight) - appeared_mask = appeared.new_zeros(C + 1) - appeared_mask[appeared] = 1 # C + 1 - appeared_mask = appeared_mask[:C] - fed_w = appeared_mask.view(1, C).expand(B, C) - weight = weight * fed_w.float() - if self.ignore_zero_cats and (self.freq_weight is not None): - w = (self.freq_weight.view(-1) > 1e-4).float() - weight = weight * w.view(1, C).expand(B, C) - # import pdb; pdb.set_trace() - - cls_loss = F.binary_cross_entropy_with_logits( - pred_class_logits[:, :-1], target, reduction='none') # B x C - loss = torch.sum(cls_loss * weight) / B - return loss - - - def softmax_cross_entropy_loss(self, pred_class_logits, gt_classes): - """ - change _no_instance handling - """ - if pred_class_logits.numel() == 0: - return pred_class_logits.new_zeros([1])[0] - - if self.ignore_zero_cats and (self.freq_weight is not None): - zero_weight = torch.cat([ - (self.freq_weight.view(-1) > 1e-4).float(), - self.freq_weight.new_ones(1)]) # C + 1 - loss = F.cross_entropy( - pred_class_logits, gt_classes, - weight=zero_weight, reduction="mean") - elif self.use_fed_loss and (self.freq_weight is not None): # fedloss - C = pred_class_logits.shape[1] - 1 - appeared = get_fed_loss_inds( - gt_classes, - num_sample_cats=self.fed_loss_num_cat, - C=C, - weight=self.freq_weight) - appeared_mask = appeared.new_zeros(C + 1).float() - appeared_mask[appeared] = 1. # C + 1 - appeared_mask[C] = 1. 
- loss = F.cross_entropy( - pred_class_logits, gt_classes, - weight=appeared_mask, reduction="mean") - else: - loss = F.cross_entropy( - pred_class_logits, gt_classes, reduction="mean") - return loss - - - def box_reg_loss( - self, proposal_boxes, gt_boxes, pred_deltas, gt_classes, - num_classes=-1): - """ - Allow custom background index - """ - num_classes = num_classes if num_classes > 0 else self.num_classes - box_dim = proposal_boxes.shape[1] # 4 or 5 - fg_inds = nonzero_tuple((gt_classes >= 0) & (gt_classes < num_classes))[0] - if pred_deltas.shape[1] == box_dim: # cls-agnostic regression - fg_pred_deltas = pred_deltas[fg_inds] - else: - fg_pred_deltas = pred_deltas.view(-1, self.num_classes, box_dim)[ - fg_inds, gt_classes[fg_inds] - ] - - if self.box_reg_loss_type == "smooth_l1": - gt_pred_deltas = self.box2box_transform.get_deltas( - proposal_boxes[fg_inds], - gt_boxes[fg_inds], - ) - loss_box_reg = smooth_l1_loss( - fg_pred_deltas, gt_pred_deltas, self.smooth_l1_beta, reduction="sum" - ) - elif self.box_reg_loss_type == "giou": - fg_pred_boxes = self.box2box_transform.apply_deltas( - fg_pred_deltas, proposal_boxes[fg_inds] - ) - loss_box_reg = giou_loss(fg_pred_boxes, gt_boxes[fg_inds], reduction="sum") - else: - raise ValueError(f"Invalid bbox reg loss type '{self.box_reg_loss_type}'") - return loss_box_reg / max(gt_classes.numel(), 1.0) - - def inference(self, predictions, proposals): - """ - enable use proposal boxes - """ - predictions = (predictions[0], predictions[1]) - boxes = self.predict_boxes(predictions, proposals) - scores = self.predict_probs(predictions, proposals) - if self.mult_proposal_score: - proposal_scores = [p.get('objectness_logits') for p in proposals] - scores = [(s * ps[:, None]) ** 0.5 \ - for s, ps in zip(scores, proposal_scores)] - image_shapes = [x.image_size for x in proposals] - return fast_rcnn_inference( - boxes, - scores, - image_shapes, - self.test_score_thresh, - self.test_nms_thresh, - self.test_topk_per_image, - ) - - - def predict_probs(self, predictions, proposals): - """ - support sigmoid - """ - # scores, _ = predictions - scores = predictions[0] - num_inst_per_image = [len(p) for p in proposals] - if self.use_sigmoid_ce: - probs = scores.sigmoid() - else: - probs = F.softmax(scores, dim=-1) - return probs.split(num_inst_per_image, dim=0) - - - def image_label_losses(self, predictions, proposals, image_labels, \ - classifier_info=(None,None,None), ann_type='image'): - ''' - Inputs: - scores: N x (C + 1) - image_labels B x 1 - ''' - num_inst_per_image = [len(p) for p in proposals] - scores = predictions[0] - scores = scores.split(num_inst_per_image, dim=0) # B x n x (C + 1) - if self.with_softmax_prop: - prop_scores = predictions[2].split(num_inst_per_image, dim=0) - else: - prop_scores = [None for _ in num_inst_per_image] - B = len(scores) - img_box_count = 0 - select_size_count = 0 - select_x_count = 0 - select_y_count = 0 - max_score_count = 0 - storage = get_event_storage() - loss = scores[0].new_zeros([1])[0] - caption_loss = scores[0].new_zeros([1])[0] - for idx, (score, labels, prop_score, p) in enumerate(zip( - scores, image_labels, prop_scores, proposals)): - if score.shape[0] == 0: - loss += score.new_zeros([1])[0] - continue - if 'caption' in ann_type: - score, caption_loss_img = self._caption_loss( - score, classifier_info, idx, B) - caption_loss += self.caption_weight * caption_loss_img - if ann_type == 'caption': - continue - - if self.debug: - p.selected = score.new_zeros( - (len(p),), dtype=torch.long) - 1 - for i_l, label 
in enumerate(labels): - if self.dynamic_classifier: - if idx == 0 and i_l == 0 and comm.is_main_process(): - storage.put_scalar('stats_label', label) - label = classifier_info[1][1][label] - assert label < score.shape[1] - if self.image_label_loss in ['wsod', 'wsddn']: - loss_i, ind = self._wsddn_loss(score, prop_score, label) - elif self.image_label_loss == 'max_score': - loss_i, ind = self._max_score_loss(score, label) - elif self.image_label_loss == 'max_size': - loss_i, ind = self._max_size_loss(score, label, p) - elif self.image_label_loss == 'first': - loss_i, ind = self._first_loss(score, label) - elif self.image_label_loss == 'image': - loss_i, ind = self._image_loss(score, label) - elif self.image_label_loss == 'min_loss': - loss_i, ind = self._min_loss_loss(score, label) - else: - assert 0 - loss += loss_i / len(labels) - if type(ind) == type([]): - img_box_count = sum(ind) / len(ind) - if self.debug: - for ind_i in ind: - p.selected[ind_i] = label - else: - img_box_count = ind - select_size_count = p[ind].proposal_boxes.area() / \ - (p.image_size[0] * p.image_size[1]) - max_score_count = score[ind, label].sigmoid() - select_x_count = (p.proposal_boxes.tensor[ind, 0] + \ - p.proposal_boxes.tensor[ind, 2]) / 2 / p.image_size[1] - select_y_count = (p.proposal_boxes.tensor[ind, 1] + \ - p.proposal_boxes.tensor[ind, 3]) / 2 / p.image_size[0] - if self.debug: - p.selected[ind] = label - - loss = loss / B - storage.put_scalar('stats_l_image', loss.item()) - if 'caption' in ann_type: - caption_loss = caption_loss / B - loss = loss + caption_loss - storage.put_scalar('stats_l_caption', caption_loss.item()) - if comm.is_main_process(): - storage.put_scalar('pool_stats', img_box_count) - storage.put_scalar('stats_select_size', select_size_count) - storage.put_scalar('stats_select_x', select_x_count) - storage.put_scalar('stats_select_y', select_y_count) - storage.put_scalar('stats_max_label_score', max_score_count) - - return { - 'image_loss': loss * self.image_loss_weight, - 'loss_cls': score.new_zeros([1])[0], - 'loss_box_reg': score.new_zeros([1])[0]} - - - def forward(self, x, classifier_info=(None,None,None)): - """ - enable classifier_info - """ - if x.dim() > 2: - x = torch.flatten(x, start_dim=1) - scores = [] - - if classifier_info[0] is not None: - cls_scores = self.cls_score(x, classifier=classifier_info[0]) - scores.append(cls_scores) - else: - cls_scores = self.cls_score(x) - scores.append(cls_scores) - - if classifier_info[2] is not None: - cap_cls = classifier_info[2] - if self.sync_caption_batch: - caption_scores = self.cls_score(x, classifier=cap_cls[:, :-1]) - else: - caption_scores = self.cls_score(x, classifier=cap_cls) - scores.append(caption_scores) - scores = torch.cat(scores, dim=1) # B x C' or B x N or B x (C'+N) - - proposal_deltas = self.bbox_pred(x) - if self.with_softmax_prop: - prop_score = self.prop_score(x) - return scores, proposal_deltas, prop_score - else: - return scores, proposal_deltas - - - def _caption_loss(self, score, classifier_info, idx, B): - assert (classifier_info[2] is not None) - assert self.add_image_box - cls_and_cap_num = score.shape[1] - cap_num = classifier_info[2].shape[0] - score, caption_score = score.split( - [cls_and_cap_num - cap_num, cap_num], dim=1) - # n x (C + 1), n x B - caption_score = caption_score[-1:] # 1 x B # -1: image level box - caption_target = caption_score.new_zeros( - caption_score.shape) # 1 x B or 1 x MB, M: num machines - if self.sync_caption_batch: - # caption_target: 1 x MB - rank = comm.get_rank() - 
global_idx = B * rank + idx - assert (classifier_info[2][ - global_idx, -1] - rank) ** 2 < 1e-8, \ - '{} {} {} {} {}'.format( - rank, global_idx, - classifier_info[2][global_idx, -1], - classifier_info[2].shape, - classifier_info[2][:, -1]) - caption_target[:, global_idx] = 1. - else: - assert caption_score.shape[1] == B - caption_target[:, idx] = 1. - caption_loss_img = F.binary_cross_entropy_with_logits( - caption_score, caption_target, reduction='none') - if self.sync_caption_batch: - fg_mask = (caption_target > 0.5).float() - assert (fg_mask.sum().item() - 1.) ** 2 < 1e-8, '{} {}'.format( - fg_mask.shape, fg_mask) - pos_loss = (caption_loss_img * fg_mask).sum() - neg_loss = (caption_loss_img * (1. - fg_mask)).sum() - caption_loss_img = pos_loss + self.neg_cap_weight * neg_loss - else: - caption_loss_img = caption_loss_img.sum() - return score, caption_loss_img - - - def _wsddn_loss(self, score, prop_score, label): - assert prop_score is not None - loss = 0 - final_score = score.sigmoid() * \ - F.softmax(prop_score, dim=0) # B x (C + 1) - img_score = torch.clamp( - torch.sum(final_score, dim=0), - min=1e-10, max=1-1e-10) # (C + 1) - target = img_score.new_zeros(img_score.shape) # (C + 1) - target[label] = 1. - loss += F.binary_cross_entropy(img_score, target) - ind = final_score[:, label].argmax() - return loss, ind - - - def _max_score_loss(self, score, label): - loss = 0 - target = score.new_zeros(score.shape[1]) - target[label] = 1. - ind = score[:, label].argmax().item() - loss += F.binary_cross_entropy_with_logits( - score[ind], target, reduction='sum') - return loss, ind - - - def _min_loss_loss(self, score, label): - loss = 0 - target = score.new_zeros(score.shape) - target[:, label] = 1. - with torch.no_grad(): - x = F.binary_cross_entropy_with_logits( - score, target, reduction='none').sum(dim=1) # n - ind = x.argmin().item() - loss += F.binary_cross_entropy_with_logits( - score[ind], target[0], reduction='sum') - return loss, ind - - - def _first_loss(self, score, label): - loss = 0 - target = score.new_zeros(score.shape[1]) - target[label] = 1. - ind = 0 - loss += F.binary_cross_entropy_with_logits( - score[ind], target, reduction='sum') - return loss, ind - - - def _image_loss(self, score, label): - assert self.add_image_box - target = score.new_zeros(score.shape[1]) - target[label] = 1. - ind = score.shape[0] - 1 - loss = F.binary_cross_entropy_with_logits( - score[ind], target, reduction='sum') - return loss, ind - - - def _max_size_loss(self, score, label, p): - loss = 0 - target = score.new_zeros(score.shape[1]) - target[label] = 1. 
- sizes = p.proposal_boxes.area() - ind = sizes[:-1].argmax().item() if len(sizes) > 1 else 0 - if self.softmax_weak_loss: - loss += F.cross_entropy( - score[ind:ind+1], - score.new_tensor(label, dtype=torch.long).view(1), - reduction='sum') - else: - loss += F.binary_cross_entropy_with_logits( - score[ind], target, reduction='sum') - return loss, ind - - - -def put_label_distribution(storage, hist_name, hist_counts, num_classes): - """ - """ - ht_min, ht_max = 0, num_classes - hist_edges = torch.linspace( - start=ht_min, end=ht_max, steps=num_classes + 1, dtype=torch.float32) - - hist_params = dict( - tag=hist_name, - min=ht_min, - max=ht_max, - num=float(hist_counts.sum()), - sum=float((hist_counts * torch.arange(len(hist_counts))).sum()), - sum_squares=float(((hist_counts * torch.arange(len(hist_counts))) ** 2).sum()), - bucket_limits=hist_edges[1:].tolist(), - bucket_counts=hist_counts.tolist(), - global_step=storage._iter, - ) - storage._histograms.append(hist_params) \ No newline at end of file diff --git a/spaces/Datasculptor/StyleGAN-NADA/e4e/models/encoders/__init__.py b/spaces/Datasculptor/StyleGAN-NADA/e4e/models/encoders/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Datasculptor/StyleGAN-NADA/styleclip/styleclip_global.py b/spaces/Datasculptor/StyleGAN-NADA/styleclip/styleclip_global.py deleted file mode 100644 index 96fa7569ebd51a5e6c2deddb57ccceb4f4376904..0000000000000000000000000000000000000000 --- a/spaces/Datasculptor/StyleGAN-NADA/styleclip/styleclip_global.py +++ /dev/null @@ -1,181 +0,0 @@ -''' -Code adapted from Stitch it in Time by Tzaban et al. -https://github.com/rotemtzaban/STIT -''' - - -import numpy as np -import torch -from tqdm import tqdm -from pathlib import Path -import os - -import clip - -imagenet_templates = [ - 'a bad photo of a {}.', - 'a photo of many {}.', - 'a sculpture of a {}.', - 'a photo of the hard to see {}.', - 'a low resolution photo of the {}.', - 'a rendering of a {}.', - 'graffiti of a {}.', - 'a bad photo of the {}.', - 'a cropped photo of the {}.', - 'a tattoo of a {}.', - 'the embroidered {}.', - 'a photo of a hard to see {}.', - 'a bright photo of a {}.', - 'a photo of a clean {}.', - 'a photo of a dirty {}.', - 'a dark photo of the {}.', - 'a drawing of a {}.', - 'a photo of my {}.', - 'the plastic {}.', - 'a photo of the cool {}.', - 'a close-up photo of a {}.', - 'a black and white photo of the {}.', - 'a painting of the {}.', - 'a painting of a {}.', - 'a pixelated photo of the {}.', - 'a sculpture of the {}.', - 'a bright photo of the {}.', - 'a cropped photo of a {}.', - 'a plastic {}.', - 'a photo of the dirty {}.', - 'a jpeg corrupted photo of a {}.', - 'a blurry photo of the {}.', - 'a photo of the {}.', - 'a good photo of the {}.', - 'a rendering of the {}.', - 'a {} in a video game.', - 'a photo of one {}.', - 'a doodle of a {}.', - 'a close-up photo of the {}.', - 'a photo of a {}.', - 'the origami {}.', - 'the {} in a video game.', - 'a sketch of a {}.', - 'a doodle of the {}.', - 'a origami {}.', - 'a low resolution photo of a {}.', - 'the toy {}.', - 'a rendition of the {}.', - 'a photo of the clean {}.', - 'a photo of a large {}.', - 'a rendition of a {}.', - 'a photo of a nice {}.', - 'a photo of a weird {}.', - 'a blurry photo of a {}.', - 'a cartoon {}.', - 'art of a {}.', - 'a sketch of the {}.', - 'a embroidered {}.', - 'a pixelated photo of a {}.', - 'itap of the {}.', - 'a jpeg corrupted photo of the {}.', - 'a 
good photo of a {}.', - 'a plushie {}.', - 'a photo of the nice {}.', - 'a photo of the small {}.', - 'a photo of the weird {}.', - 'the cartoon {}.', - 'art of the {}.', - 'a drawing of the {}.', - 'a photo of the large {}.', - 'a black and white photo of a {}.', - 'the plushie {}.', - 'a dark photo of a {}.', - 'itap of a {}.', - 'graffiti of the {}.', - 'a toy {}.', - 'itap of my {}.', - 'a photo of a cool {}.', - 'a photo of a small {}.', - 'a tattoo of the {}.', -] - -CONV_CODE_INDICES = [(0, 512), (1024, 1536), (1536, 2048), (2560, 3072), (3072, 3584), (4096, 4608), (4608, 5120), (5632, 6144), (6144, 6656), (7168, 7680), (7680, 7936), (8192, 8448), (8448, 8576), (8704, 8832), (8832, 8896), (8960, 9024), (9024, 9056)] -FFHQ_CODE_INDICES = [(0, 512), (512, 1024), (1024, 1536), (1536, 2048), (2560, 3072), (3072, 3584), (4096, 4608), (4608, 5120), (5632, 6144), (6144, 6656), (7168, 7680), (7680, 7936), (8192, 8448), (8448, 8576), (8704, 8832), (8832, 8896), (8960, 9024), (9024, 9056)] + \ - [(2048, 2560), (3584, 4096), (5120, 5632), (6656, 7168), (7936, 8192), (8576, 8704), (8896, 8960), (9056, 9088)] - -def zeroshot_classifier(model, classnames, templates, device): - - with torch.no_grad(): - zeroshot_weights = [] - for classname in tqdm(classnames): - texts = [template.format(classname) for template in templates] # format with class - texts = clip.tokenize(texts).to(device) # tokenize - class_embeddings = model.encode_text(texts) # embed with text encoder - class_embeddings /= class_embeddings.norm(dim=-1, keepdim=True) - class_embedding = class_embeddings.mean(dim=0) - class_embedding /= class_embedding.norm() - zeroshot_weights.append(class_embedding) - zeroshot_weights = torch.stack(zeroshot_weights, dim=1).to(device) - return zeroshot_weights - -def expand_to_full_dim(partial_tensor): - full_dim_tensor = torch.zeros(size=(1, 9088)) - - start_idx = 0 - for conv_start, conv_end in CONV_CODE_INDICES: - length = conv_end - conv_start - full_dim_tensor[:, conv_start:conv_end] = partial_tensor[start_idx:start_idx + length] - start_idx += length - - return full_dim_tensor - -def get_direction(neutral_class, target_class, beta, di, clip_model=None): - - device = "cuda" if torch.cuda.is_available() else "cpu" - - if clip_model is None: - clip_model, _ = clip.load("ViT-B/32", device=device) - - class_names = [neutral_class, target_class] - class_weights = zeroshot_classifier(clip_model, class_names, imagenet_templates, device) - - dt = class_weights[:, 1] - class_weights[:, 0] - dt = dt / dt.norm() - - dt = dt.float() - di = di.float() - - relevance = di @ dt - mask = relevance.abs() > beta - direction = relevance * mask - direction_max = direction.abs().max() - if direction_max > 0: - direction = direction / direction_max - else: - raise ValueError(f'Beta value {beta} is too high for mapping from {neutral_class} to {target_class},' - f' try setting it to a lower value') - return direction - -def style_tensor_to_style_dict(style_tensor, refernce_generator): - style_layers = refernce_generator.modulation_layers - - style_dict = {} - for layer_idx, layer in enumerate(style_layers): - style_dict[layer] = style_tensor[:, FFHQ_CODE_INDICES[layer_idx][0]:FFHQ_CODE_INDICES[layer_idx][1]] - - return style_dict - -def style_dict_to_style_tensor(style_dict, reference_generator): - style_layers = reference_generator.modulation_layers - - style_tensor = torch.zeros(size=(1, 9088)) - for layer in style_dict: - layer_idx = style_layers.index(layer) - style_tensor[:, 
FFHQ_CODE_INDICES[layer_idx][0]:FFHQ_CODE_INDICES[layer_idx][1]] = style_dict[layer] - - return style_tensor - -def project_code_with_styleclip(source_latent, source_class, target_class, alpha, beta, reference_generator, di, clip_model=None): - edit_direction = get_direction(source_class, target_class, beta, di, clip_model) - - edit_full_dim = expand_to_full_dim(edit_direction) - - source_s = style_dict_to_style_tensor(source_latent, reference_generator) - - return source_s + alpha * edit_full_dim \ No newline at end of file diff --git a/spaces/Detomo/ai-comic-generation/src/components/icons/full-screen.tsx b/spaces/Detomo/ai-comic-generation/src/components/icons/full-screen.tsx deleted file mode 100644 index 34ec93bbab4b8359868737dbab9c6f7f6d594e03..0000000000000000000000000000000000000000 --- a/spaces/Detomo/ai-comic-generation/src/components/icons/full-screen.tsx +++ /dev/null @@ -1,16 +0,0 @@ -export function FullScreenIcon() { - return ( - - - - - - - - - - - - - ) -} \ No newline at end of file diff --git a/spaces/Dhrushreddy/profile1/README.md b/spaces/Dhrushreddy/profile1/README.md deleted file mode 100644 index 4d6a8835a84f11a82edf37df2d653b976224d5a0..0000000000000000000000000000000000000000 --- a/spaces/Dhrushreddy/profile1/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Profile1 -emoji: 📊 -colorFrom: pink -colorTo: indigo -sdk: streamlit -sdk_version: 1.21.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/DiamondYin/Voice-ChatGPT-Streamlit-12/app.py b/spaces/DiamondYin/Voice-ChatGPT-Streamlit-12/app.py deleted file mode 100644 index c14ec429648632e650cb293f45324b272bd752a7..0000000000000000000000000000000000000000 --- a/spaces/DiamondYin/Voice-ChatGPT-Streamlit-12/app.py +++ /dev/null @@ -1,293 +0,0 @@ -import streamlit as st -import openai -import os -import base64 -import glob -import json -import mistune -import pytz -import math -import requests -import time - -from datetime import datetime -from openai import ChatCompletion -from xml.etree import ElementTree as ET -from bs4 import BeautifulSoup -from collections import deque -from audio_recorder_streamlit import audio_recorder - -def generate_filename(prompt, file_type): - central = pytz.timezone('US/Central') - safe_date_time = datetime.now(central).strftime("%m%d_%I%M") - safe_prompt = "".join(x for x in prompt if x.isalnum())[:45] - return f"{safe_date_time}_{safe_prompt}.{file_type}" - -def transcribe_audio(openai_key, file_path, model): - OPENAI_API_URL = "https://api.openai.com/v1/audio/transcriptions" - headers = { - "Authorization": f"Bearer {openai_key}", - } - with open(file_path, 'rb') as f: - data = {'file': f} - response = requests.post(OPENAI_API_URL, headers=headers, files=data, data={'model': model}) - if response.status_code == 200: - st.write(response.json()) - - response2 = chat_with_model(response.json().get('text'), '') # ************************************* - st.write('Responses:') - #st.write(response) - st.write(response2) - return response.json().get('text') - else: - st.write(response.json()) - st.error("Error in API call.") - return None - -def save_and_play_audio(audio_recorder): - audio_bytes = audio_recorder() - if audio_bytes: - filename = generate_filename("Recording", "wav") - with open(filename, 'wb') as f: - f.write(audio_bytes) - st.audio(audio_bytes, format="audio/wav") - return filename - return None - -def create_file(filename, prompt, response): - if 
filename.endswith(".txt"): - with open(filename, 'w') as file: - file.write(f"{prompt}\n{response}") - elif filename.endswith(".htm"): - with open(filename, 'w') as file: - file.write(f"{prompt} {response}") - elif filename.endswith(".md"): - with open(filename, 'w') as file: - file.write(f"{prompt}\n\n{response}") - -def truncate_document(document, length): - return document[:length] -def divide_document(document, max_length): - return [document[i:i+max_length] for i in range(0, len(document), max_length)] - -def get_table_download_link(file_path): - with open(file_path, 'r') as file: - data = file.read() - b64 = base64.b64encode(data.encode()).decode() - file_name = os.path.basename(file_path) - ext = os.path.splitext(file_name)[1] # get the file extension - if ext == '.txt': - mime_type = 'text/plain' - elif ext == '.py': - mime_type = 'text/plain' - elif ext == '.xlsx': - mime_type = 'text/plain' - elif ext == '.csv': - mime_type = 'text/plain' - elif ext == '.htm': - mime_type = 'text/html' - elif ext == '.md': - mime_type = 'text/markdown' - else: - mime_type = 'application/octet-stream' # general binary data type - href = f'{file_name}' - return href - -def CompressXML(xml_text): - root = ET.fromstring(xml_text) - for elem in list(root.iter()): - if isinstance(elem.tag, str) and 'Comment' in elem.tag: - elem.parent.remove(elem) - return ET.tostring(root, encoding='unicode', method="xml") - -def read_file_content(file,max_length): - if file.type == "application/json": - content = json.load(file) - return str(content) - elif file.type == "text/html" or file.type == "text/htm": - content = BeautifulSoup(file, "html.parser") - return content.text - elif file.type == "application/xml" or file.type == "text/xml": - tree = ET.parse(file) - root = tree.getroot() - xml = CompressXML(ET.tostring(root, encoding='unicode')) - return xml - elif file.type == "text/markdown" or file.type == "text/md": - md = mistune.create_markdown() - content = md(file.read().decode()) - return content - elif file.type == "text/plain": - return file.getvalue().decode() - else: - return "" - -def chat_with_model(prompt, document_section, model_choice='gpt-3.5-turbo'): - model = model_choice - conversation = [{'role': 'system', 'content': 'You are a helpful assistant.'}] - conversation.append({'role': 'user', 'content': prompt}) - if len(document_section)>0: - conversation.append({'role': 'assistant', 'content': document_section}) - - # iterate through the stream of events - start_time = time.time() - - - report = [] - res_box = st.empty() - - collected_chunks = [] - collected_messages = [] - - for chunk in openai.ChatCompletion.create( - model='gpt-3.5-turbo', - messages=conversation, - temperature=0.5, - stream=True - ): - - collected_chunks.append(chunk) # save the event response - chunk_message = chunk['choices'][0]['delta'] # extract the message - collected_messages.append(chunk_message) # save the message - - content=chunk["choices"][0].get("delta",{}).get("content") - - try: - report.append(content) - if len(content) > 0: - result = "".join(report).strip() - #result = result.replace("\n", "") - res_box.markdown(f'*{result}*') - except: - st.write('.') - - full_reply_content = ''.join([m.get('content', '') for m in collected_messages]) - #st.write(f"Full conversation received: {full_reply_content}") - st.write("Elapsed time:") - st.write(time.time() - start_time) - return full_reply_content - -def chat_with_file_contents(prompt, file_content, model_choice='gpt-3.5-turbo'): - conversation = [{'role': 'system', 
'content': 'You are a helpful assistant.'}] - conversation.append({'role': 'user', 'content': prompt}) - if len(file_content)>0: - conversation.append({'role': 'assistant', 'content': file_content}) - response = openai.ChatCompletion.create(model=model_choice, messages=conversation) - return response['choices'][0]['message']['content'] - - -def main(): - # Sidebar and global - openai.api_key = os.getenv('OPENAI_KEY') - st.set_page_config(page_title="GPT Streamlit Document Reasoner",layout="wide") - menu = ["htm", "txt", "xlsx", "csv", "md", "py"] #619 - choice = st.sidebar.selectbox("Output File Type:", menu) - model_choice = st.sidebar.radio("Select Model:", ('gpt-3.5-turbo', 'gpt-3.5-turbo-0301')) - - # Audio, transcribe, GPT: - filename = save_and_play_audio(audio_recorder) - if filename is not None: - transcription = transcribe_audio(openai.api_key, filename, "whisper-1") - st.write(transcription) - gptOutput = chat_with_model(transcription, '', model_choice) # ************************************* - filename = generate_filename(transcription, choice) - create_file(filename, transcription, gptOutput) - st.sidebar.markdown(get_table_download_link(filename), unsafe_allow_html=True) - - - user_prompt = st.text_area("Enter prompts, instructions & questions:", '', height=100) - - collength, colupload = st.columns([2,3]) # adjust the ratio as needed - with collength: - #max_length = 12000 - optimal for gpt35 turbo. 2x=24000 for gpt4. 8x=96000 for gpt4-32k. - max_length = st.slider("File section length for large files", min_value=1000, max_value=128000, value=12000, step=1000) - with colupload: - uploaded_file = st.file_uploader("Add a file for context:", type=["xml", "json", "xlsx","csv","html", "htm", "md", "txt"]) - - document_sections = deque() - document_responses = {} - - if uploaded_file is not None: - file_content = read_file_content(uploaded_file, max_length) - document_sections.extend(divide_document(file_content, max_length)) - - if len(document_sections) > 0: - - if st.button("👁️ View Upload"): - st.markdown("**Sections of the uploaded file:**") - for i, section in enumerate(list(document_sections)): - st.markdown(f"**Section {i+1}**\n{section}") - - st.markdown("**Chat with the model:**") - for i, section in enumerate(list(document_sections)): - if i in document_responses: - st.markdown(f"**Section {i+1}**\n{document_responses[i]}") - else: - if st.button(f"Chat about Section {i+1}"): - st.write('Reasoning with your inputs...') - response = chat_with_model(user_prompt, section, model_choice) # ************************************* - st.write('Response:') - st.write(response) - document_responses[i] = response - filename = generate_filename(f"{user_prompt}_section_{i+1}", choice) - create_file(filename, user_prompt, response) - st.sidebar.markdown(get_table_download_link(filename), unsafe_allow_html=True) - - if st.button('💬 Chat'): - st.write('Reasoning with your inputs...') - response = chat_with_model(user_prompt, ''.join(list(document_sections,)), model_choice) # ************************************* - st.write('Response:') - st.write(response) - - filename = generate_filename(user_prompt, choice) - create_file(filename, user_prompt, response) - st.sidebar.markdown(get_table_download_link(filename), unsafe_allow_html=True) - - all_files = glob.glob("*.*") - all_files = [file for file in all_files if len(os.path.splitext(file)[0]) >= 20] # exclude files with short names - all_files.sort(key=lambda x: (os.path.splitext(x)[1], x), reverse=True) # sort by file type and file name 
in descending order - - # sidebar of files - file_contents='' - next_action='' - for file in all_files: - col1, col2, col3, col4, col5 = st.sidebar.columns([1,6,1,1,1]) # adjust the ratio as needed - with col1: - if st.button("🌐", key="md_"+file): # md emoji button - with open(file, 'r') as f: - file_contents = f.read() - next_action='md' - with col2: - st.markdown(get_table_download_link(file), unsafe_allow_html=True) - with col3: - if st.button("📂", key="open_"+file): # open emoji button - with open(file, 'r') as f: - file_contents = f.read() - next_action='open' - with col4: - if st.button("🔍", key="read_"+file): # search emoji button - with open(file, 'r') as f: - file_contents = f.read() - next_action='search' - with col5: - if st.button("🗑", key="delete_"+file): - os.remove(file) - st.experimental_rerun() - - if len(file_contents) > 0: - if next_action=='open': - file_content_area = st.text_area("File Contents:", file_contents, height=500) - if next_action=='md': - st.markdown(file_contents) - if next_action=='search': - file_content_area = st.text_area("File Contents:", file_contents, height=500) - st.write('Reasoning with your inputs...') - #response = chat_with_file_contents(user_prompt, file_contents) - response = chat_with_model(user_prompt, file_contents, model_choice) - st.write('Response:') - st.write(response) - filename = generate_filename(file_content_area, choice) - create_file(filename, file_content_area, response) - st.sidebar.markdown(get_table_download_link(filename), unsafe_allow_html=True) - -if __name__ == "__main__": - main() \ No newline at end of file diff --git a/spaces/EPFL-VILAB/MultiMAE/mask2former/data/datasets/register_ade20k_instance.py b/spaces/EPFL-VILAB/MultiMAE/mask2former/data/datasets/register_ade20k_instance.py deleted file mode 100644 index 1ded7095cde756dfa1d94c25b2f7d1d2e5da6313..0000000000000000000000000000000000000000 --- a/spaces/EPFL-VILAB/MultiMAE/mask2former/data/datasets/register_ade20k_instance.py +++ /dev/null @@ -1,53 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
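-# Registers the ADE20K instance-segmentation splits with Detectron2 in COCO format:
-# each split maps an image root plus a COCO-style JSON onto DatasetCatalog/
-# MetadataCatalog entries, keeping only the 100 "thing" categories listed below.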
-import json -import logging -import numpy as np -import os -from PIL import Image - -from detectron2.data import DatasetCatalog, MetadataCatalog -from detectron2.data.datasets.coco import load_coco_json, register_coco_instances -from detectron2.utils.file_io import PathManager - -ADE_CATEGORIES = [{'id': 7, 'name': 'bed'}, {'id': 8, 'name': 'windowpane'}, {'id': 10, 'name': 'cabinet'}, {'id': 12, 'name': 'person'}, {'id': 14, 'name': 'door'}, {'id': 15, 'name': 'table'}, {'id': 18, 'name': 'curtain'}, {'id': 19, 'name': 'chair'}, {'id': 20, 'name': 'car'}, {'id': 22, 'name': 'painting'}, {'id': 23, 'name': 'sofa'}, {'id': 24, 'name': 'shelf'}, {'id': 27, 'name': 'mirror'}, {'id': 30, 'name': 'armchair'}, {'id': 31, 'name': 'seat'}, {'id': 32, 'name': 'fence'}, {'id': 33, 'name': 'desk'}, {'id': 35, 'name': 'wardrobe'}, {'id': 36, 'name': 'lamp'}, {'id': 37, 'name': 'bathtub'}, {'id': 38, 'name': 'railing'}, {'id': 39, 'name': 'cushion'}, {'id': 41, 'name': 'box'}, {'id': 42, 'name': 'column'}, {'id': 43, 'name': 'signboard'}, {'id': 44, 'name': 'chest of drawers'}, {'id': 45, 'name': 'counter'}, {'id': 47, 'name': 'sink'}, {'id': 49, 'name': 'fireplace'}, {'id': 50, 'name': 'refrigerator'}, {'id': 53, 'name': 'stairs'}, {'id': 55, 'name': 'case'}, {'id': 56, 'name': 'pool table'}, {'id': 57, 'name': 'pillow'}, {'id': 58, 'name': 'screen door'}, {'id': 62, 'name': 'bookcase'}, {'id': 64, 'name': 'coffee table'}, {'id': 65, 'name': 'toilet'}, {'id': 66, 'name': 'flower'}, {'id': 67, 'name': 'book'}, {'id': 69, 'name': 'bench'}, {'id': 70, 'name': 'countertop'}, {'id': 71, 'name': 'stove'}, {'id': 72, 'name': 'palm'}, {'id': 73, 'name': 'kitchen island'}, {'id': 74, 'name': 'computer'}, {'id': 75, 'name': 'swivel chair'}, {'id': 76, 'name': 'boat'}, {'id': 78, 'name': 'arcade machine'}, {'id': 80, 'name': 'bus'}, {'id': 81, 'name': 'towel'}, {'id': 82, 'name': 'light'}, {'id': 83, 'name': 'truck'}, {'id': 85, 'name': 'chandelier'}, {'id': 86, 'name': 'awning'}, {'id': 87, 'name': 'streetlight'}, {'id': 88, 'name': 'booth'}, {'id': 89, 'name': 'television receiver'}, {'id': 90, 'name': 'airplane'}, {'id': 92, 'name': 'apparel'}, {'id': 93, 'name': 'pole'}, {'id': 95, 'name': 'bannister'}, {'id': 97, 'name': 'ottoman'}, {'id': 98, 'name': 'bottle'}, {'id': 102, 'name': 'van'}, {'id': 103, 'name': 'ship'}, {'id': 104, 'name': 'fountain'}, {'id': 107, 'name': 'washer'}, {'id': 108, 'name': 'plaything'}, {'id': 110, 'name': 'stool'}, {'id': 111, 'name': 'barrel'}, {'id': 112, 'name': 'basket'}, {'id': 115, 'name': 'bag'}, {'id': 116, 'name': 'minibike'}, {'id': 118, 'name': 'oven'}, {'id': 119, 'name': 'ball'}, {'id': 120, 'name': 'food'}, {'id': 121, 'name': 'step'}, {'id': 123, 'name': 'trade name'}, {'id': 124, 'name': 'microwave'}, {'id': 125, 'name': 'pot'}, {'id': 126, 'name': 'animal'}, {'id': 127, 'name': 'bicycle'}, {'id': 129, 'name': 'dishwasher'}, {'id': 130, 'name': 'screen'}, {'id': 132, 'name': 'sculpture'}, {'id': 133, 'name': 'hood'}, {'id': 134, 'name': 'sconce'}, {'id': 135, 'name': 'vase'}, {'id': 136, 'name': 'traffic light'}, {'id': 137, 'name': 'tray'}, {'id': 138, 'name': 'ashcan'}, {'id': 139, 'name': 'fan'}, {'id': 142, 'name': 'plate'}, {'id': 143, 'name': 'monitor'}, {'id': 144, 'name': 'bulletin board'}, {'id': 146, 'name': 'radiator'}, {'id': 147, 'name': 'glass'}, {'id': 148, 'name': 'clock'}, {'id': 149, 'name': 'flag'}] - - -_PREDEFINED_SPLITS = { - # point annotations without masks - "ade20k_instance_train": ( - "ADEChallengeData2016/images/training", - 
"ADEChallengeData2016/ade20k_instance_train.json", - ), - "ade20k_instance_val": ( - "ADEChallengeData2016/images/validation", - "ADEChallengeData2016/ade20k_instance_val.json", - ), -} - - -def _get_ade_instances_meta(): - thing_ids = [k["id"] for k in ADE_CATEGORIES] - assert len(thing_ids) == 100, len(thing_ids) - # Mapping from the incontiguous ADE category id to an id in [0, 99] - thing_dataset_id_to_contiguous_id = {k: i for i, k in enumerate(thing_ids)} - thing_classes = [k["name"] for k in ADE_CATEGORIES] - ret = { - "thing_dataset_id_to_contiguous_id": thing_dataset_id_to_contiguous_id, - "thing_classes": thing_classes, - } - return ret - - -def register_all_ade20k_instance(root): - for key, (image_root, json_file) in _PREDEFINED_SPLITS.items(): - # Assume pre-defined datasets live in `./datasets`. - register_coco_instances( - key, - _get_ade_instances_meta(), - os.path.join(root, json_file) if "://" not in json_file else json_file, - os.path.join(root, image_root), - ) - - -_root = os.getenv("DETECTRON2_DATASETS", "datasets") -register_all_ade20k_instance(_root) diff --git a/spaces/Egrt/GCycleGAN/utils/callbacks.py b/spaces/Egrt/GCycleGAN/utils/callbacks.py deleted file mode 100644 index d70115ead91c64f2f7aaa3b559cb0351642ee65d..0000000000000000000000000000000000000000 --- a/spaces/Egrt/GCycleGAN/utils/callbacks.py +++ /dev/null @@ -1,65 +0,0 @@ -import os - -import torch -import matplotlib -matplotlib.use('Agg') -import scipy.signal -from matplotlib import pyplot as plt -from torch.utils.tensorboard import SummaryWriter - - -class LossHistory(): - def __init__(self, log_dir, model, input_shape): - self.log_dir = log_dir - - os.makedirs(self.log_dir) - self.writer = SummaryWriter(self.log_dir) - try: - for m in model: - dummy_input = torch.randn(2, 3, input_shape[0], input_shape[1]) - self.writer.add_graph(m, dummy_input) - except: - pass - - def append_loss(self, epoch, **kwargs): - if not os.path.exists(self.log_dir): - os.makedirs(self.log_dir) - - for key, value in kwargs.items(): - if not hasattr(self, key): - setattr(self, key, []) - #---------------------------------# - # 为列表添加数值 - #---------------------------------# - getattr(self, key).append(value) - - #---------------------------------# - # 写入txt - #---------------------------------# - with open(os.path.join(self.log_dir, key + ".txt"), 'a') as f: - f.write(str(value)) - f.write("\n") - - #---------------------------------# - # 写入tensorboard - #---------------------------------# - self.writer.add_scalar(key, value, epoch) - - self.loss_plot(**kwargs) - - def loss_plot(self, **kwargs): - plt.figure() - - for key, value in kwargs.items(): - losses = getattr(self, key) - plt.plot(range(len(losses)), losses, linewidth = 2, label = key) - - plt.grid(True) - plt.xlabel('Epoch') - plt.ylabel('Loss') - plt.legend(loc="upper right") - - plt.savefig(os.path.join(self.log_dir, "epoch_loss.png")) - - plt.cla() - plt.close("all") \ No newline at end of file diff --git a/spaces/ElainaFanBoy/IRONY-Real-ESRGAN/realesrgan/data/realesrgan_dataset.py b/spaces/ElainaFanBoy/IRONY-Real-ESRGAN/realesrgan/data/realesrgan_dataset.py deleted file mode 100644 index 4cf2d9e6583a6789b771679734ce55bb8a22e628..0000000000000000000000000000000000000000 --- a/spaces/ElainaFanBoy/IRONY-Real-ESRGAN/realesrgan/data/realesrgan_dataset.py +++ /dev/null @@ -1,192 +0,0 @@ -import cv2 -import math -import numpy as np -import os -import os.path as osp -import random -import time -import torch -from basicsr.data.degradations import circular_lowpass_kernel, 
random_mixed_kernels -from basicsr.data.transforms import augment -from basicsr.utils import FileClient, get_root_logger, imfrombytes, img2tensor -from basicsr.utils.registry import DATASET_REGISTRY -from torch.utils import data as data - - -@DATASET_REGISTRY.register() -class RealESRGANDataset(data.Dataset): - """Dataset used for Real-ESRGAN model: - Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data. - - It loads gt (Ground-Truth) images, and augments them. - It also generates blur kernels and sinc kernels for generating low-quality images. - Note that the low-quality images are processed in tensors on GPUS for faster processing. - - Args: - opt (dict): Config for train datasets. It contains the following keys: - dataroot_gt (str): Data root path for gt. - meta_info (str): Path for meta information file. - io_backend (dict): IO backend type and other kwarg. - use_hflip (bool): Use horizontal flips. - use_rot (bool): Use rotation (use vertical flip and transposing h and w for implementation). - Please see more options in the codes. - """ - - def __init__(self, opt): - super(RealESRGANDataset, self).__init__() - self.opt = opt - self.file_client = None - self.io_backend_opt = opt['io_backend'] - self.gt_folder = opt['dataroot_gt'] - - # file client (lmdb io backend) - if self.io_backend_opt['type'] == 'lmdb': - self.io_backend_opt['db_paths'] = [self.gt_folder] - self.io_backend_opt['client_keys'] = ['gt'] - if not self.gt_folder.endswith('.lmdb'): - raise ValueError(f"'dataroot_gt' should end with '.lmdb', but received {self.gt_folder}") - with open(osp.join(self.gt_folder, 'meta_info.txt')) as fin: - self.paths = [line.split('.')[0] for line in fin] - else: - # disk backend with meta_info - # Each line in the meta_info describes the relative path to an image - with open(self.opt['meta_info']) as fin: - paths = [line.strip().split(' ')[0] for line in fin] - self.paths = [os.path.join(self.gt_folder, v) for v in paths] - - # blur settings for the first degradation - self.blur_kernel_size = opt['blur_kernel_size'] - self.kernel_list = opt['kernel_list'] - self.kernel_prob = opt['kernel_prob'] # a list for each kernel probability - self.blur_sigma = opt['blur_sigma'] - self.betag_range = opt['betag_range'] # betag used in generalized Gaussian blur kernels - self.betap_range = opt['betap_range'] # betap used in plateau blur kernels - self.sinc_prob = opt['sinc_prob'] # the probability for sinc filters - - # blur settings for the second degradation - self.blur_kernel_size2 = opt['blur_kernel_size2'] - self.kernel_list2 = opt['kernel_list2'] - self.kernel_prob2 = opt['kernel_prob2'] - self.blur_sigma2 = opt['blur_sigma2'] - self.betag_range2 = opt['betag_range2'] - self.betap_range2 = opt['betap_range2'] - self.sinc_prob2 = opt['sinc_prob2'] - - # a final sinc filter - self.final_sinc_prob = opt['final_sinc_prob'] - - self.kernel_range = [2 * v + 1 for v in range(3, 11)] # kernel size ranges from 7 to 21 - # TODO: kernel range is now hard-coded, should be in the configure file - self.pulse_tensor = torch.zeros(21, 21).float() # convolving with pulse tensor brings no blurry effect - self.pulse_tensor[10, 10] = 1 - - def __getitem__(self, index): - if self.file_client is None: - self.file_client = FileClient(self.io_backend_opt.pop('type'), **self.io_backend_opt) - - # -------------------------------- Load gt images -------------------------------- # - # Shape: (h, w, c); channel order: BGR; image range: [0, 1], float32. 
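-        # Reads go through a small retry loop: on an IO error the loader logs a
-        # warning, re-draws a random index, and tries again, so one bad or slow
-        # file does not kill a long training run.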
-        gt_path = self.paths[index]
-        # avoid errors caused by high latency in reading files
-        retry = 3
-        while retry > 0:
-            try:
-                img_bytes = self.file_client.get(gt_path, 'gt')
-            except (IOError, OSError) as e:
-                logger = get_root_logger()
-                logger.warning(f'File client error: {e}, remaining retry times: {retry - 1}')
-                # change another file to read; randint is inclusive on both ends,
-                # so the upper bound must be len - 1 to stay in range
-                index = random.randint(0, self.__len__() - 1)
-                gt_path = self.paths[index]
-                time.sleep(1)  # sleep 1s for occasional server congestion
-            else:
-                break
-            finally:
-                retry -= 1
-        img_gt = imfrombytes(img_bytes, float32=True)
-
-        # -------------------- Do augmentation for training: flip, rotation -------------------- #
-        img_gt = augment(img_gt, self.opt['use_hflip'], self.opt['use_rot'])
-
-        # crop or pad to 400
-        # TODO: 400 is hard-coded. You may change it accordingly
-        h, w = img_gt.shape[0:2]
-        crop_pad_size = 400
-        # pad
-        if h < crop_pad_size or w < crop_pad_size:
-            pad_h = max(0, crop_pad_size - h)
-            pad_w = max(0, crop_pad_size - w)
-            img_gt = cv2.copyMakeBorder(img_gt, 0, pad_h, 0, pad_w, cv2.BORDER_REFLECT_101)
-        # crop
-        if img_gt.shape[0] > crop_pad_size or img_gt.shape[1] > crop_pad_size:
-            h, w = img_gt.shape[0:2]
-            # randomly choose top and left coordinates
-            top = random.randint(0, h - crop_pad_size)
-            left = random.randint(0, w - crop_pad_size)
-            img_gt = img_gt[top:top + crop_pad_size, left:left + crop_pad_size, ...]
-
-        # ------------------------ Generate kernels (used in the first degradation) ------------------------ #
-        kernel_size = random.choice(self.kernel_range)
-        if np.random.uniform() < self.opt['sinc_prob']:
-            # this sinc filter setting is for kernels ranging from [7, 21]
-            if kernel_size < 13:
-                omega_c = np.random.uniform(np.pi / 3, np.pi)
-            else:
-                omega_c = np.random.uniform(np.pi / 5, np.pi)
-            kernel = circular_lowpass_kernel(omega_c, kernel_size, pad_to=False)
-        else:
-            kernel = random_mixed_kernels(
-                self.kernel_list,
-                self.kernel_prob,
-                kernel_size,
-                self.blur_sigma,
-                self.blur_sigma, [-math.pi, math.pi],
-                self.betag_range,
-                self.betap_range,
-                noise_range=None)
-        # pad kernel
-        pad_size = (21 - kernel_size) // 2
-        kernel = np.pad(kernel, ((pad_size, pad_size), (pad_size, pad_size)))
-
-        # ------------------------ Generate kernels (used in the second degradation) ------------------------ #
-        kernel_size = random.choice(self.kernel_range)
-        if np.random.uniform() < self.opt['sinc_prob2']:
-            if kernel_size < 13:
-                omega_c = np.random.uniform(np.pi / 3, np.pi)
-            else:
-                omega_c = np.random.uniform(np.pi / 5, np.pi)
-            kernel2 = circular_lowpass_kernel(omega_c, kernel_size, pad_to=False)
-        else:
-            kernel2 = random_mixed_kernels(
-                self.kernel_list2,
-                self.kernel_prob2,
-                kernel_size,
-                self.blur_sigma2,
-                self.blur_sigma2, [-math.pi, math.pi],
-                self.betag_range2,
-                self.betap_range2,
-                noise_range=None)
-
-        # pad kernel
-        pad_size = (21 - kernel_size) // 2
-        kernel2 = np.pad(kernel2, ((pad_size, pad_size), (pad_size, pad_size)))
-
-        # ------------------------------------- the final sinc kernel ------------------------------------- #
-        if np.random.uniform() < self.opt['final_sinc_prob']:
-            kernel_size = random.choice(self.kernel_range)
-            omega_c = np.random.uniform(np.pi / 3, np.pi)
-            sinc_kernel = circular_lowpass_kernel(omega_c, kernel_size, pad_to=21)
-            sinc_kernel = torch.FloatTensor(sinc_kernel)
-        else:
-            sinc_kernel = self.pulse_tensor
-
-        # BGR to RGB, HWC to CHW, numpy to tensor
-        img_gt = img2tensor([img_gt], bgr2rgb=True, float32=True)[0]
-        kernel = torch.FloatTensor(kernel)
-        kernel2 = 
torch.FloatTensor(kernel2) - - return_d = {'gt': img_gt, 'kernel1': kernel, 'kernel2': kernel2, 'sinc_kernel': sinc_kernel, 'gt_path': gt_path} - return return_d - - def __len__(self): - return len(self.paths) diff --git a/spaces/ElainaFanBoy/MusicGen/CHANGELOG.md b/spaces/ElainaFanBoy/MusicGen/CHANGELOG.md deleted file mode 100644 index 6aaad6b5ee31e4685ead54c1a46d7f57b225912d..0000000000000000000000000000000000000000 --- a/spaces/ElainaFanBoy/MusicGen/CHANGELOG.md +++ /dev/null @@ -1,20 +0,0 @@ -# Changelog - -All notable changes to this project will be documented in this file. - -The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/). - -## [0.0.2a] - TBD - -Improved demo, fixed top p (thanks @jnordberg). - -Compressor tanh on output to avoid clipping with some style (especially piano). -Now repeating the conditioning periodically if it is too short. - -More options when launching Gradio app locally (thanks @ashleykleynhans). - -Testing out PyTorch 2.0 memory efficient attention. - -## [0.0.1] - 2023-06-09 - -Initial release, with model evaluation only. diff --git a/spaces/EronSamez/RVC_HFmeu/infer/modules/train/extract_feature_print.py b/spaces/EronSamez/RVC_HFmeu/infer/modules/train/extract_feature_print.py deleted file mode 100644 index f771dd9b8ba92262e6844e7b5781de43c342833a..0000000000000000000000000000000000000000 --- a/spaces/EronSamez/RVC_HFmeu/infer/modules/train/extract_feature_print.py +++ /dev/null @@ -1,137 +0,0 @@ -import os -import sys -import traceback - -os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1" -os.environ["PYTORCH_MPS_HIGH_WATERMARK_RATIO"] = "0.0" - -device = sys.argv[1] -n_part = int(sys.argv[2]) -i_part = int(sys.argv[3]) -if len(sys.argv) == 6: - exp_dir = sys.argv[4] - version = sys.argv[5] -else: - i_gpu = sys.argv[4] - exp_dir = sys.argv[5] - os.environ["CUDA_VISIBLE_DEVICES"] = str(i_gpu) - version = sys.argv[6] -import fairseq -import numpy as np -import soundfile as sf -import torch -import torch.nn.functional as F - -if "privateuseone" not in device: - device = "cpu" - if torch.cuda.is_available(): - device = "cuda" - elif torch.backends.mps.is_available(): - device = "mps" -else: - import torch_directml - - device = torch_directml.device(torch_directml.default_device()) - - def forward_dml(ctx, x, scale): - ctx.scale = scale - res = x.clone().detach() - return res - - fairseq.modules.grad_multiply.GradMultiply.forward = forward_dml - -f = open("%s/extract_f0_feature.log" % exp_dir, "a+") - - -def printt(strr): - print(strr) - f.write("%s\n" % strr) - f.flush() - - -printt(sys.argv) -model_path = "assets/hubert/hubert_base.pt" - -printt(exp_dir) -wavPath = "%s/1_16k_wavs" % exp_dir -outPath = ( - "%s/3_feature256" % exp_dir if version == "v1" else "%s/3_feature768" % exp_dir -) -os.makedirs(outPath, exist_ok=True) - - -# wave must be 16k, hop_size=320 -def readwave(wav_path, normalize=False): - wav, sr = sf.read(wav_path) - assert sr == 16000 - feats = torch.from_numpy(wav).float() - if feats.dim() == 2: # double channels - feats = feats.mean(-1) - assert feats.dim() == 1, feats.dim() - if normalize: - with torch.no_grad(): - feats = F.layer_norm(feats, feats.shape) - feats = feats.view(1, -1) - return feats - - -# HuBERT model -printt("load model(s) from {}".format(model_path)) -# if hubert model is exist -if os.access(model_path, os.F_OK) == False: - printt( - "Error: Extracting is shut down because %s does not exist, you may download it from https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main" - % model_path - ) - 
exit(0)
-models, saved_cfg, task = fairseq.checkpoint_utils.load_model_ensemble_and_task(
-    [model_path],
-    suffix="",
-)
-model = models[0]
-model = model.to(device)
-printt("move model to %s" % device)
-if device not in ["mps", "cpu"]:
-    model = model.half()
-model.eval()
-
-todo = sorted(list(os.listdir(wavPath)))[i_part::n_part]
-n = max(1, len(todo) // 10)  # print at most ten progress lines
-if len(todo) == 0:
-    printt("no-feature-todo")
-else:
-    printt("all-feature-%s" % len(todo))
-    for idx, file in enumerate(todo):
-        try:
-            if file.endswith(".wav"):
-                wav_path = "%s/%s" % (wavPath, file)
-                out_path = "%s/%s" % (outPath, file.replace("wav", "npy"))
-
-                if os.path.exists(out_path):
-                    continue
-
-                feats = readwave(wav_path, normalize=saved_cfg.task.normalize)
-                padding_mask = torch.BoolTensor(feats.shape).fill_(False)
-                inputs = {
-                    "source": feats.half().to(device)
-                    if device not in ["mps", "cpu"]
-                    else feats.to(device),
-                    "padding_mask": padding_mask.to(device),
-                    "output_layer": 9 if version == "v1" else 12,  # layer 9
-                }
-                with torch.no_grad():
-                    logits = model.extract_features(**inputs)
-                    feats = (
-                        model.final_proj(logits[0]) if version == "v1" else logits[0]
-                    )
-
-                feats = feats.squeeze(0).float().cpu().numpy()
-                if np.isnan(feats).sum() == 0:
-                    np.save(out_path, feats, allow_pickle=False)
-                else:
-                    printt("%s-contains nan" % file)
-                if idx % n == 0:
-                    # report current index first, then the total count
-                    printt("now-%s,all-%s,%s,%s" % (idx, len(todo), file, feats.shape))
-        except Exception:
-            printt(traceback.format_exc())
-    printt("all-feature-done")
diff --git a/spaces/EronSamez/RVC_HFmeu/lib/uvr5_pack/lib_v5/model_param_init.py b/spaces/EronSamez/RVC_HFmeu/lib/uvr5_pack/lib_v5/model_param_init.py
deleted file mode 100644
index b995c0bfb1194746187692e2ab1c2a6dbaaaec6c..0000000000000000000000000000000000000000
--- a/spaces/EronSamez/RVC_HFmeu/lib/uvr5_pack/lib_v5/model_param_init.py
+++ /dev/null
@@ -1,69 +0,0 @@
-import json
-import os
-import pathlib
-
-default_param = {}
-default_param["bins"] = 768
-default_param["unstable_bins"] = 9  # training only
-default_param["reduction_bins"] = 762  # training only
-default_param["sr"] = 44100
-default_param["pre_filter_start"] = 757
-default_param["pre_filter_stop"] = 768
-default_param["band"] = {}
-
-
-default_param["band"][1] = {
-    "sr": 11025,
-    "hl": 128,
-    "n_fft": 960,
-    "crop_start": 0,
-    "crop_stop": 245,
-    "lpf_start": 61,  # inference only
-    "res_type": "polyphase",
-}
-
-default_param["band"][2] = {
-    "sr": 44100,
-    "hl": 512,
-    "n_fft": 1536,
-    "crop_start": 24,
-    "crop_stop": 547,
-    "hpf_start": 81,  # inference only
-    "res_type": "sinc_best",
-}
-
-
-def int_keys(d):
-    r = {}
-    for k, v in d:
-        if k.isdigit():
-            k = int(k)
-        r[k] = v
-    return r
-
-
-class ModelParameters(object):
-    def __init__(self, config_path=""):
-        if ".pth" == pathlib.Path(config_path).suffix:
-            import zipfile
-
-            with zipfile.ZipFile(config_path, "r") as zip:
-                self.param = json.loads(
-                    zip.read("param.json"), object_pairs_hook=int_keys
-                )
-        elif ".json" == pathlib.Path(config_path).suffix:
-            with open(config_path, "r") as f:
-                self.param = json.loads(f.read(), object_pairs_hook=int_keys)
-        else:
-            self.param = default_param
-
-        for k in [
-            "mid_side",
-            "mid_side_b",
-            "mid_side_b2",
-            "stereo_w",
-            "stereo_n",
-            "reverse",
-        ]:
-            if k not in self.param:
-                self.param[k] = False
diff --git a/spaces/EsoCode/text-generation-webui/extensions/multimodal/DOCS.md b/spaces/EsoCode/text-generation-webui/extensions/multimodal/DOCS.md
deleted file mode 100644
index 
eaa4365e9a304a14ebbdb1d4d435f3a2a1f7a7d2..0000000000000000000000000000000000000000
--- a/spaces/EsoCode/text-generation-webui/extensions/multimodal/DOCS.md
+++ /dev/null
@@ -1,85 +0,0 @@
-# Technical description of multimodal extension
-
-## Working principle
-Multimodality extension does most of the stuff which is required for any image input:
-
-- adds the UI
-- saves the images as base64 JPEGs to history
-- provides the hooks to the UI
-- if there are images in the prompt, it:
-    - splits the prompt to text and image parts
-    - adds image start/end markers to text parts, then encodes and embeds the text parts
-    - calls the vision pipeline to embed the images
-    - stitches the embeddings together, and returns them to text generation
-- loads the appropriate vision pipeline, selected either from model name, or by specifying --multimodal-pipeline parameter
-
-Now, for the pipelines, they:
-
-- load the required vision models
-- return some consts, for example the number of tokens taken up by an image
-- and most importantly: return the embeddings for the LLM, given a list of images
-
-## Prompts/history
-
-To save images in prompt/history, this extension is using a base64 JPEG, wrapped in a HTML tag, like so:
-```
-<img src="data:image/jpeg;base64,{img_str}">
-```
-where `{img_str}` is the actual image data. This format makes displaying them in the UI essentially free. Do note that this format is required to be exactly the same; the regex used to find the images is: `<img src="data:image/jpeg;base64,([A-Za-z0-9+/=]+)">`.
-
-## LLM input
-To describe the input, let's see it on an example prompt:
-```
-text1<image1>text2<image2>text3
-```
-where `textN` is the N-th text and `<imageN>` is the N-th image, in the HTML format specified above.
-
-**The first step is to split the prompt into image/text parts**, so we get:
-```
-['text1', '<image1>', 'text2', '<image2>', 'text3']
-```
-this is done in the `MultimodalEmbedder._split_prompt(...)` function, which returns a list of `PromptPart`s - dataclasses wrapping the separate parts.
-
-This function also appends the image start/end markers to text, which are provided by the `AbstractMultimodalPipeline.image_start()` / `AbstractMultimodalPipeline.image_end()` functions. If the image start is `<Img>`, and the end is `</Img>`, this function will return:
-```
-['text1', '<Img><image1></Img>', 'text2', '<Img><image2></Img>', 'text3']
-```
-
-**The returned prompt parts are then turned into token embeddings.**
-
-First, they are converted to token IDs; for the text it is done using the standard `modules.text_generation.encode()` function, and for the images the returned token IDs are changed to placeholders. The placeholder is a list of `N` times `placeholder token id`, where `N` is specified using `AbstractMultimodalPipeline.num_image_embeds()`, and the placeholder token IDs using `AbstractMultimodalPipeline.placeholder_token_id()`.
-
-Now, based on the token IDs, the prompt might get truncated, especially if `max_new_tokens` is unreasonably high. Unfortunately, it can't be done simply, just by trimming the prompt to be short enough. That approach will sometimes split the prompt in the middle of an image embedding, which usually breaks the generation. Therefore, in this case, the entire image needs to be removed from input. This is done inside the `MultimodalEmbedder._encode_text(...)` function.
-
-**After the tokenization, the tokens need to get embedded**; the text and images are once again treated separately.
-
-The text parts are turned to embeddings, using `AbstractMultimodalPipeline.embed_tokens(...)` function. 
It uses standard embedding function from the model, but to support many LLMs, the actual function is returned by the pipeline (as it might be different for different LLMs), for LLaMA it is `shared.model.model.embed_tokens(...)`. - -The image parts are turned to embeddings, using `AbstractMultimodalPipeline.embed_images(...)` function. This function is specific for a given pipeline, it takes the images as input, forwards them through vision model/projector, and returns the embeddings. - -**Now, the returned embeddings are stitched together**, using `torch.cat()`, this is creating the final input to the LLM. - -## Pipelines - -All of the pipelines should subclass `AbstractMultimodalPipeline` class. The idea is to allow for new pipelines to be added in the same way as user extensions - git clone into `extensions/multimodal/pipelines`. - -The pipelines are the description of the vision part, containing vision model/multimodal projector. All of the pipelines should have an unique `name()`, which is then selected by user, in `--multimodal-pipeline` CLI argument. For an example, see `pipelines/llava/llava.py`. - -## Pipeline modules - -Pipelines are organized into "pipeline modules" - subdirectories in `pipelines` directory. The pipeline modules should contain a file called `pipelines.py`, that should contain the following fields: -- `available_pipelines: List[str]` - list of pipelines provided by this module, shown as the list of available pipelines to the user -- `def get_pipeline(name: str, params: dict) -> Optional[AbstractMultimodalPipeline]`: - a function to get a concrete pipeline by `name`, if `name` doesn't match any, should return `None`. `params` is the user settings for multimodal extension -- `def get_pipeline_from_model_name(model_name: str, params: dict) -> Optional[AbstractMultimodalPipeline]`: - a function to get a pipeline from `model_name`, should be eager to return `None`, unless the determination can be done clearly (for example: minigpt-4 bases on vicuna - it should never return the pipeline, but llava can, as it has its own specific LLM finetune) - -**NOTE**: A pipeline module should lazy-import the pipelines only when necessary, and it should keep its imports to minimum - -## Pipeline params - -The pipelines will get the extension `params` in the constructor. They should honor the following fields: -- `vision_device` - string, specifying `torch.device` to run the vision model (CLIP/ViT) on -- `vision_bits` - int, number of fp bits to load the vision model(s) in -- `projector_device` - string, specifying `torch.device` to run the projector models (Linear layers, QFormer, etc.) on -- `projector_bits` - int, number of fp bits to load the projector models in - -As a helper, `AbstractMultimodalPipeline` has `_get_device(self, setting_name: str, params: dict)` and `_get_dtype(self, setting_name: str, params: dict)` helper functions, which parse string/int and return `torch.device` / `torch.dtype`. diff --git a/spaces/FelixLuoX/codeformer/CodeFormer/basicsr/utils/logger.py b/spaces/FelixLuoX/codeformer/CodeFormer/basicsr/utils/logger.py deleted file mode 100644 index 9714bf59c30fc82de24c1ee58d9118d0864b3572..0000000000000000000000000000000000000000 --- a/spaces/FelixLuoX/codeformer/CodeFormer/basicsr/utils/logger.py +++ /dev/null @@ -1,169 +0,0 @@ -import datetime -import logging -import time - -from .dist_util import get_dist_info, master_only - -initialized_logger = {} - - -class MessageLogger(): - """Message logger for printing. - Args: - opt (dict): Config. 
It contains the following keys: - name (str): Exp name. - logger (dict): Contains 'print_freq' (str) for logger interval. - train (dict): Contains 'total_iter' (int) for total iters. - use_tb_logger (bool): Use tensorboard logger. - start_iter (int): Start iter. Default: 1. - tb_logger (obj:`tb_logger`): Tensorboard logger. Default: None. - """ - - def __init__(self, opt, start_iter=1, tb_logger=None): - self.exp_name = opt['name'] - self.interval = opt['logger']['print_freq'] - self.start_iter = start_iter - self.max_iters = opt['train']['total_iter'] - self.use_tb_logger = opt['logger']['use_tb_logger'] - self.tb_logger = tb_logger - self.start_time = time.time() - self.logger = get_root_logger() - - @master_only - def __call__(self, log_vars): - """Format logging message. - Args: - log_vars (dict): It contains the following keys: - epoch (int): Epoch number. - iter (int): Current iter. - lrs (list): List for learning rates. - time (float): Iter time. - data_time (float): Data time for each iter. - """ - # epoch, iter, learning rates - epoch = log_vars.pop('epoch') - current_iter = log_vars.pop('iter') - lrs = log_vars.pop('lrs') - - message = (f'[{self.exp_name[:5]}..][epoch:{epoch:3d}, ' f'iter:{current_iter:8,d}, lr:(') - for v in lrs: - message += f'{v:.3e},' - message += ')] ' - - # time and estimated time - if 'time' in log_vars.keys(): - iter_time = log_vars.pop('time') - data_time = log_vars.pop('data_time') - - total_time = time.time() - self.start_time - time_sec_avg = total_time / (current_iter - self.start_iter + 1) - eta_sec = time_sec_avg * (self.max_iters - current_iter - 1) - eta_str = str(datetime.timedelta(seconds=int(eta_sec))) - message += f'[eta: {eta_str}, ' - message += f'time (data): {iter_time:.3f} ({data_time:.3f})] ' - - # other items, especially losses - for k, v in log_vars.items(): - message += f'{k}: {v:.4e} ' - # tensorboard logger - if self.use_tb_logger: - if k.startswith('l_'): - self.tb_logger.add_scalar(f'losses/{k}', v, current_iter) - else: - self.tb_logger.add_scalar(k, v, current_iter) - self.logger.info(message) - - -@master_only -def init_tb_logger(log_dir): - from torch.utils.tensorboard import SummaryWriter - tb_logger = SummaryWriter(log_dir=log_dir) - return tb_logger - - -@master_only -def init_wandb_logger(opt): - """We now only use wandb to sync tensorboard log.""" - import wandb - logger = logging.getLogger('basicsr') - - project = opt['logger']['wandb']['project'] - resume_id = opt['logger']['wandb'].get('resume_id') - if resume_id: - wandb_id = resume_id - resume = 'allow' - logger.warning(f'Resume wandb logger with id={wandb_id}.') - else: - wandb_id = wandb.util.generate_id() - resume = 'never' - - wandb.init(id=wandb_id, resume=resume, name=opt['name'], config=opt, project=project, sync_tensorboard=True) - - logger.info(f'Use wandb logger with id={wandb_id}; project={project}.') - - -def get_root_logger(logger_name='basicsr', log_level=logging.INFO, log_file=None): - """Get the root logger. - The logger will be initialized if it has not been initialized. By default a - StreamHandler will be added. If `log_file` is specified, a FileHandler will - also be added. - Args: - logger_name (str): root logger name. Default: 'basicsr'. - log_file (str | None): The log filename. If specified, a FileHandler - will be added to the root logger. - log_level (int): The root logger level. Note that only the process of - rank 0 is affected, while other processes will set the level to - "Error" and be silent most of the time. 
- Returns: - logging.Logger: The root logger. - """ - logger = logging.getLogger(logger_name) - # if the logger has been initialized, just return it - if logger_name in initialized_logger: - return logger - - format_str = '%(asctime)s %(levelname)s: %(message)s' - stream_handler = logging.StreamHandler() - stream_handler.setFormatter(logging.Formatter(format_str)) - logger.addHandler(stream_handler) - logger.propagate = False - rank, _ = get_dist_info() - if rank != 0: - logger.setLevel('ERROR') - elif log_file is not None: - logger.setLevel(log_level) - # add file handler - # file_handler = logging.FileHandler(log_file, 'w') - file_handler = logging.FileHandler(log_file, 'a') #Shangchen: keep the previous log - file_handler.setFormatter(logging.Formatter(format_str)) - file_handler.setLevel(log_level) - logger.addHandler(file_handler) - initialized_logger[logger_name] = True - return logger - - -def get_env_info(): - """Get environment information. - Currently, only log the software version. - """ - import torch - import torchvision - - from basicsr.version import __version__ - msg = r""" - ____ _ _____ ____ - / __ ) ____ _ _____ (_)_____/ ___/ / __ \ - / __ |/ __ `// ___// // ___/\__ \ / /_/ / - / /_/ // /_/ /(__ )/ // /__ ___/ // _, _/ - /_____/ \__,_//____//_/ \___//____//_/ |_| - ______ __ __ __ __ - / ____/____ ____ ____/ / / / __ __ _____ / /__ / / - / / __ / __ \ / __ \ / __ / / / / / / // ___// //_/ / / - / /_/ // /_/ // /_/ // /_/ / / /___/ /_/ // /__ / /< /_/ - \____/ \____/ \____/ \____/ /_____/\____/ \___//_/|_| (_) - """ - msg += ('\nVersion Information: ' - f'\n\tBasicSR: {__version__}' - f'\n\tPyTorch: {torch.__version__}' - f'\n\tTorchVision: {torchvision.__version__}') - return msg \ No newline at end of file diff --git a/spaces/FridaZuley/RVC_HFKawaii/Applio-RVC-Fork/utils/README.md b/spaces/FridaZuley/RVC_HFKawaii/Applio-RVC-Fork/utils/README.md deleted file mode 100644 index fb45a36b5909585aa964f2033762ee59b55526b0..0000000000000000000000000000000000000000 --- a/spaces/FridaZuley/RVC_HFKawaii/Applio-RVC-Fork/utils/README.md +++ /dev/null @@ -1,6 +0,0 @@ -# External Colab Code -Code used to make Google Colab work correctly -- Repo link: https://github.com/IAHispano/Applio-RVC-Fork/ - -Thanks to https://github.com/kalomaze/externalcolabcode - diff --git a/spaces/GAIR/Factool/factool/knowledge_qa/google_serper.py b/spaces/GAIR/Factool/factool/knowledge_qa/google_serper.py deleted file mode 100644 index 7e7884f8392ab69e6ece7d3a448fb656d33994dd..0000000000000000000000000000000000000000 --- a/spaces/GAIR/Factool/factool/knowledge_qa/google_serper.py +++ /dev/null @@ -1,118 +0,0 @@ -# The following code was adapted from https://github.com/hwchase17/langchain/blob/master/langchain/utilities/google_serper.py - -"""Util that calls Google Search using the Serper.dev API.""" -import pdb -import requests -import asyncio -import aiohttp -import yaml -import os - -from factool.env_config import factool_env_config - -# env -# serper_api_key = factool_env_config.serper_api_key - - -class GoogleSerperAPIWrapper(): - """Wrapper around the Serper.dev Google Search API. - You can create a free API key at https://serper.dev. - To use, you should have the environment variable ``SERPER_API_KEY`` - set with your API key, or pass `serper_api_key` as a named parameter - to the constructor. - Example: - .. 
code-block:: python - from langchain import GoogleSerperAPIWrapper - google_serper = GoogleSerperAPIWrapper() - """ - def __init__(self, snippet_cnt = 10) -> None: - self.k = snippet_cnt - self.gl = "us" - self.hl = "en" - self.serper_api_key = os.environ.get("SERPER_API_KEY", None) - assert self.serper_api_key is not None, "Please set the SERPER_API_KEY environment variable." - - async def _google_serper_search_results(self, session, search_term: str, gl: str, hl: str) -> dict: - headers = { - "X-API-KEY": self.serper_api_key or "", - "Content-Type": "application/json", - } - params = {"q": search_term, "gl": gl, "hl": hl} - async with session.post( - "https://google.serper.dev/search", headers=headers, params=params, raise_for_status=True - ) as response: - return await response.json() - - def _parse_results(self, results): - snippets = [] - - if results.get("answerBox"): - answer_box = results.get("answerBox", {}) - if answer_box.get("answer"): - element = {"content":answer_box.get("answer"),"source":"None"} - return [element] - elif answer_box.get("snippet"): - element = {"content":answer_box.get("snippet").replace("\n", " "),"source":"None"} - return [element] - elif answer_box.get("snippetHighlighted"): - element = {"content":answer_box.get("snippetHighlighted"),"source":"None"} - return [element] - - if results.get("knowledgeGraph"): - kg = results.get("knowledgeGraph", {}) - title = kg.get("title") - entity_type = kg.get("type") - if entity_type: - element = {"content":f"{title}: {entity_type}","source":"None"} - snippets.append(element) - description = kg.get("description") - if description: - element = {"content":description,"source":"None"} - snippets.append(element) - for attribute, value in kg.get("attributes", {}).items(): - element = {"content":f"{attribute}: {value}","source":"None"} - snippets.append(element) - - for result in results["organic"][: self.k]: - if "snippet" in result: - element = {"content":result["snippet"],"source":result["link"]} - snippets.append(element) - for attribute, value in result.get("attributes", {}).items(): - element = {"content":f"{attribute}: {value}","source":result["link"]} - snippets.append(element) - - if len(snippets) == 0: - element = {"content":"No good Google Search Result was found","source":"None"} - return [element] - - # keep only the first k snippets - snippets = snippets[:int(self.k / 2)] - - return snippets - - async def parallel_searches(self, search_queries, gl, hl): - async with aiohttp.ClientSession() as session: - tasks = [self._google_serper_search_results(session, query, gl, hl) for query in search_queries] - search_results = await asyncio.gather(*tasks, return_exceptions=True) - return search_results - - async def run(self, queries): - """Run query through GoogleSearch and parse result.""" - flattened_queries = [] - - for sublist in queries: - if sublist is None: - sublist = ['None', 'None'] - for item in sublist: - flattened_queries.append(item) - - results = await self.parallel_searches(flattened_queries, gl=self.gl, hl=self.hl) - snippets_list = [] - for i in range(len(results)): - snippets_list.append(self._parse_results(results[i])) - snippets_split = [snippets_list[i] + snippets_list[i+1] for i in range(0, len(snippets_list), 2)] - return snippets_split - -if __name__ == "__main__": - search = GoogleSerperAPIWrapper() - print(asyncio.run(search.run("What is the capital of the United States?"))) \ No newline at end of file diff --git a/spaces/GIZ/SDSN-demo/utils/sdg_classifier.py 
b/spaces/GIZ/SDSN-demo/utils/sdg_classifier.py deleted file mode 100644 index 57e633f689b18ec4512730ebb32429c8ea8b7b06..0000000000000000000000000000000000000000 --- a/spaces/GIZ/SDSN-demo/utils/sdg_classifier.py +++ /dev/null @@ -1,177 +0,0 @@ -from haystack.nodes import TransformersDocumentClassifier -from haystack.schema import Document -from typing import List, Tuple -from typing_extensions import Literal -import logging -import pandas as pd -from pandas import DataFrame, Series -from utils.checkconfig import getconfig -from utils.streamlitcheck import check_streamlit -from utils.preprocessing import processingpipeline -try: - import streamlit as st -except ImportError: - logging.info("Streamlit not installed") - -## Labels dictionary ### -_lab_dict = {0: 'no_cat', - 1:'SDG 1 - No poverty', - 2:'SDG 2 - Zero hunger', - 3:'SDG 3 - Good health and well-being', - 4:'SDG 4 - Quality education', - 5:'SDG 5 - Gender equality', - 6:'SDG 6 - Clean water and sanitation', - 7:'SDG 7 - Affordable and clean energy', - 8:'SDG 8 - Decent work and economic growth', - 9:'SDG 9 - Industry, Innovation and Infrastructure', - 10:'SDG 10 - Reduced inequality', - 11:'SDG 11 - Sustainable cities and communities', - 12:'SDG 12 - Responsible consumption and production', - 13:'SDG 13 - Climate action', - 14:'SDG 14 - Life below water', - 15:'SDG 15 - Life on land', - 16:'SDG 16 - Peace, justice and strong institutions', - 17:'SDG 17 - Partnership for the goals',} - -@st.cache(allow_output_mutation=True) -def load_sdgClassifier(config_file:str = None, classifier_name:str = None): - """ - loads the document classifier using haystack, where the name/path of model - in HF-hub as string is used to fetch the model object.Either configfile or - model should be passed. - 1. https://docs.haystack.deepset.ai/reference/document-classifier-api - 2. https://docs.haystack.deepset.ai/docs/document_classifier - - Params - -------- - config_file: config file path from which to read the model name - classifier_name: if modelname is passed, it takes a priority if not \ - found then will look for configfile, else raise error. - - - Return: document classifier model - """ - if not classifier_name: - if not config_file: - logging.warning("Pass either model name or config file") - return - else: - config = getconfig(config_file) - classifier_name = config.get('sdg','MODEL') - - logging.info("Loading classifier") - doc_classifier = TransformersDocumentClassifier( - model_name_or_path=classifier_name, - task="text-classification") - - return doc_classifier - - -@st.cache(allow_output_mutation=True) -def sdg_classification(haystack_doc:List[Document], - threshold:float = 0.8, - classifier_model:TransformersDocumentClassifier= None - )->Tuple[DataFrame,Series]: - """ - Text-Classification on the list of texts provided. Classifier provides the - most appropriate label for each text. these labels are in terms of if text - belongs to which particular Sustainable Devleopment Goal (SDG). - - Params - --------- - haystack_doc: List of haystack Documents. The output of Preprocessing Pipeline - contains the list of paragraphs in different format,here the list of - Haystack Documents is used. - threshold: threshold value for the model to keep the results from classifier - classifiermodel: you can pass the classifier model directly,which takes priority - however if not then looks for model in streamlit session. - In case of streamlit avoid passing the model directly. 
-
-
-    Returns
-    ----------
-    df: Dataframe with two columns['SDG:int', 'text']
-    x: Series object with the unique SDG covered in the document uploaded and
-    the number of times it is covered/discussed/count_of_paragraphs.
-
-    """
-    logging.info("Working on SDG Classification")
-    if not classifier_model:
-        if check_streamlit():
-            classifier_model = st.session_state['sdg_classifier']
-        else:
-            logging.warning("No streamlit environment found, Pass the classifier")
-            return
-
-    results = classifier_model.predict(haystack_doc)
-
-
-    labels_= [(l.meta['classification']['label'],
-            l.meta['classification']['score'],l.content,) for l in results]
-
-    df = DataFrame(labels_, columns=["SDG","Relevancy","text"])
-
-    df = df.sort_values(by="Relevancy", ascending=False).reset_index(drop=True)
-    df.index += 1
-    df = df[df['Relevancy']>threshold]
-
-    # creating the dataframe for value counts of SDG, along with 'title' of SDGs
-    x = df['SDG'].value_counts()
-    x = x.rename('count')
-    x = x.rename_axis('SDG').reset_index()
-    x["SDG"] = pd.to_numeric(x["SDG"])
-    x = x.sort_values(by=['count'], ascending=False)
-    x['SDG_name'] = x['SDG'].apply(lambda x: _lab_dict[x])
-    x['SDG_Num'] = x['SDG'].apply(lambda x: "SDG "+str(x))
-
-    df['SDG'] = pd.to_numeric(df['SDG'])
-    df = df.sort_values('SDG')
-
-    return df, x
-
-def runSDGPreprocessingPipeline(file_name:str, file_path:str,
-                        split_by: Literal["sentence", "word"] = 'sentence',
-                        split_length:int = 2, split_respect_sentence_boundary:bool = False,
-                        split_overlap:int = 0, remove_punc:bool = False)->List[Document]:
-    """
-    creates the pipeline and runs the preprocessing pipeline,
-    the params for pipeline are fetched from paramconfig
-
-    Params
-    ------------
-
-    file_name: filename, in case of streamlit application use
-    st.session_state['filename']
-    file_path: filepath, in case of streamlit application use st.session_state['filepath']
-    split_by: document splitting strategy either as word or sentence
-    split_length: when synthetically creating the paragraphs from document,
-    it defines the length of paragraph.
-    split_respect_sentence_boundary: Used when using 'word' strategy for
-    splitting of text.
-    split_overlap: Number of words or sentences that overlap when creating
-    the paragraphs. This is done as one sentence or 'some words' make sense
-    when read in together with others. Therefore the overlap is used.
-    remove_punc: to remove all Punctuation including ',' and '.' or not
-
-    Return
-    --------------
-    List[Document]: When preprocessing pipeline is run, the output dictionary
-    has four objects. For the Haystack implementation of SDG classification we
-    need to use the List of Haystack Document, which can be fetched by
-    key = 'documents' on output. 
- - """ - - sdg_processing_pipeline = processingpipeline() - - output_sdg_pre = sdg_processing_pipeline.run(file_paths = file_path, - params= {"FileConverter": {"file_path": file_path, \ - "file_name": file_name}, - "UdfPreProcessor": {"remove_punc": remove_punc, \ - "split_by": split_by, \ - "split_length":split_length,\ - "split_overlap": split_overlap, \ - "split_respect_sentence_boundary":split_respect_sentence_boundary}}) - - return output_sdg_pre diff --git a/spaces/GMFTBY/PandaGPT/scripts/train.sh b/spaces/GMFTBY/PandaGPT/scripts/train.sh deleted file mode 100644 index e071d72afdd773e803e1ce316538594c31d7d41d..0000000000000000000000000000000000000000 --- a/spaces/GMFTBY/PandaGPT/scripts/train.sh +++ /dev/null @@ -1,12 +0,0 @@ -#!/bin/bash - -deepspeed --include localhost:0,1,2,3,4,5,6,7 --master_addr 127.0.0.1 --master_port 28457 train_sft.py \ - --model openllama_peft \ - --stage 1\ - --data_path ../data/pandagpt4_visual_instruction_data.json\ - --image_root_path ../data/images/\ - --imagebind_ckpt_path ../pretrained_ckpt/imagebind_ckpt/\ - --vicuna_ckpt_path ../pretrained_ckpt/vicuna_ckpt/13b_v0/\ - --max_tgt_len 400\ - --save_path ./ckpt/pandagpt_13b_v0_peft/\ - --log_path ./ckpt/pandagpt_13b_v0_peft/log_rest/ diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/agents/transporter.py b/spaces/Gen-Sim/Gen-Sim/cliport/agents/transporter.py deleted file mode 100644 index d24967d1e7f2684176732a06bb9271676f43bbc3..0000000000000000000000000000000000000000 --- a/spaces/Gen-Sim/Gen-Sim/cliport/agents/transporter.py +++ /dev/null @@ -1,539 +0,0 @@ -import os -import numpy as np - -import torch -import torch.nn.functional as F -from pytorch_lightning import LightningModule - -from cliport.tasks import cameras -from cliport.utils import utils -from cliport.models.core.attention import Attention -from cliport.models.core.transport import Transport -from cliport.models.streams.two_stream_attention import TwoStreamAttention -from cliport.models.streams.two_stream_transport import TwoStreamTransport - -from cliport.models.streams.two_stream_attention import TwoStreamAttentionLat -from cliport.models.streams.two_stream_transport import TwoStreamTransportLat -import time -import IPython - -class TransporterAgent(LightningModule): - def __init__(self, name, cfg, train_ds, test_ds): - super().__init__() - utils.set_seed(0) - self.automatic_optimization=False - self.device_type = torch.device('cuda' if torch.cuda.is_available() else 'cpu') # this is bad for PL :( - self.name = name - self.cfg = cfg - self.train_loader = train_ds - self.test_loader = test_ds - - self.train_ds = train_ds.dataset - self.test_ds = test_ds.dataset - - self.name = name - self.task = cfg['train']['task'] - self.total_steps = 0 - self.crop_size = 64 - self.n_rotations = cfg['train']['n_rotations'] - - self.pix_size = 0.003125 - self.in_shape = (320, 160, 6) - self.cam_config = cameras.RealSenseD415.CONFIG - self.bounds = np.array([[0.25, 0.75], [-0.5, 0.5], [0, 0.28]]) - - self.val_repeats = cfg['train']['val_repeats'] - self.save_steps = cfg['train']['save_steps'] - - self._build_model() - ## - # reduce the number of parameters here - ## - self._optimizers = { - 'attn': torch.optim.Adam(self.attention.parameters(), lr=self.cfg['train']['lr']), - 'trans': torch.optim.Adam(self.transport.parameters(), lr=self.cfg['train']['lr']) - } - print("Agent: {}, Logging: {}".format(name, cfg['train']['log'])) - - def configure_optimizers(self): - return self._optimizers - - def _build_model(self): - self.attention = None - self.transport = 
None - raise NotImplementedError() - - def forward(self, x): - raise NotImplementedError() - - def cross_entropy_with_logits(self, pred, labels, reduction='mean'): - # Lucas found that both sum and mean work equally well - x = (-labels.view(len(labels), -1) * F.log_softmax(pred.view(len(labels), -1), -1)) - if reduction == 'sum': - return x.sum() - elif reduction == 'mean': - return x.mean() - else: - raise NotImplementedError() - - def attn_forward(self, inp, softmax=True): - inp_img = inp['inp_img'] - output = self.attention.forward(inp_img, softmax=softmax) - return output - - def attn_training_step(self, frame, backprop=True, compute_err=False): - inp_img = frame['img'] - p0, p0_theta = frame['p0'], frame['p0_theta'] - - inp = {'inp_img': inp_img} - out = self.attn_forward(inp, softmax=False) - return self.attn_criterion(backprop, compute_err, inp, out, p0, p0_theta) - - def attn_criterion(self, backprop, compute_err, inp, out, p, theta): - # Get label. - if type(theta) is torch.Tensor: - theta = theta.detach().cpu().numpy() - - theta_i = theta / (2 * np.pi / self.attention.n_rotations) - theta_i = np.int32(np.round(theta_i)) % self.attention.n_rotations - inp_img = inp['inp_img'].float() - - label_size = inp_img.shape[:3] + (self.attention.n_rotations,) - label = torch.zeros(label_size, dtype=torch.float, device=out.device) - - # remove this for-loop laters - for idx, p_i in enumerate(p): - label[idx, int(p_i[0]), int(p_i[1]), theta_i[idx]] = 1 - label = label.permute((0, 3, 1, 2)).contiguous() - - # Get loss. - loss = self.cross_entropy_with_logits(out, label) - - # Backpropagate. - if backprop: - attn_optim = self._optimizers['attn'] - self.manual_backward(loss) - attn_optim.step() - attn_optim.zero_grad() - - # Pixel and Rotation error (not used anywhere). - err = {} - if compute_err: - with torch.no_grad(): - pick_conf = self.attn_forward(inp) - pick_conf = pick_conf[0].permute(1,2,0) - pick_conf = pick_conf.detach().cpu().numpy() - p = p[0] - theta = theta[0] - - # single batch - argmax = np.argmax(pick_conf) - argmax = np.unravel_index(argmax, shape=pick_conf.shape) - p0_pix = argmax[:2] - p0_theta = argmax[2] * (2 * np.pi / pick_conf.shape[2]) - - err = { - 'dist': np.linalg.norm(np.array(p.detach().cpu().numpy()) - p0_pix, ord=1), - 'theta': np.absolute((theta - p0_theta) % np.pi) - } - return loss, err - - def trans_forward(self, inp, softmax=True): - inp_img = inp['inp_img'] - p0 = inp['p0'] - - output = self.transport.forward(inp_img, p0, softmax=softmax) - return output - - def transport_training_step(self, frame, backprop=True, compute_err=False): - inp_img = frame['img'].float() - p0 = frame['p0'] - p1, p1_theta = frame['p1'], frame['p1_theta'] - - inp = {'inp_img': inp_img, 'p0': p0} - output = self.trans_forward(inp, softmax=False) - err, loss = self.transport_criterion(backprop, compute_err, inp, output, p0, p1, p1_theta) - return loss, err - - def transport_criterion(self, backprop, compute_err, inp, output, p, q, theta): - s = time.time() - if type(theta) is torch.Tensor: - theta = theta.detach().cpu().numpy() - - itheta = theta / (2 * np.pi / self.transport.n_rotations) - itheta = np.int32(np.round(itheta)) % self.transport.n_rotations - - # Get one-hot pixel label map. 
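-        # The target is a single 1 at pixel (q[0], q[1]) in rotation channel itheta,
-        # zeros elsewhere; cross-entropy over the flattened H x W x n_rotations grid
-        # then treats placement as one big classification problem.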
- inp_img = inp['inp_img'] - - # label_size = inp_img.shape[:2] + (self.transport.n_rotations,) - label_size = inp_img.shape[:3] + (self.transport.n_rotations,) - label = torch.zeros(label_size, dtype=torch.float, device=output.device) - - # remove this for-loop laters - q[:,0] = torch.clamp(q[:,0], 0, label.shape[1]-1) - q[:,1] = torch.clamp(q[:,1], 0, label.shape[2]-1) - - for idx, q_i in enumerate(q): - label[idx, int(q_i[0]), int(q_i[1]), itheta[idx]] = 1 - label = label.permute((0, 3, 1, 2)).contiguous() - - # Get loss. - loss = self.cross_entropy_with_logits(output, label) - - if backprop: - transport_optim = self._optimizers['trans'] - transport_optim.zero_grad() - self.manual_backward(loss) - transport_optim.step() - - # Pixel and Rotation error (not used anywhere). - err = {} - if compute_err: - with torch.no_grad(): - place_conf = self.trans_forward(inp) - # pick the first batch - place_conf = place_conf[0] - q = q[0] - theta = theta[0] - place_conf = place_conf.permute(1, 2, 0) - place_conf = place_conf.detach().cpu().numpy() - argmax = np.argmax(place_conf) - argmax = np.unravel_index(argmax, shape=place_conf.shape) - p1_pix = argmax[:2] - p1_theta = argmax[2] * (2 * np.pi / place_conf.shape[2]) - - err = { - 'dist': np.linalg.norm(np.array(q.detach().cpu().numpy()) - p1_pix, ord=1), - 'theta': np.absolute((theta - p1_theta) % np.pi) - } - - self.transport.iters += 1 - return err, loss - - def training_step(self, batch, batch_idx): - - self.attention.train() - self.transport.train() - - frame, _ = batch - self.start_time = time.time() - - # Get training losses. - step = self.total_steps + 1 - loss0, err0 = self.attn_training_step(frame) - - self.start_time = time.time() - - if isinstance(self.transport, Attention): - loss1, err1 = self.attn_training_step(frame) - else: - loss1, err1 = self.transport_training_step(frame) - - total_loss = loss0 + loss1 - self.total_steps = step - self.start_time = time.time() - self.log('tr/attn/loss', loss0) - self.log('tr/trans/loss', loss1) - self.log('tr/loss', total_loss) - self.check_save_iteration() - - return dict( - loss=total_loss, - ) - - def check_save_iteration(self): - global_step = self.total_steps - - if (global_step + 1) % 100 == 0: - # save lastest checkpoint - print(f"Saving last.ckpt Epoch: {self.trainer.current_epoch} | Global Step: {self.trainer.global_step}") - self.save_last_checkpoint() - - def save_last_checkpoint(self): - checkpoint_path = os.path.join(self.cfg['train']['train_dir'], 'checkpoints') - ckpt_path = os.path.join(checkpoint_path, 'last.ckpt') - self.trainer.save_checkpoint(ckpt_path) - - def validation_step(self, batch, batch_idx): - self.attention.eval() - self.transport.eval() - - loss0, loss1 = 0, 0 - assert self.val_repeats >= 1 - for i in range(self.val_repeats): - frame, _ = batch - l0, err0 = self.attn_training_step(frame, backprop=False, compute_err=True) - loss0 += l0 - if isinstance(self.transport, Attention): - l1, err1 = self.attn_training_step(frame, backprop=False, compute_err=True) - loss1 += l1 - else: - l1, err1 = self.transport_training_step(frame, backprop=False, compute_err=True) - loss1 += l1 - loss0 /= self.val_repeats - loss1 /= self.val_repeats - val_total_loss = loss0 + loss1 - - return dict( - val_loss=val_total_loss, - val_loss0=loss0, - val_loss1=loss1, - val_attn_dist_err=err0['dist'], - val_attn_theta_err=err0['theta'], - val_trans_dist_err=err1['dist'], - val_trans_theta_err=err1['theta'], - ) - - def training_epoch_end(self, all_outputs): - 
super().training_epoch_end(all_outputs) - utils.set_seed(self.trainer.current_epoch+1) - - def validation_epoch_end(self, all_outputs): - mean_val_total_loss = np.mean([v['val_loss'].item() for v in all_outputs]) - mean_val_loss0 = np.mean([v['val_loss0'].item() for v in all_outputs]) - mean_val_loss1 = np.mean([v['val_loss1'].item() for v in all_outputs]) - total_attn_dist_err = np.sum([v['val_attn_dist_err'].sum() for v in all_outputs]) - total_attn_theta_err = np.sum([v['val_attn_theta_err'].sum() for v in all_outputs]) - total_trans_dist_err = np.sum([v['val_trans_dist_err'].sum() for v in all_outputs]) - total_trans_theta_err = np.sum([v['val_trans_theta_err'].sum() for v in all_outputs]) - - - self.log('vl/attn/loss', mean_val_loss0) - self.log('vl/trans/loss', mean_val_loss1) - self.log('vl/loss', mean_val_total_loss) - self.log('vl/total_attn_dist_err', total_attn_dist_err) - self.log('vl/total_attn_theta_err', total_attn_theta_err) - self.log('vl/total_trans_dist_err', total_trans_dist_err) - self.log('vl/total_trans_theta_err', total_trans_theta_err) - - print("\nAttn Err - Dist: {:.2f}, Theta: {:.2f}".format(total_attn_dist_err, total_attn_theta_err)) - print("Transport Err - Dist: {:.2f}, Theta: {:.2f}".format(total_trans_dist_err, total_trans_theta_err)) - - return dict( - val_loss=mean_val_total_loss, - val_loss0=mean_val_loss0, - val_loss1=mean_val_loss1, - total_attn_dist_err=total_attn_dist_err, - total_attn_theta_err=total_attn_theta_err, - total_trans_dist_err=total_trans_dist_err, - total_trans_theta_err=total_trans_theta_err, - ) - - def act(self, obs, info=None, goal=None): # pylint: disable=unused-argument - """Run inference and return best action given visual observations.""" - # Get heightmap from RGB-D images. - img = self.test_ds.get_image(obs) - - # Attention model forward pass. - pick_inp = {'inp_img': img} - pick_conf = self.attn_forward(pick_inp) - - - pick_conf = pick_conf.detach().cpu().numpy() - argmax = np.argmax(pick_conf) - argmax = np.unravel_index(argmax, shape=pick_conf.shape) - p0_pix = argmax[:2] - p0_theta = argmax[2] * (2 * np.pi / pick_conf.shape[2]) - - # Transport model forward pass. - place_inp = {'inp_img': img, 'p0': p0_pix} - place_conf = self.trans_forward(place_inp) - place_conf = place_conf.permute(1, 2, 0) - place_conf = place_conf.detach().cpu().numpy() - argmax = np.argmax(place_conf) - argmax = np.unravel_index(argmax, shape=place_conf.shape) - p1_pix = argmax[:2] - p1_theta = argmax[2] * (2 * np.pi / place_conf.shape[2]) - - # Pixels to end effector poses.
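The argmax decoding in act above (and in the error computations earlier) recovers a pixel location and a discretized rotation from an HxWxR confidence map. A small standalone sketch of that step, with H=320, W=160, R=36 assumed:

```python
import numpy as np

conf = np.random.rand(320, 160, 36)        # HxWxR confidence map
u, v, r = np.unravel_index(np.argmax(conf), conf.shape)
theta = r * (2 * np.pi / conf.shape[2])    # rotation bin -> radians
print((u, v), theta)
```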
- hmap = img[:, :, 3] - p0_xyz = utils.pix_to_xyz(p0_pix, hmap, self.bounds, self.pix_size) - p1_xyz = utils.pix_to_xyz(p1_pix, hmap, self.bounds, self.pix_size) - p0_xyzw = utils.eulerXYZ_to_quatXYZW((0, 0, -p0_theta)) - p1_xyzw = utils.eulerXYZ_to_quatXYZW((0, 0, -p1_theta)) - - return { - 'pose0': (np.asarray(p0_xyz), np.asarray(p0_xyzw)), - 'pose1': (np.asarray(p1_xyz), np.asarray(p1_xyzw)), - 'pick': p0_pix, - 'place': p1_pix, - } - - def optimizer_step(self, current_epoch, batch_nb, optimizer, optimizer_i, second_order_closure, on_tpu, using_native_amp, using_lbfgs): - pass - - def configure_optimizers(self): - pass - - def train_dataloader(self): - return self.train_loader - - def val_dataloader(self): - return self.test_loader - - def load(self, model_path): - self.load_state_dict(torch.load(model_path)['state_dict']) - self.to(device=self.device_type) - - -class OriginalTransporterAgent(TransporterAgent): - - def __init__(self, name, cfg, train_ds, test_ds): - super().__init__(name, cfg, train_ds, test_ds) - - def _build_model(self): - stream_fcn = 'plain_resnet' - self.attention = Attention( - stream_fcn=(stream_fcn, None), - in_shape=self.in_shape, - n_rotations=1, - preprocess=utils.preprocess, - cfg=self.cfg, - device=self.device_type, - ) - self.transport = Transport( - stream_fcn=(stream_fcn, None), - in_shape=self.in_shape, - n_rotations=self.n_rotations, - crop_size=self.crop_size, - preprocess=utils.preprocess, - cfg=self.cfg, - device=self.device_type, - ) - - -class ClipUNetTransporterAgent(TransporterAgent): - - def __init__(self, name, cfg, train_ds, test_ds): - super().__init__(name, cfg, train_ds, test_ds) - - def _build_model(self): - stream_fcn = 'clip_unet' - self.attention = Attention( - stream_fcn=(stream_fcn, None), - in_shape=self.in_shape, - n_rotations=1, - preprocess=utils.preprocess, - cfg=self.cfg, - device=self.device_type, - ) - self.transport = Transport( - stream_fcn=(stream_fcn, None), - in_shape=self.in_shape, - n_rotations=self.n_rotations, - crop_size=self.crop_size, - preprocess=utils.preprocess, - cfg=self.cfg, - device=self.device_type, - ) - - -class TwoStreamClipUNetTransporterAgent(TransporterAgent): - - def __init__(self, name, cfg, train_ds, test_ds): - super().__init__(name, cfg, train_ds, test_ds) - - def _build_model(self): - stream_one_fcn = 'plain_resnet' - stream_two_fcn = 'clip_unet' - self.attention = TwoStreamAttention( - stream_fcn=(stream_one_fcn, stream_two_fcn), - in_shape=self.in_shape, - n_rotations=1, - preprocess=utils.preprocess, - cfg=self.cfg, - device=self.device_type, - ) - self.transport = TwoStreamTransport( - stream_fcn=(stream_one_fcn, stream_two_fcn), - in_shape=self.in_shape, - n_rotations=self.n_rotations, - crop_size=self.crop_size, - preprocess=utils.preprocess, - cfg=self.cfg, - device=self.device_type, - ) - - -class TwoStreamClipUNetLatTransporterAgent(TransporterAgent): - - def __init__(self, name, cfg, train_ds, test_ds): - super().__init__(name, cfg, train_ds, test_ds) - - def _build_model(self): - stream_one_fcn = 'plain_resnet_lat' - stream_two_fcn = 'clip_unet_lat' - self.attention = TwoStreamAttentionLat( - stream_fcn=(stream_one_fcn, stream_two_fcn), - in_shape=self.in_shape, - n_rotations=1, - preprocess=utils.preprocess, - cfg=self.cfg, - device=self.device_type, - ) - self.transport = TwoStreamTransportLat( - stream_fcn=(stream_one_fcn, stream_two_fcn), - in_shape=self.in_shape, - n_rotations=self.n_rotations, - crop_size=self.crop_size, - preprocess=utils.preprocess, - cfg=self.cfg, - 
device=self.device_type, - ) - - -class TwoStreamClipWithoutSkipsTransporterAgent(TransporterAgent): - - def __init__(self, name, cfg, train_ds, test_ds): - super().__init__(name, cfg, train_ds, test_ds) - - def _build_model(self): - # TODO: lateral version - stream_one_fcn = 'plain_resnet' - stream_two_fcn = 'clip_woskip' - self.attention = TwoStreamAttention( - stream_fcn=(stream_one_fcn, stream_two_fcn), - in_shape=self.in_shape, - n_rotations=1, - preprocess=utils.preprocess, - cfg=self.cfg, - device=self.device_type, - ) - self.transport = TwoStreamTransport( - stream_fcn=(stream_one_fcn, stream_two_fcn), - in_shape=self.in_shape, - n_rotations=self.n_rotations, - crop_size=self.crop_size, - preprocess=utils.preprocess, - cfg=self.cfg, - device=self.device_type, - ) - - -class TwoStreamRN50BertUNetTransporterAgent(TransporterAgent): - - def __init__(self, name, cfg, train_ds, test_ds): - super().__init__(name, cfg, train_ds, test_ds) - - def _build_model(self): - # TODO: lateral version - stream_one_fcn = 'plain_resnet' - stream_two_fcn = 'rn50_bert_unet' - self.attention = TwoStreamAttention( - stream_fcn=(stream_one_fcn, stream_two_fcn), - in_shape=self.in_shape, - n_rotations=1, - preprocess=utils.preprocess, - cfg=self.cfg, - device=self.device_type, - ) - self.transport = TwoStreamTransport( - stream_fcn=(stream_one_fcn, stream_two_fcn), - in_shape=self.in_shape, - n_rotations=self.n_rotations, - crop_size=self.crop_size, - preprocess=utils.preprocess, - cfg=self.cfg, - device=self.device_type, - ) diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/tasks/primitives.py b/spaces/Gen-Sim/Gen-Sim/cliport/tasks/primitives.py deleted file mode 100644 index de6d4da015622d54c160f65dc9a1682bab649267..0000000000000000000000000000000000000000 --- a/spaces/Gen-Sim/Gen-Sim/cliport/tasks/primitives.py +++ /dev/null @@ -1,113 +0,0 @@ -"""Motion primitives.""" - -import numpy as np -from cliport.utils import utils - - -class PickPlace(): - """Pick and place primitive.""" - - def __init__(self, height=0.32, speed=0.01): - self.height, self.speed = height, speed - - def __call__(self, movej, movep, ee, pose0, pose1): - """Execute pick and place primitive. - - Args: - movej: function to move robot joints. - movep: function to move robot end effector pose. - ee: robot end effector. - pose0: SE(3) picking pose. - pose1: SE(3) placing pose. - - Returns: - timeout: robot movement timed out if True. - """ - - pick_pose, place_pose = pose0, pose1 - - # Execute picking primitive. - prepick_to_pick = ((0, 0, 0.32), (0, 0, 0, 1)) - postpick_to_pick = ((0, 0, self.height), (0, 0, 0, 1)) - prepick_pose = utils.multiply(pick_pose, prepick_to_pick) - postpick_pose = utils.multiply(pick_pose, postpick_to_pick) - timeout = movep(prepick_pose) - - # Move towards pick pose until contact is detected. - delta = (np.float32([0, 0, -0.001]), - utils.eulerXYZ_to_quatXYZW((0, 0, 0))) - targ_pose = prepick_pose - while not ee.detect_contact(): # and target_pose[2] > 0: - targ_pose = utils.multiply(targ_pose, delta) - timeout |= movep(targ_pose) - if timeout: - return True - - # Activate end effector, move up, and check picking success. - ee.activate() - timeout |= movep(postpick_pose, self.speed) - pick_success = ee.check_grasp() - - # Execute placing primitive if pick is successful. 
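Before the placing branch below, an aside on the approach phase above: it descends by composing a 1 mm downward delta onto the target pose until contact is detected. A position-only sketch of that pattern; detect_contact here is a hypothetical stand-in for the real end-effector check, and the heights mirror the constants above:

```python
import numpy as np

def detect_contact(z, table_z=0.0):
    # hypothetical contact predicate standing in for ee.detect_contact()
    return z <= table_z

targ = np.array([0.5, 0.0, 0.32])      # prepick height, as above
delta = np.array([0.0, 0.0, -0.001])   # 1 mm downward step
while not detect_contact(targ[2]):
    targ = targ + delta
```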
- if pick_success: - preplace_to_place = ((0, 0, self.height), (0, 0, 0, 1)) - postplace_to_place = ((0, 0, 0.32), (0, 0, 0, 1)) - preplace_pose = utils.multiply(place_pose, preplace_to_place) - postplace_pose = utils.multiply(place_pose, postplace_to_place) - targ_pose = preplace_pose - while not ee.detect_contact(): - targ_pose = utils.multiply(targ_pose, delta) - timeout |= movep(targ_pose, self.speed) - if timeout: - return True - ee.release() - timeout |= movep(postplace_pose) - - # Move to prepick pose if pick is not successful. - else: - ee.release() - timeout |= movep(prepick_pose) - - return timeout - - -def push(movej, movep, ee, pose0, pose1): # pylint: disable=unused-argument - """Execute pushing primitive. - - Args: - movej: function to move robot joints. - movep: function to move robot end effector pose. - ee: robot end effector. - pose0: SE(3) starting pose. - pose1: SE(3) ending pose. - - Returns: - timeout: robot movement timed out if True. - """ - - # Adjust push start and end positions. - pos0 = np.float32((pose0[0][0], pose0[0][1], 0.005)) - pos1 = np.float32((pose1[0][0], pose1[0][1], 0.005)) - vec = np.float32(pos1) - np.float32(pos0) - length = np.linalg.norm(vec) - vec = vec / length - pos0 -= vec * 0.02 - pos1 -= vec * 0.05 - - # Align spatula against push direction. - theta = np.arctan2(vec[1], vec[0]) - rot = utils.eulerXYZ_to_quatXYZW((0, 0, theta)) - - over0 = (pos0[0], pos0[1], 0.31) - over1 = (pos1[0], pos1[1], 0.31) - - # Execute push. - timeout = movep((over0, rot)) - timeout |= movep((pos0, rot)) - n_push = np.int32(np.floor(np.linalg.norm(pos1 - pos0) / 0.01)) - for i in range(n_push): - # Step toward pos1 in 1 cm increments. - target = pos0 + vec * (i + 1) * 0.01 - timeout |= movep((target, rot), speed=0.003) - timeout |= movep((pos1, rot), speed=0.003) - timeout |= movep((over1, rot)) - return timeout diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/utils/utils.py b/spaces/Gen-Sim/Gen-Sim/cliport/utils/utils.py deleted file mode 100644 index 8d1ecf6a5925b7a4e7ac254b8bdbf5d3f1ed1ee4..0000000000000000000000000000000000000000 --- a/spaces/Gen-Sim/Gen-Sim/cliport/utils/utils.py +++ /dev/null @@ -1,1257 +0,0 @@ -"""Miscellaneous utilities.""" - -import cv2 -import random -import matplotlib -import matplotlib.pyplot as plt -import meshcat -import meshcat.geometry as g -import meshcat.transformations as mtf - -import PIL -import yaml -import numpy as np -from transforms3d import euler - -import pybullet as p -import kornia -from omegaconf import OmegaConf - -import os -import torch -import torchvision - - -# ----------------------------------------------------------------------------- -# HEIGHTMAP UTILS -# ----------------------------------------------------------------------------- - -def get_heightmap(points, colors, bounds, pixel_size): - """Get top-down (z-axis) orthographic heightmap image from 3D pointcloud. - - Args: - points: HxWx3 float array of 3D points in world coordinates. - colors: HxWx3 uint8 array of values in range 0-255 aligned with points. - bounds: 3x2 float array of values (rows: X,Y,Z; columns: min,max) defining - region in 3D space to generate heightmap in world coordinates. - pixel_size: float defining size of each pixel in meters. - - Returns: - heightmap: HxW float array of height (from lower z-bound) in meters. - colormap: HxWx3 uint8 array of backprojected color aligned with heightmap.
- """ - width = int(np.round((bounds[0, 1] - bounds[0, 0]) / pixel_size)) - height = int(np.round((bounds[1, 1] - bounds[1, 0]) / pixel_size)) - heightmap = np.zeros((height, width), dtype=np.float32) - colormap = np.zeros((height, width, colors.shape[-1]), dtype=np.uint8) - - # Filter out 3D points that are outside of the predefined bounds. - ix = (points[Ellipsis, 0] >= bounds[0, 0]) & (points[Ellipsis, 0] < bounds[0, 1]) - iy = (points[Ellipsis, 1] >= bounds[1, 0]) & (points[Ellipsis, 1] < bounds[1, 1]) - iz = (points[Ellipsis, 2] >= bounds[2, 0]) & (points[Ellipsis, 2] < bounds[2, 1]) - valid = ix & iy & iz - points = points[valid] - colors = colors[valid] - - # Sort 3D points by z-value, which works with array assignment to simulate - # z-buffering for rendering the heightmap image. - iz = np.argsort(points[:, -1]) - points, colors = points[iz], colors[iz] - px = np.int32(np.floor((points[:, 0] - bounds[0, 0]) / pixel_size)) - py = np.int32(np.floor((points[:, 1] - bounds[1, 0]) / pixel_size)) - px = np.clip(px, 0, width - 1) - py = np.clip(py, 0, height - 1) - heightmap[py, px] = points[:, 2] - bounds[2, 0] - for c in range(colors.shape[-1]): - colormap[py, px, c] = colors[:, c] - return heightmap, colormap - - -def get_pointcloud(depth, intrinsics): - """Get 3D pointcloud from perspective depth image. - - Args: - depth: HxW float array of perspective depth in meters. - intrinsics: 3x3 float array of camera intrinsics matrix. - - Returns: - points: HxWx3 float array of 3D points in camera coordinates. - """ - height, width = depth.shape - xlin = np.linspace(0, width - 1, width) - ylin = np.linspace(0, height - 1, height) - px, py = np.meshgrid(xlin, ylin) - px = (px - intrinsics[0, 2]) * (depth / intrinsics[0, 0]) - py = (py - intrinsics[1, 2]) * (depth / intrinsics[1, 1]) - points = np.float32([px, py, depth]).transpose(1, 2, 0) - return points - - -def transform_pointcloud(points, transform): - """Apply rigid transformation to 3D pointcloud. - - Args: - points: HxWx3 float array of 3D points in camera coordinates. - transform: 4x4 float array representing a rigid transformation matrix. - - Returns: - points: HxWx3 float array of transformed 3D points. 
- """ - padding = ((0, 0), (0, 0), (0, 1)) - homogen_points = np.pad(points.copy(), padding, - 'constant', constant_values=1) - for i in range(3): - points[Ellipsis, i] = np.sum(transform[i, :] * homogen_points, axis=-1) - return points - - -def reconstruct_heightmaps(color, depth, configs, bounds, pixel_size): - """Reconstruct top-down heightmap views from multiple 3D pointclouds.""" - heightmaps, colormaps = [], [] - for color, depth, config in zip(color, depth, configs): - intrinsics = np.array(config['intrinsics']).reshape(3, 3) - xyz = get_pointcloud(depth, intrinsics) - position = np.array(config['position']).reshape(3, 1) - rotation = p.getMatrixFromQuaternion(config['rotation']) - rotation = np.array(rotation).reshape(3, 3) - transform = np.eye(4) - transform[:3, :] = np.hstack((rotation, position)) - xyz = transform_pointcloud(xyz, transform) - heightmap, colormap = get_heightmap(xyz, color, bounds, pixel_size) - heightmaps.append(heightmap) - colormaps.append(colormap) - return heightmaps, colormaps - - -def pix_to_xyz(pixel, height, bounds, pixel_size, skip_height=False): - """Convert from pixel location on heightmap to 3D position.""" - u, v = pixel - x = bounds[0, 0] + v * pixel_size - y = bounds[1, 0] + u * pixel_size - if not skip_height: - z = bounds[2, 0] + height[u, v] - else: - z = 0.0 - return (x, y, z) - - -def xyz_to_pix(position, bounds, pixel_size): - """Convert from 3D position to pixel location on heightmap.""" - u = int(np.round((position[1] - bounds[1, 0]) / pixel_size)) - v = int(np.round((position[0] - bounds[0, 0]) / pixel_size)) - return (u, v) - - -def unproject_vectorized(uv_coordinates, depth_values, - intrinsic, - distortion): - """Vectorized version of unproject(), for N points. - - Args: - uv_coordinates: pixel coordinates to unproject of shape (n, 2). - depth_values: depth values corresponding index-wise to the uv_coordinates of - shape (n). - intrinsic: array of shape (3, 3). This is typically the return value of - intrinsics_to_matrix. - distortion: camera distortion parameters of shape (5,). - - Returns: - xyz coordinates in camera frame of shape (n, 3). - """ - cam_mtx = intrinsic # shape [3, 3] - cam_dist = np.array(distortion) # shape [5] - - # shape of points_undistorted is [N, 2] after the squeeze(). - points_undistorted = cv2.undistortPoints( - uv_coordinates.reshape((-1, 1, 2)), cam_mtx, cam_dist).squeeze() - - x = points_undistorted[:, 0] * depth_values - y = points_undistorted[:, 1] * depth_values - - xyz = np.vstack((x, y, depth_values)).T - return xyz - - -def unproject_depth_vectorized(im_depth, depth_dist, - camera_mtx, - camera_dist): - """Unproject depth image into 3D point cloud, using calibration. - - Args: - im_depth: raw depth image, pre-calibration of shape (height, width). - depth_dist: depth distortion parameters of shape (8,) - camera_mtx: intrinsics matrix of shape (3, 3). This is typically the return - value of intrinsics_to_matrix. - camera_dist: camera distortion parameters shape (5,). - - Returns: - numpy array of shape [3, H*W]. each column is xyz coordinates - """ - h, w = im_depth.shape - - # shape of each u_map, v_map is [H, W]. - u_map, v_map = np.meshgrid(np.linspace( - 0, w - 1, w), np.linspace(0, h - 1, h)) - - adjusted_depth = depth_dist[0] + im_depth * depth_dist[1] - - # shape after stack is [N, 2], where N = H * W. 
- uv_coordinates = np.stack((u_map.reshape(-1), v_map.reshape(-1)), axis=-1) - - return unproject_vectorized(uv_coordinates, adjusted_depth.reshape(-1), - camera_mtx, camera_dist) - - -# ----------------------------------------------------------------------------- -# MATH UTILS -# ----------------------------------------------------------------------------- - - -def sample_distribution(prob, n_samples=1): - """Sample data point from a custom distribution.""" - flat_prob = prob.flatten() / np.sum(prob) - rand_ind = np.random.choice( - np.arange(len(flat_prob)), n_samples, p=flat_prob, replace=False) - rand_ind_coords = np.array(np.unravel_index(rand_ind, prob.shape)).T - return np.int32(rand_ind_coords.squeeze()) - - -# ------------------------------------------------------------------------- -# Transformation Helper Functions -# ------------------------------------------------------------------------- - - -def invert(pose): - return p.invertTransform(pose[0], pose[1]) - - -def multiply(pose0, pose1): - return p.multiplyTransforms(pose0[0], pose0[1], pose1[0], pose1[1]) - - -def apply(pose, position): - position = np.float32(position) - position_shape = position.shape - position = np.float32(position).reshape(3, -1) - rotation = np.float32(p.getMatrixFromQuaternion(pose[1])).reshape(3, 3) - translation = np.float32(pose[0]).reshape(3, 1) - position = rotation @ position + translation - return tuple(position.reshape(position_shape)) - - -def eulerXYZ_to_quatXYZW(rotation): # pylint: disable=invalid-name - """Abstraction for converting from a 3-parameter rotation to quaternion. - - This will help us easily switch which rotation parameterization we use. - Quaternion should be in xyzw order for pybullet. - - Args: - rotation: a 3-parameter rotation, in xyz order, tuple of 3 floats - - Returns: - quaternion, in xyzw order, tuple of 4 floats - """ - euler_zxy = (rotation[2], rotation[0], rotation[1]) - quaternion_wxyz = euler.euler2quat(*euler_zxy, axes='szxy') - q = quaternion_wxyz - quaternion_xyzw = (q[1], q[2], q[3], q[0]) - return quaternion_xyzw - - -def quatXYZW_to_eulerXYZ(quaternion_xyzw): # pylint: disable=invalid-name - """Abstraction for converting from quaternion to a 3-parameter rotation. - - This will help us easily switch which rotation parameterization we use. - Quaternion should be in xyzw order for pybullet. - - Args: - quaternion_xyzw: in xyzw order, tuple of 4 floats - - Returns: - rotation: a 3-parameter rotation, in xyz order, tuple of 3 floats - """ - q = quaternion_xyzw - quaternion_wxyz = np.array([q[3], q[0], q[1], q[2]]) - euler_zxy = euler.quat2euler(quaternion_wxyz, axes='szxy') - euler_xyz = (euler_zxy[1], euler_zxy[2], euler_zxy[0]) - return euler_xyz - - -def apply_transform(transform_to_from, points_from): - r"""Transforms points (3D) into new frame. - - Using transform_to_from notation.
- - Args: - transform_to_from: numpy.ndarray of shape [B,4,4], SE3 - points_from: numpy.ndarray of shape [B,3,N] - - Returns: - points_to: numpy.ndarray of shape [B,3,N] - """ - num_points = points_from.shape[-1] - - # non-batched - if len(transform_to_from.shape) == 2: - ones = np.ones((1, num_points)) - - # makes these each into homogenous vectors - points_from = np.vstack((points_from, ones)) # [4,N] - points_to = transform_to_from @ points_from # [4,N] - return points_to[0:3, :] # [3,N] - - # batched - else: - assert len(transform_to_from.shape) == 3 - batch_size = transform_to_from.shape[0] - zeros = np.ones((batch_size, 1, num_points)) - points_from = np.concatenate((points_from, zeros), axis=1) - assert points_from.shape[1] == 4 - points_to = transform_to_from @ points_from - return points_to[:, 0:3, :] - - -# ----------------------------------------------------------------------------- -# IMAGE UTILS -# ----------------------------------------------------------------------------- - - -def preprocess(img, dist='transporter'): - """Pre-process input (subtract mean, divide by std).""" - - transporter_color_mean = [0.18877631, 0.18877631, 0.18877631] - transporter_color_std = [0.07276466, 0.07276466, 0.07276466] - transporter_depth_mean = 0.00509261 - transporter_depth_std = 0.00903967 - - franka_color_mean = [0.622291933, 0.628313992, 0.623031488] - franka_color_std = [0.168154213, 0.17626014, 0.184527364] - franka_depth_mean = 0.872146842 - franka_depth_std = 0.195743116 - - clip_color_mean = [0.48145466, 0.4578275, 0.40821073] - clip_color_std = [0.26862954, 0.26130258, 0.27577711] - - # choose distribution - if dist == 'clip': - color_mean = clip_color_mean - color_std = clip_color_std - elif dist == 'mdetr': - color_mean = [0.485, 0.456, 0.406] - color_std = [0.229, 0.224, 0.225] - elif dist == 'franka': - color_mean = franka_color_mean - color_std = franka_color_std - else: - color_mean = transporter_color_mean - color_std = transporter_color_std - - if dist == 'franka': - depth_mean = franka_depth_mean - depth_std = franka_depth_std - else: - depth_mean = transporter_depth_mean - depth_std = transporter_depth_std - - # convert to pytorch tensor (if required) - if type(img) == torch.Tensor: - def cast_shape(stat, img): - tensor = torch.from_numpy(np.array(stat)).to(device=img.device, dtype=img.dtype) - tensor = tensor.unsqueeze(0).unsqueeze(-1).unsqueeze(-1) - tensor = tensor.repeat(img.shape[0], 1, img.shape[-2], img.shape[-1]) - return tensor - - color_mean = cast_shape(color_mean, img) - color_std = cast_shape(color_std, img) - depth_mean = cast_shape(depth_mean, img) - depth_std = cast_shape(depth_std, img) - - # normalize - img = img.clone() - img[:, :3, :, :] = ((img[:, :3, :, :] / 255 - color_mean) / color_std) - img[:, 3:, :, :] = ((img[:, 3:, :, :] - depth_mean) / depth_std) - else: - # normalize - img[:, :, :3] = (img[:, :, :3] / 255 - color_mean) / color_std - img[:, :, 3:] = (img[:, :, 3:] - depth_mean) / depth_std - - # if dist == 'franka' or dist == 'transporter': - # print(np.mean(img[:,:3,:,:].detach().cpu().numpy(), axis=(0,2,3)), - # np.mean(img[:,3,:,:].detach().cpu().numpy())) - - return img - -def map_kit_scale(scale): - return (scale[0] / 10, scale[1] / 10, scale[2] / 10) - -def deprocess(img): - color_mean = 0.18877631 - depth_mean = 0.00509261 - color_std = 0.07276466 - depth_std = 0.00903967 - - img[:, :, :3] = np.uint8(((img[:, :, :3] * color_std) + color_mean) * 255) - img[:, :, 3:] = np.uint8(((img[:, :, 3:] * depth_std) + depth_mean) * 255) - return 
img - - -def get_fused_heightmap(obs, configs, bounds, pix_size): - """Reconstruct orthographic heightmaps with segmentation masks.""" - heightmaps, colormaps = reconstruct_heightmaps( - obs['color'], obs['depth'], configs, bounds, pix_size) - colormaps = np.float32(colormaps) - heightmaps = np.float32(heightmaps) - - # Fuse maps from different views. - valid = np.sum(colormaps, axis=3) > 0 - repeat = np.sum(valid, axis=0) - repeat[repeat == 0] = 1 - cmap = np.sum(colormaps, axis=0) / repeat[Ellipsis, None] - cmap = np.uint8(np.round(cmap)) - hmap = np.max(heightmaps, axis=0) # Max to handle occlusions. - return cmap, hmap - - -def get_image_transform(theta, trans, pivot=(0, 0)): - """Compute composite 2D rigid transformation matrix.""" - # Get 2D rigid transformation matrix that rotates an image by theta (in - # radians) around pivot (in pixels) and translates by trans vector (in - # pixels) - pivot_t_image = np.array([[1., 0., -pivot[0]], [0., 1., -pivot[1]], - [0., 0., 1.]]) - image_t_pivot = np.array([[1., 0., pivot[0]], [0., 1., pivot[1]], - [0., 0., 1.]]) - transform = np.array([[np.cos(theta), -np.sin(theta), trans[0]], - [np.sin(theta), np.cos(theta), trans[1]], [0., 0., 1.]]) - return np.dot(image_t_pivot, np.dot(transform, pivot_t_image)) - - -def check_transform(image, pixel, transform): - """Valid transform only if pixel locations are still in FoV after transform.""" - new_pixel = np.flip( - np.int32( - np.round( - np.dot(transform, - np.float32([pixel[1], pixel[0], - 1.]).reshape(3, 1))))[:2].squeeze()) - valid = np.all( - new_pixel >= 0 - ) and new_pixel[0] < image.shape[0] and new_pixel[1] < image.shape[1] - return valid, new_pixel - - -def get_se3_from_image_transform(theta, trans, pivot, heightmap, bounds, - pixel_size): - """Calculate SE3 from image transform.""" - position_center = pix_to_xyz( - np.flip(np.int32(np.round(pivot))), - heightmap, - bounds, - pixel_size, - skip_height=False) - new_position_center = pix_to_xyz( - np.flip(np.int32(np.round(pivot + trans))), - heightmap, - bounds, - pixel_size, - skip_height=True) - # Don't look up the z height, it might get augmented out of frame - new_position_center = (new_position_center[0], new_position_center[1], - position_center[2]) - - delta_position = np.array(new_position_center) - np.array(position_center) - - t_world_center = np.eye(4) - t_world_center[0:3, 3] = np.array(position_center) - - t_centernew_center = np.eye(4) - euler_zxy = (-theta, 0, 0) - t_centernew_center[0:3, 0:3] = euler.euler2mat( - *euler_zxy, axes='szxy')[0:3, 0:3] - - t_centernew_center_tonly = np.eye(4) - t_centernew_center_tonly[0:3, 3] = -delta_position - t_centernew_center = t_centernew_center @ t_centernew_center_tonly - - t_world_centernew = t_world_center @ np.linalg.inv(t_centernew_center) - return t_world_center, t_world_centernew - - -def get_random_image_transform_params(image_size, theta_sigma=60): - theta = np.random.normal(0, np.deg2rad(theta_sigma)) - - trans_sigma = np.min(image_size) / 6 - trans = np.random.normal(0, trans_sigma, size=2) # [x, y] - pivot = (image_size[1] / 2, image_size[0] / 2) - return theta, trans, pivot - - -def q_mult(q1, q2): - w1, x1, y1, z1 = q1 - w2, x2, y2, z2 = q2 - w = w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2 - x = w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2 - y = w1 * y2 + y1 * w2 + z1 * x2 - x1 * z2 - z = w1 * z2 + z1 * w2 + x1 * y2 - y1 * x2 - return (w, x, y, z) - -def perturb(input_image, pixels, theta_sigma=60, add_noise=False): - """Data augmentation on images.""" - image_size = input_image.shape[:2] 
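A quick check of get_image_transform above (assuming it is importable as cliport.utils.utils, per this file's path): with zero rotation the pivot terms cancel, leaving a pure pixel translation:

```python
import numpy as np
from cliport.utils.utils import get_image_transform  # assumed import path

T = get_image_transform(theta=0.0, trans=(5.0, -3.0), pivot=(10, 10))
p = T @ np.array([2.0, 4.0, 1.0])     # homogeneous pixel coordinate
assert np.allclose(p[:2], [7.0, 1.0])
```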
- - # Compute random rigid transform. - while True: - theta, trans, pivot = get_random_image_transform_params(image_size, theta_sigma=theta_sigma) - transform = get_image_transform(theta, trans, pivot) - transform_params = theta, trans, pivot - - # Ensure pixels remain in the image after transform. - is_valid = True - new_pixels = [] - new_rounded_pixels = [] - for pixel in pixels: - pixel = np.float32([pixel[1], pixel[0], 1.]).reshape(3, 1) - - rounded_pixel = np.int32(np.round(transform @ pixel))[:2].squeeze() - rounded_pixel = np.flip(rounded_pixel) - - pixel = (transform @ pixel)[:2].squeeze() - pixel = np.flip(pixel) - - in_fov_rounded = rounded_pixel[0] < image_size[0] and rounded_pixel[ - 1] < image_size[1] - in_fov = pixel[0] < image_size[0] and pixel[1] < image_size[1] - - is_valid = is_valid and np.all(rounded_pixel >= 0) and np.all( - pixel >= 0) and in_fov_rounded and in_fov - - new_pixels.append(pixel) - new_rounded_pixels.append(rounded_pixel) - if is_valid: - break - - # Apply rigid transform to image and pixel labels. - input_image = cv2.warpAffine( - input_image, - transform[:2, :], (image_size[1], image_size[0]), - flags=cv2.INTER_LINEAR) - - # Apply noise - color = np.int32(input_image[:,:,:3]) - depth = np.float32(input_image[:,:,3:]) - - if add_noise: - color += np.int32(np.random.normal(0, 3, image_size + (3,))) - color = np.uint8(np.clip(color, 0, 255)) - - depth += np.float32(np.random.normal(0, 0.003, image_size + (3,))) - - input_image = np.concatenate((color, depth), axis=2) - - # length of 5 - transform_params = np.array([theta, trans[0], trans[1], pivot[0], pivot[1]]) - return input_image, new_pixels, new_rounded_pixels, transform_params - - -def apply_perturbation(input_image, transform_params): - '''Apply data augmentation with specific transform params''' - image_size = input_image.shape[:2] - - # Apply rigid transform to image and pixel labels. 
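perturb returns the sampled transform_params = (theta, tx, ty, pivot_u, pivot_v), so apply_perturbation (continued just below) can replay the identical warp on an aligned image. A hedged sketch with assumed imports; since add_noise defaults to False, the depth channels of both outputs should match exactly:

```python
import numpy as np
from cliport.utils.utils import perturb, apply_perturbation  # assumed imports

rgbd = (np.random.rand(320, 160, 6) * 255).astype(np.float32)
warped, _, _, params = perturb(rgbd.copy(), pixels=[(100, 80)])
replayed = apply_perturbation(rgbd.copy(), params)
assert np.allclose(warped[..., 3:], replayed[..., 3:])
```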
- theta, trans, pivot = transform_params[0], transform_params[1:3], transform_params[3:5] - transform = get_image_transform(theta, trans, pivot) - - input_image = cv2.warpAffine( - input_image, - transform[:2, :], (image_size[1], image_size[0]), - flags=cv2.INTER_LINEAR) - return input_image - - -class ImageRotator: - """Rotate for n rotations.""" - # Reference: https://kornia.readthedocs.io/en/latest/tutorials/warp_affine.html?highlight=rotate - - def __init__(self, n_rotations): - self.angles = [] - for i in range(n_rotations): - theta = i * 2 * 180 / n_rotations - self.angles.append(theta) - - def __call__(self, x_list, pivot, reverse=False): - rot_x_list = [] - for i, angle in enumerate(self.angles): - x = x_list[i]# .unsqueeze(0) - # create transformation (rotation) - size = len(x) - alpha = angle if not reverse else (-1.0 * angle) # in degrees - angle = torch.ones(size) * alpha - - # define the rotation center - if type(pivot) is not torch.Tensor: - center = torch.FloatTensor(pivot)[...,[1,0]] - center = center.view(1,-1).repeat((size,1)) - else: - center = pivot[...,[1,0]].view(1,-1).clone().to(angle.device) - # center: torch.tensor = torch.ones(size, 2) - # center[..., 0] = int(pivot[1]) - # center[..., 1] = int(pivot[0]) - - # define the scale factor - scale = torch.ones(size, 2) - - # # compute the transformation matrix - M = kornia.geometry.get_rotation_matrix2d(center, angle, scale) - # x_warped = torchvision.transforms.functional.affine(x.float(), scale=1., - # center=[int(pivot[1]),int(pivot[0])], - # angle=alpha, translate=[0,0], shear=0, - # interpolation= torchvision.transforms.InterpolationMode.BILINEAR) - - - # apply the transformation to original image - # M = M.repeat(len(x), 1, 1) - _, _, h, w = x.shape - x_warped = kornia.geometry.transform.warp_affine(x.float(), M.to(x.device), dsize=(h, w)) - x_warped = x_warped - rot_x_list.append(x_warped) - - return rot_x_list - -# KD Tree Utils -# Construct K-D Tree to roughly estimate how many objects can fit inside the box. -class TreeNode: - - def __init__(self, parent, children, bbox): - self.parent = parent - self.children = children - self.bbox = bbox # min x, min y, min z, max x, max y, max z - -def KDTree(node, min_object_dim, margin, bboxes): - size = node.bbox[3:] - node.bbox[:3] - - # Choose which axis to split. - split = size > 2 * min_object_dim - if np.sum(split) == 0: - bboxes.append(node.bbox) - return - split = np.float32(split) / np.sum(split) - split_axis = np.random.choice(range(len(split)), 1, p=split)[0] - - # Split along chosen axis and create 2 children - cut_ind = np.random.rand() * \ - (size[split_axis] - 2 * min_object_dim) + \ - node.bbox[split_axis] + min_object_dim - child1_bbox = node.bbox.copy() - child1_bbox[3 + split_axis] = cut_ind - margin / 2. - child2_bbox = node.bbox.copy() - child2_bbox[split_axis] = cut_ind + margin / 2. 
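A hedged usage sketch of the box-splitting recursion defined here (the recursion is completed just below): partition a workspace bounding box into non-overlapping candidate object boxes. The import path is assumed from this file's location:

```python
import numpy as np
from cliport.utils.utils import TreeNode, KDTree  # assumed imports

root = TreeNode(None, [], bbox=np.array([0., 0., 0., 0.4, 0.4, 0.1]))
bboxes = []
KDTree(root, min_object_dim=0.05, margin=0.01, bboxes=bboxes)
print(len(bboxes), 'candidate boxes')  # count varies with the random cuts
```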
- node.children = [ - TreeNode(node, [], bbox=child1_bbox), - TreeNode(node, [], bbox=child2_bbox) - ] - KDTree(node.children[0], min_object_dim, margin, bboxes) - KDTree(node.children[1], min_object_dim, margin, bboxes) - -# ----------------------------------------------------------------------------- -# Shape Name UTILS -# ----------------------------------------------------------------------------- -google_seen_obj_shapes = { - 'train': [ - 'alarm clock', - 'android toy', - 'black boot with leopard print', - 'black fedora', - 'black razer mouse', - 'black sandal', - 'black shoe with orange stripes', - 'bull figure', - 'butterfinger chocolate', - 'c clamp', - 'can opener', - 'crayon box', - 'dog statue', - 'frypan', - 'green and white striped towel', - 'grey soccer shoe with cleats', - 'hard drive', - 'honey dipper', - 'magnifying glass', - 'mario figure', - 'nintendo 3ds', - 'nintendo cartridge', - 'office depot box', - 'orca plush toy', - 'pepsi gold caffeine free box', - 'pepsi wild cherry box', - 'porcelain cup', - 'purple tape', - 'red and white flashlight', - 'rhino figure', - 'rocket racoon figure', - 'scissors', - 'silver tape', - 'spatula with purple head', - 'spiderman figure', - 'tablet', - 'toy school bus', - ], - 'val': [ - 'ball puzzle', - 'black and blue sneakers', - 'black shoe with green stripes', - 'brown fedora', - 'dinosaur figure', - 'hammer', - 'light brown boot with golden laces', - 'lion figure', - 'pepsi max box', - 'pepsi next box', - 'porcelain salad plate', - 'porcelain spoon', - 'red and white striped towel', - 'red cup', - 'screwdriver', - 'toy train', - 'unicorn toy', - 'white razer mouse', - 'yoshi figure' - ], - 'test': [ - 'ball puzzle', - 'black and blue sneakers', - 'black shoe with green stripes', - 'brown fedora', - 'dinosaur figure', - 'hammer', - 'light brown boot with golden laces', - 'lion figure', - 'pepsi max box', - 'pepsi next box', - 'porcelain salad plate', - 'porcelain spoon', - 'red and white striped towel', - 'red cup', - 'screwdriver', - 'toy train', - 'unicorn toy', - 'white razer mouse', - 'yoshi figure' - ], - } - -google_unseen_obj_shapes = { - 'train': [ - 'alarm clock', - 'android toy', - 'black boot with leopard print', - 'black fedora', - 'black razer mouse', - 'black sandal', - 'black shoe with orange stripes', - 'bull figure', - 'butterfinger chocolate', - 'c clamp', - 'can opener', - 'crayon box', - 'dog statue', - 'frypan', - 'green and white striped towel', - 'grey soccer shoe with cleats', - 'hard drive', - 'honey dipper', - 'magnifying glass', - 'mario figure', - 'nintendo 3ds', - 'nintendo cartridge', - 'office depot box', - 'orca plush toy', - 'pepsi gold caffeine free box', - 'pepsi wild cherry box', - 'porcelain cup', - 'purple tape', - 'red and white flashlight', - 'rhino figure', - 'rocket racoon figure', - 'scissors', - 'silver tape', - 'spatula with purple head', - 'spiderman figure', - 'tablet', - 'toy school bus', - ], - 'val': [ - 'ball puzzle', - 'black and blue sneakers', - 'black shoe with green stripes', - 'brown fedora', - 'dinosaur figure', - 'hammer', - 'light brown boot with golden laces', - 'lion figure', - 'pepsi max box', - 'pepsi next box', - 'porcelain salad plate', - 'porcelain spoon', - 'red and white striped towel', - 'red cup', - 'screwdriver', - 'toy train', - 'unicorn toy', - 'white razer mouse', - 'yoshi figure' - ], - 'test': [ - 'ball puzzle', - 'black and blue sneakers', - 'black shoe with green stripes', - 'brown fedora', - 'dinosaur figure', - 'hammer', - 'light brown boot with golden 
laces', - 'lion figure', - 'pepsi max box', - 'pepsi next box', - 'porcelain salad plate', - 'porcelain spoon', - 'red and white striped towel', - 'red cup', - 'screwdriver', - 'toy train', - 'unicorn toy', - 'white razer mouse', - 'yoshi figure' - ], - } - -google_all_shapes = { - 'train': [ - 'alarm clock', - 'android toy', - 'ball puzzle', - 'black and blue sneakers', - 'black boot with leopard print', - 'black fedora', - 'black razer mouse', - 'black sandal', - 'black shoe with green stripes', - 'black shoe with orange stripes', - 'brown fedora', - 'bull figure', - 'butterfinger chocolate', - 'c clamp', - 'can opener', - 'crayon box', - 'dinosaur figure', - 'dog statue', - 'frypan', - 'green and white striped towel', - 'grey soccer shoe with cleats', - 'hammer', - 'hard drive', - 'honey dipper', - 'light brown boot with golden laces', - 'lion figure', - 'magnifying glass', - 'mario figure', - 'nintendo 3ds', - 'nintendo cartridge', - 'office depot box', - 'orca plush toy', - 'pepsi gold caffeine free box', - 'pepsi max box', - 'pepsi next box', - 'pepsi wild cherry box', - 'porcelain cup', - 'porcelain salad plate', - 'porcelain spoon', - 'purple tape', - 'red and white flashlight', - 'red and white striped towel', - 'red cup', - 'rhino figure', - 'rocket racoon figure', - 'scissors', - 'screwdriver', - 'silver tape', - 'spatula with purple head', - 'spiderman figure', - 'tablet', - 'toy school bus', - 'toy train', - 'unicorn toy', - 'white razer mouse', - 'yoshi figure', - ], - 'val': [ - 'alarm clock', - 'android toy', - 'ball puzzle', - 'black and blue sneakers', - 'black boot with leopard print', - 'black fedora', - 'black razer mouse', - 'black sandal', - 'black shoe with green stripes', - 'black shoe with orange stripes', - 'brown fedora', - 'bull figure', - 'butterfinger chocolate', - 'c clamp', - 'can opener', - 'crayon box', - 'dinosaur figure', - 'dog statue', - 'frypan', - 'green and white striped towel', - 'grey soccer shoe with cleats', - 'hammer', - 'hard drive', - 'honey dipper', - 'light brown boot with golden laces', - 'lion figure', - 'magnifying glass', - 'mario figure', - 'nintendo 3ds', - 'nintendo cartridge', - 'office depot box', - 'orca plush toy', - 'pepsi gold caffeine free box', - 'pepsi max box', - 'pepsi next box', - 'pepsi wild cherry box', - 'porcelain cup', - 'porcelain salad plate', - 'porcelain spoon', - 'purple tape', - 'red and white flashlight', - 'red and white striped towel', - 'red cup', - 'rhino figure', - 'rocket racoon figure', - 'scissors', - 'screwdriver', - 'silver tape', - 'spatula with purple head', - 'spiderman figure', - 'tablet', - 'toy school bus', - 'toy train', - 'unicorn toy', - 'white razer mouse', - 'yoshi figure', - ], - 'test': [ - 'alarm clock', - 'android toy', - 'ball puzzle', - 'black and blue sneakers', - 'black boot with leopard print', - 'black fedora', - 'black razer mouse', - 'black sandal', - 'black shoe with green stripes', - 'black shoe with orange stripes', - 'brown fedora', - 'bull figure', - 'butterfinger chocolate', - 'c clamp', - 'can opener', - 'crayon box', - 'dinosaur figure', - 'dog statue', - 'frypan', - 'green and white striped towel', - 'grey soccer shoe with cleats', - 'hammer', - 'hard drive', - 'honey dipper', - 'light brown boot with golden laces', - 'lion figure', - 'magnifying glass', - 'mario figure', - 'nintendo 3ds', - 'nintendo cartridge', - 'office depot box', - 'orca plush toy', - 'pepsi gold caffeine free box', - 'pepsi max box', - 'pepsi next box', - 'pepsi wild cherry box', - 'porcelain 
cup', - 'porcelain salad plate', - 'porcelain spoon', - 'purple tape', - 'red and white flashlight', - 'red and white striped towel', - 'red cup', - 'rhino figure', - 'rocket racoon figure', - 'scissors', - 'screwdriver', - 'silver tape', - 'spatula with purple head', - 'spiderman figure', - 'tablet', - 'toy school bus', - 'toy train', - 'unicorn toy', - 'white razer mouse', - 'yoshi figure', - ], - } -assembling_kit_shapes = { - 0: "letter R shape", - 1: "letter A shape", - 2: "triangle", - 3: "square", - 4: "plus", - 5: "letter T shape", - 6: "diamond", - 7: "pentagon", - 8: "rectangle", - 9: "flower", - 10: "star", - 11: "circle", - 12: "letter G shape", - 13: "letter V shape", - 14: "letter E shape", - 15: "letter L shape", - 16: "ring", - 17: "hexagon", - 18: "heart", - 19: "letter M shape", - } - -# ----------------------------------------------------------------------------- -# COLOR AND PLOT UTILS -# ----------------------------------------------------------------------------- - - -# Colors (Tableau palette). -COLORS = { - 'blue': [78.0 / 255.0, 121.0 / 255.0, 167.0 / 255.0], - 'red': [255.0 / 255.0, 087.0 / 255.0, 089.0 / 255.0], - 'green': [089.0 / 255.0, 169.0 / 255.0, 078.0 / 255.0], - 'orange': [242.0 / 255.0, 142.0 / 255.0, 043.0 / 255.0], - 'yellow': [237.0 / 255.0, 201.0 / 255.0, 072.0 / 255.0], - 'purple': [176.0 / 255.0, 122.0 / 255.0, 161.0 / 255.0], - 'pink': [255.0 / 255.0, 157.0 / 255.0, 167.0 / 255.0], - 'cyan': [118.0 / 255.0, 183.0 / 255.0, 178.0 / 255.0], - 'brown': [156.0 / 255.0, 117.0 / 255.0, 095.0 / 255.0], - 'white': [255.0 / 255.0, 255.0 / 255.0, 255.0 / 255.0], - 'gray': [186.0 / 255.0, 176.0 / 255.0, 172.0 / 255.0], - 'indigo': [75.0 / 255.0, 0.0 / 255.0, 130.0 / 255.0], - 'violet': [143.0 / 255.0, 0.0 / 255.0, 255.0 / 255.0], - 'black': [0.0 / 255.0, 0.0 / 255.0, 0.0 / 255.0], - 'silver': [192.0 / 255.0, 192.0 / 255.0, 192.0 / 255.0], - 'gold': [255.0 / 255.0, 215.0 / 255.0, 0.0 / 255.0], - -} - -COLORS_NAMES = list(COLORS.keys()) -TRAIN_COLORS = ['blue', 'red', 'green', 'yellow', 'brown', 'gray', 'cyan'] -EVAL_COLORS = ['blue', 'red', 'green', 'orange', 'purple', 'pink', 'white'] - - -def get_colors(mode, n_colors=-1, **kwargs): - all_color_names = get_colors_names(mode) - - if n_colors == -1: - all_color_names = all_color_names - else: - all_color_names = random.sample(all_color_names, n_colors) - return [COLORS[cn] for cn in all_color_names], all_color_names - -def get_colors_names(mode): - if mode == 'train': - return TRAIN_COLORS - elif mode == 'full': - return TRAIN_COLORS - else: - return TRAIN_COLORS - -def get_random_color(): - return get_colors(mode='train', n_colors=1) - -def solve_hanoi_all(n_disks): - # Solve Hanoi sequence with dynamic programming. - hanoi_steps = [] # [[object index, from rod, to rod], ...] - - def solve_hanoi(n, t0, t1, t2): - if n == 0: - hanoi_steps.append([n, t0, t1]) - return - solve_hanoi(n - 1, t0, t2, t1) - hanoi_steps.append([n, t0, t1]) - solve_hanoi(n - 1, t2, t1, t0) - - solve_hanoi(n_disks - 1, 0, 2, 1) - return hanoi_steps - -def plot(fname, # pylint: disable=dangerous-default-value - title, - ylabel, - xlabel, - data, - xlim=[-np.inf, 0], - xticks=None, - ylim=[np.inf, -np.inf], - show_std=True): - """Plot frame data.""" - # Data is a dictionary that maps experiment names to tuples with 3 - # elements: x (size N array) and y (size N array) and y_std (size N array) - - # Get data limits. 
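solve_hanoi_all above emits the classic optimal move sequence: the recursion bottoms out at n == 0 with one move, so n_disks disks yield 2**n_disks - 1 steps. For example (import path assumed from this file's location):

```python
from cliport.utils.utils import solve_hanoi_all  # assumed import

steps = solve_hanoi_all(3)       # [[disk index, from rod, to rod], ...]
assert len(steps) == 2 ** 3 - 1  # 7 moves for 3 disks
```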
- for name, (x, y, _) in data.items(): - del name - y = np.array(y) - xlim[0] = max(xlim[0], np.min(x)) - xlim[1] = max(xlim[1], np.max(x)) - ylim[0] = min(ylim[0], np.min(y)) - ylim[1] = max(ylim[1], np.max(y)) - - # Draw background. - plt.title(title, fontsize=14) - plt.ylim(ylim) - plt.ylabel(ylabel, fontsize=14) - plt.yticks(fontsize=14) - plt.xlim(xlim) - plt.xlabel(xlabel, fontsize=14) - plt.grid(True, linestyle='-', color=[0.8, 0.8, 0.8]) - ax = plt.gca() - for axis in ['top', 'bottom', 'left', 'right']: - ax.spines[axis].set_color('#000000') - plt.rcParams.update({'font.size': 14}) - plt.rcParams['mathtext.default'] = 'regular' - matplotlib.rcParams['pdf.fonttype'] = 42 - matplotlib.rcParams['ps.fonttype'] = 42 - - # Draw data. - color_iter = 0 - for name, (x, y, std) in data.items(): - del name - x, y, std = np.float32(x), np.float32(y), np.float32(std) - upper = np.clip(y + std, ylim[0], ylim[1]) - lower = np.clip(y - std, ylim[0], ylim[1]) - color = COLORS[list(COLORS.keys())[color_iter]] - if show_std: - plt.fill_between(x, upper, lower, color=color, linewidth=0, alpha=0.3) - plt.plot(x, y, color=color, linewidth=2, marker='o', alpha=1.) - color_iter += 1 - - if xticks: - plt.xticks(ticks=range(len(xticks)), labels=xticks, fontsize=14) - else: - plt.xticks(fontsize=14) - plt.legend([name for name, _ in data.items()], - loc='lower right', fontsize=14) - plt.tight_layout() - plt.savefig(fname) - plt.clf() - - -# ----------------------------------------------------------------------------- -# MESHCAT UTILS -# ----------------------------------------------------------------------------- - -def create_visualizer(clear=True): - print('Waiting for meshcat server... have you started a server?') - vis = meshcat.Visualizer(zmq_url='tcp://127.0.0.1:6000') - if clear: - vis.delete() - return vis - - -def make_frame(vis, name, h, radius, o=1.0): - """Add a red-green-blue triad to the Meshcat visualizer.
- - Args: - vis (MeshCat Visualizer): the visualizer - name (string): name for this frame (should be unique) - h (float): height of frame visualization - radius (float): radius of frame visualization - o (float): opacity - """ - vis[name]['x'].set_object( - g.Cylinder(height=h, radius=radius), - g.MeshLambertMaterial(color=0xff0000, reflectivity=0.8, opacity=o)) - rotate_x = mtf.rotation_matrix(np.pi / 2.0, [0, 0, 1]) - rotate_x[0, 3] = h / 2 - vis[name]['x'].set_transform(rotate_x) - - vis[name]['y'].set_object( - g.Cylinder(height=h, radius=radius), - g.MeshLambertMaterial(color=0x00ff00, reflectivity=0.8, opacity=o)) - rotate_y = mtf.rotation_matrix(np.pi / 2.0, [0, 1, 0]) - rotate_y[1, 3] = h / 2 - vis[name]['y'].set_transform(rotate_y) - - vis[name]['z'].set_object( - g.Cylinder(height=h, radius=radius), - g.MeshLambertMaterial(color=0x0000ff, reflectivity=0.8, opacity=o)) - rotate_z = mtf.rotation_matrix(np.pi / 2.0, [1, 0, 0]) - rotate_z[2, 3] = h / 2 - vis[name]['z'].set_transform(rotate_z) - - -def meshcat_visualize(vis, obs, act, info): - """Visualize data using meshcat.""" - - for key in sorted(info.keys()): - pose = info[key] - pick_transform = np.eye(4) - pick_transform[0:3, 3] = pose[0] - quaternion_wxyz = np.asarray( - [pose[1][3], pose[1][0], pose[1][1], pose[1][2]]) - pick_transform[0:3, 0:3] = mtf.quaternion_matrix(quaternion_wxyz)[0:3, 0:3] - label = 'obj_' + str(key) - make_frame(vis, label, h=0.05, radius=0.0012, o=1.0) - vis[label].set_transform(pick_transform) - - for cam_index in range(len(act['camera_config'])): - verts = unproject_depth_vectorized( - obs['depth'][cam_index], np.array([0, 1]), - np.array(act['camera_config'][cam_index]['intrinsics']).reshape(3, 3), - np.zeros(5)) - - # switch from [N,3] to [3,N] - verts = verts.T - - cam_transform = np.eye(4) - cam_transform[0:3, 3] = act['camera_config'][cam_index]['position'] - quaternion_xyzw = act['camera_config'][cam_index]['rotation'] - quaternion_wxyz = np.asarray([ - quaternion_xyzw[3], quaternion_xyzw[0], quaternion_xyzw[1], - quaternion_xyzw[2] - ]) - cam_transform[0:3, 0:3] = mtf.quaternion_matrix(quaternion_wxyz)[0:3, 0:3] - verts = apply_transform(cam_transform, verts) - - colors = obs['color'][cam_index].reshape(-1, 3).T / 255.0 - - vis['pointclouds/' + str(cam_index)].set_object( - g.PointCloud(position=verts, color=colors)) - - -# ----------------------------------------------------------------------------- -# CONFIG UTILS -# ----------------------------------------------------------------------------- - -def set_seed(seed, torch=False): - random.seed(seed) - os.environ['PYTHONHASHSEED'] = str(seed) - np.random.seed(seed) - - if torch: - import torch - torch.manual_seed(seed) - - -def load_cfg(yaml_path): - with open(yaml_path, 'r') as f: - data = yaml.safe_load(f) - return data - - -def load_hydra_config(config_path): - return OmegaConf.load(config_path) diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/detr/README.md b/spaces/Gradio-Blocks/uniformer_image_detection/configs/detr/README.md deleted file mode 100644 index 711a308a5549b28c36515405feabf2ca0f7c7c1f..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/detr/README.md +++ /dev/null @@ -1,27 +0,0 @@ -# DETR - -## Introduction - -[ALGORITHM] - -We provide the config files for DETR: [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872). 
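For reference, a hedged inference sketch with mmdetection's high-level API (mmdet 2.x); the config and checkpoint paths are assumptions based on the table below:

```python
from mmdet.apis import init_detector, inference_detector

config = 'configs/detr/detr_r50_8x2_150e_coco.py'
checkpoint = 'detr_r50_8x2_150e_coco_20201130_194835-2c4b8974.pth'
model = init_detector(config, checkpoint, device='cuda:0')
result = inference_detector(model, 'demo/demo.jpg')  # per-class boxes
```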
- -```BibTeX -@inproceedings{detr, - author = {Nicolas Carion and - Francisco Massa and - Gabriel Synnaeve and - Nicolas Usunier and - Alexander Kirillov and - Sergey Zagoruyko}, - title = {End-to-End Object Detection with Transformers}, - booktitle = {ECCV}, - year = {2020} -} -``` - -## Results and Models - -| Backbone | Model | Lr schd | Mem (GB) | Inf time (fps) | box AP | Config | Download | -|:------:|:--------:|:-------:|:--------:|:--------------:|:------:|:------:|:--------:| -| R-50 | DETR |150e |7.9| | 40.1 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/detr/detr_r50_8x2_150e_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/detr/detr_r50_8x2_150e_coco/detr_r50_8x2_150e_coco_20201130_194835-2c4b8974.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/detr/detr_r50_8x2_150e_coco/detr_r50_8x2_150e_coco_20201130_194835.log.json) | diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/vfnet/vfnet_r2_101_fpn_mdconv_c3-c5_mstrain_2x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/vfnet/vfnet_r2_101_fpn_mdconv_c3-c5_mstrain_2x_coco.py deleted file mode 100644 index 8da3122657adc2785129c28a84473c25777abba3..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/vfnet/vfnet_r2_101_fpn_mdconv_c3-c5_mstrain_2x_coco.py +++ /dev/null @@ -1,16 +0,0 @@ -_base_ = './vfnet_r50_fpn_mdconv_c3-c5_mstrain_2x_coco.py' -model = dict( - pretrained='open-mmlab://res2net101_v1d_26w_4s', - backbone=dict( - type='Res2Net', - depth=101, - scales=4, - base_width=26, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - style='pytorch', - dcn=dict(type='DCNv2', deform_groups=1, fallback_on_stride=False), - stage_with_dcn=(False, True, True, True))) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_r50-d8_480x480_40k_pascal_context.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_r50-d8_480x480_40k_pascal_context.py deleted file mode 100644 index 7c57a6f8ff0a7dbb18666c1b9c882da10e586aa3..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_r50-d8_480x480_40k_pascal_context.py +++ /dev/null @@ -1,9 +0,0 @@ -_base_ = [ - '../_base_/models/fcn_r50-d8.py', '../_base_/datasets/pascal_context.py', - '../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py' -] -model = dict( - decode_head=dict(num_classes=60), - auxiliary_head=dict(num_classes=60), - test_cfg=dict(mode='slide', crop_size=(480, 480), stride=(320, 320))) -optimizer = dict(type='SGD', lr=0.004, momentum=0.9, weight_decay=0.0001) diff --git a/spaces/GroveStreet/GTA_SOVITS/vencoder/dphubert/__init__.py b/spaces/GroveStreet/GTA_SOVITS/vencoder/dphubert/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/GuXiaoBei/wechat-chatbot/scripts/shutdown.sh b/spaces/GuXiaoBei/wechat-chatbot/scripts/shutdown.sh deleted file mode 100644 index c2bf6b14adcafd46e7278ab3730ab7f78b82c593..0000000000000000000000000000000000000000 --- a/spaces/GuXiaoBei/wechat-chatbot/scripts/shutdown.sh +++ /dev/null @@ -1,16 +0,0 @@ -#!/bin/bash - -#关闭服务 -cd `dirname $0`/.. -export BASE_DIR=`pwd` -pid=`ps ax | grep -i app.py | grep "${BASE_DIR}" | grep python3 | grep -v grep | awk '{print $1}'` -if [ -z "$pid" ] ; then - echo "No chatgpt-on-wechat running." 
- exit -1; -fi - -echo "The chatgpt-on-wechat(${pid}) is running..." - -kill ${pid} - -echo "Send shutdown request to chatgpt-on-wechat(${pid}) OK" diff --git a/spaces/HaoFeng2019/DocGeoNet/extractor.py b/spaces/HaoFeng2019/DocGeoNet/extractor.py deleted file mode 100644 index 2e242193b8e14be6c74f89afd20b7d11cd8b6d62..0000000000000000000000000000000000000000 --- a/spaces/HaoFeng2019/DocGeoNet/extractor.py +++ /dev/null @@ -1,115 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - - -class ResidualBlock(nn.Module): - def __init__(self, in_planes, planes, norm_fn='group', stride=1): - super(ResidualBlock, self).__init__() - - self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=3, padding=1, stride=stride) - self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, padding=1) - self.relu = nn.ReLU(inplace=True) - - num_groups = planes // 8 - - if norm_fn == 'group': - self.norm1 = nn.GroupNorm(num_groups=num_groups, num_channels=planes) - self.norm2 = nn.GroupNorm(num_groups=num_groups, num_channels=planes) - if not stride == 1: - self.norm3 = nn.GroupNorm(num_groups=num_groups, num_channels=planes) - - elif norm_fn == 'batch': - self.norm1 = nn.BatchNorm2d(planes) - self.norm2 = nn.BatchNorm2d(planes) - if not stride == 1: - self.norm3 = nn.BatchNorm2d(planes) - - elif norm_fn == 'instance': - self.norm1 = nn.InstanceNorm2d(planes) - self.norm2 = nn.InstanceNorm2d(planes) - if not stride == 1: - self.norm3 = nn.InstanceNorm2d(planes) - - elif norm_fn == 'none': - self.norm1 = nn.Sequential() - self.norm2 = nn.Sequential() - if not stride == 1: - self.norm3 = nn.Sequential() - - if stride == 1: - self.downsample = None - - else: - self.downsample = nn.Sequential( - nn.Conv2d(in_planes, planes, kernel_size=1, stride=stride), self.norm3) - - - def forward(self, x): - y = x - y = self.relu(self.norm1(self.conv1(y))) - y = self.relu(self.norm2(self.conv2(y))) - - if self.downsample is not None: - x = self.downsample(x) - - return self.relu(x+y) - - -class BasicEncoder(nn.Module): - def __init__(self, input_dim=128, output_dim=128, norm_fn='batch'): - super(BasicEncoder, self).__init__() - self.norm_fn = norm_fn - - if self.norm_fn == 'group': - self.norm1 = nn.GroupNorm(num_groups=8, num_channels=64) - - elif self.norm_fn == 'batch': - self.norm1 = nn.BatchNorm2d(64) - - elif self.norm_fn == 'instance': - self.norm1 = nn.InstanceNorm2d(64) - - elif self.norm_fn == 'none': - self.norm1 = nn.Sequential() - - self.conv1 = nn.Conv2d(input_dim, 64, kernel_size=7, stride=2, padding=3) - self.relu1 = nn.ReLU(inplace=True) - - self.in_planes = 64 - self.layer1 = self._make_layer(64, stride=1) - self.layer2 = self._make_layer(128, stride=2) - self.layer3 = self._make_layer(192, stride=2) - - # output convolution - self.conv2 = nn.Conv2d(192, output_dim, kernel_size=1) - - for m in self.modules(): - if isinstance(m, nn.Conv2d): - nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - elif isinstance(m, (nn.BatchNorm2d, nn.InstanceNorm2d, nn.GroupNorm)): - if m.weight is not None: - nn.init.constant_(m.weight, 1) - if m.bias is not None: - nn.init.constant_(m.bias, 0) - - def _make_layer(self, dim, stride=1): - layer1 = ResidualBlock(self.in_planes, dim, self.norm_fn, stride=stride) - layer2 = ResidualBlock(dim, dim, self.norm_fn, stride=1) - layers = (layer1, layer2) - - self.in_planes = dim - return nn.Sequential(*layers) - - def forward(self, x): - x = self.conv1(x) - x = self.norm1(x) - x = self.relu1(x) - - x = self.layer1(x) - x = self.layer2(x) - x = 
self.layer3(x) - - x = self.conv2(x) - - return x diff --git a/spaces/HarryLee/eCommerceImageCaptioning/run_scripts/caption/train_caption_stage1_el.sh b/spaces/HarryLee/eCommerceImageCaptioning/run_scripts/caption/train_caption_stage1_el.sh deleted file mode 100644 index f12ee52c9d24fe296410da30b67e0ef5e9e76254..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/run_scripts/caption/train_caption_stage1_el.sh +++ /dev/null @@ -1,109 +0,0 @@ -#!/usr/bin/env - -# The port for communication. Note that if you want to run multiple tasks on the same machine, -# you need to specify different port numbers. -export MASTER_PORT=1051 - -log_dir=./stage1_logs -save_dir=./stage1_checkpoints -mkdir -p $log_dir $save_dir - -bpe_dir=../../utils/BPE -user_dir=../../ofa_module - -data_dir=../../dataset/caption_data -data=${data_dir}/caption_stage1_train.tsv,${data_dir}/caption_val.tsv -restore_file=../../checkpoints/ofa_large.pt -selected_cols=0,4,2 - -task=caption -arch=ofa_large -criterion=adjust_label_smoothed_encouraging_loss # for el -label_smoothing=0.1 -lr=1e-5 -max_epoch=5 -warmup_ratio=0.06 -batch_size=8 -update_freq=4 -resnet_drop_path_rate=0.0 -encoder_drop_path_rate=0.1 -decoder_drop_path_rate=0.1 -dropout=0.1 -attention_dropout=0.0 -max_src_length=80 -max_tgt_length=20 -num_bins=1000 -patch_image_size=480 -eval_cider_cached=${data_dir}/cider_cached_tokens/coco-valid-words.p -drop_worst_ratio=0.05 # modified from 0.2 for el -log_end=0.75 # for el -for max_epoch in {2,}; do - echo "max_epoch "${max_epoch} - for warmup_ratio in {0.06,}; do - echo "warmup_ratio "${warmup_ratio} - for drop_worst_after in {2500,}; do - echo "drop_worst_after "${drop_worst_after} - - log_file=${log_dir}/${max_epoch}"_"${warmup_ratio}"_"${drop_worst_after}_el${log_end}_".log" - save_path=${save_dir}/${max_epoch}"_"${warmup_ratio}"_"${drop_worst_after}_el${log_end}_ - mkdir -p $save_path - - CUDA_VISIBLE_DEVICES=0,1,2,3 python3 -m torch.distributed.launch --nproc_per_node=4 --master_port=${MASTER_PORT} ../../train.py \ - $data \ - --selected-cols=${selected_cols} \ - --bpe-dir=${bpe_dir} \ - --user-dir=${user_dir} \ - --restore-file=${restore_file} \ - --reset-optimizer --reset-dataloader --reset-meters \ - --save-dir=${save_path} \ - --task=${task} \ - --arch=${arch} \ - --criterion=${criterion} \ - --label-smoothing=${label_smoothing} \ - --batch-size=${batch_size} \ - --update-freq=${update_freq} \ - --encoder-normalize-before \ - --decoder-normalize-before \ - --share-decoder-input-output-embed \ - --share-all-embeddings \ - --layernorm-embedding \ - --patch-layernorm-embedding \ - --code-layernorm-embedding \ - --resnet-drop-path-rate=${resnet_drop_path_rate} \ - --encoder-drop-path-rate=${encoder_drop_path_rate} \ - --decoder-drop-path-rate=${decoder_drop_path_rate} \ - --dropout=${dropout} \ - --attention-dropout=${attention_dropout} \ - --weight-decay=0.01 --optimizer=adam --adam-betas="(0.9,0.999)" --adam-eps=1e-08 --clip-norm=1.0 \ - --lr-scheduler=polynomial_decay --lr=${lr} \ - --max-epoch=${max_epoch} --warmup-ratio=${warmup_ratio} \ - --log-format=simple --log-interval=10 \ - --fixed-validation-seed=7 \ - --no-epoch-checkpoints --keep-best-checkpoints=1 \ - --save-interval=1 --validate-interval=1 \ - --save-interval-updates=500 --validate-interval-updates=500 \ - --eval-cider \ - --eval-cider-cached-tokens=${eval_cider_cached} \ - --eval-args='{"beam":5,"max_len_b":16,"no_repeat_ngram_size":3}' \ - --best-checkpoint-metric=cider --maximize-best-checkpoint-metric \ - 
--max-src-length=${max_src_length} \
-          --max-tgt-length=${max_tgt_length} \
-          --find-unused-parameters \
-          --freeze-encoder-embedding \
-          --freeze-decoder-embedding \
-          --add-type-embedding \
-          --scale-attn \
-          --scale-fc \
-          --scale-heads \
-          --disable-entangle \
-          --num-bins=${num_bins} \
-          --patch-image-size=${patch_image_size} \
-          --drop-worst-ratio=${drop_worst_ratio} \
-          --drop-worst-after=${drop_worst_after} \
-          --log-end ${log_end} \
-          --fp16 \
-          --fp16-scale-window=512 \
-          --num-workers=0 > ${log_file} 2>&1
-    done
-  done
-done
diff --git a/spaces/Harveenchadha/Hindi_TTS/vakyansh_tts/scripts/train_hifi.sh b/spaces/Harveenchadha/Hindi_TTS/vakyansh_tts/scripts/train_hifi.sh
deleted file mode 100644
index 6955a6e0f07777c1db68eae0e25bb48900adb70d..0000000000000000000000000000000000000000
--- a/spaces/Harveenchadha/Hindi_TTS/vakyansh_tts/scripts/train_hifi.sh
+++ /dev/null
@@ -1,10 +0,0 @@
-python ../src/hifi_gan/train.py \
-    --config '' \
-    --input_wavs_dir '' \
-    --input_mels_dir '' \
-    --input_training_file '' \
-    --input_validation_file '' \
-    --checkpoint_path '' \
-    --logs_path '' \
-    --checkpoint_interval 10000 \
-    --stdout_interval 50
diff --git a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/src/glow_tts/utils.py b/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/src/glow_tts/utils.py
deleted file mode 100644
index a591aa319ccb264110111cda55c4a232b41aae74..0000000000000000000000000000000000000000
--- a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/src/glow_tts/utils.py
+++ /dev/null
@@ -1,282 +0,0 @@
-import os
-import glob
-import sys
-import argparse
-import logging
-import json
-import subprocess
-import numpy as np
-from scipy.io.wavfile import read
-import torch
-
-MATPLOTLIB_FLAG = False
-
-logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
-logger = logging
-
-
-def load_checkpoint(checkpoint_path, model, optimizer=None):
-    assert os.path.isfile(checkpoint_path)
-    checkpoint_dict = torch.load(checkpoint_path, map_location="cpu")
-    iteration = 1
-    learning_rate = None  # default, so the return below cannot hit an unbound name
-    if "iteration" in checkpoint_dict.keys():
-        iteration = checkpoint_dict["iteration"]
-    if "learning_rate" in checkpoint_dict.keys():
-        learning_rate = checkpoint_dict["learning_rate"]
-    if optimizer is not None and "optimizer" in checkpoint_dict.keys():
-        optimizer.load_state_dict(checkpoint_dict["optimizer"])
-    saved_state_dict = checkpoint_dict["model"]
-    if hasattr(model, "module"):
-        state_dict = model.module.state_dict()
-    else:
-        state_dict = model.state_dict()
-    new_state_dict = {}
-    for k, v in state_dict.items():
-        try:
-            new_state_dict[k] = saved_state_dict[k]
-        except KeyError:
-            logger.info("%s is not in the checkpoint" % k)
-            new_state_dict[k] = v
-    if hasattr(model, "module"):
-        model.module.load_state_dict(new_state_dict)
-    else:
-        model.load_state_dict(new_state_dict)
-    logger.info(
-        "Loaded checkpoint '{}' (iteration {})".format(checkpoint_path, iteration)
-    )
-    return model, optimizer, learning_rate, iteration
-
-
-def save_checkpoint(model, optimizer, learning_rate, iteration, checkpoint_path):
-    logger.info(
-        "Saving model and optimizer state at iteration {} to {}".format(
-            iteration, checkpoint_path
-        )
-    )
-    if hasattr(model, "module"):
-        state_dict = model.module.state_dict()
-    else:
-        state_dict = model.state_dict()
-    torch.save(
-        {
-            "model": state_dict,
-            "iteration": iteration,
-            "optimizer": optimizer.state_dict(),
-            "learning_rate": learning_rate,
-        },
-        checkpoint_path,
-    )
-
-
-def summarize(writer, global_step, scalars={}, histograms={}, images={}):
-    for k, v in scalars.items():
writer.add_scalar(k, v, global_step)
-    for k, v in histograms.items():
-        writer.add_histogram(k, v, global_step)
-    for k, v in images.items():
-        writer.add_image(k, v, global_step, dataformats="HWC")
-
-
-def latest_checkpoint_path(dir_path, regex="G_*.pth"):
-    f_list = glob.glob(os.path.join(dir_path, regex))
-    f_list.sort(key=lambda f: int("".join(filter(str.isdigit, f))))
-    x = f_list[-1]
-    print(x)
-    return x
-
-
-def plot_spectrogram_to_numpy(spectrogram):
-    global MATPLOTLIB_FLAG
-    if not MATPLOTLIB_FLAG:
-        import matplotlib
-
-        matplotlib.use("Agg")
-        MATPLOTLIB_FLAG = True
-        mpl_logger = logging.getLogger("matplotlib")
-        mpl_logger.setLevel(logging.WARNING)
-    import matplotlib.pylab as plt
-    import numpy as np
-
-    fig, ax = plt.subplots()
-    im = ax.imshow(spectrogram, aspect="auto", origin="lower", interpolation="none")
-    plt.colorbar(im, ax=ax)
-    plt.xlabel("Frames")
-    plt.ylabel("Channels")
-    plt.tight_layout()
-
-    fig.canvas.draw()
-    # np.fromstring is deprecated for binary data; np.frombuffer is the safe equivalent
-    data = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)
-    data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
-    plt.close()
-    return data
-
-
-def plot_alignment_to_numpy(alignment, info=None):
-    global MATPLOTLIB_FLAG
-    if not MATPLOTLIB_FLAG:
-        import matplotlib
-
-        matplotlib.use("Agg")
-        MATPLOTLIB_FLAG = True
-        mpl_logger = logging.getLogger("matplotlib")
-        mpl_logger.setLevel(logging.WARNING)
-    import matplotlib.pylab as plt
-    import numpy as np
-
-    fig, ax = plt.subplots(figsize=(6, 4))
-    im = ax.imshow(alignment, aspect="auto", origin="lower", interpolation="none")
-    fig.colorbar(im, ax=ax)
-    xlabel = "Decoder timestep"
-    if info is not None:
-        xlabel += "\n\n" + info
-    plt.xlabel(xlabel)
-    plt.ylabel("Encoder timestep")
-    plt.tight_layout()
-
-    fig.canvas.draw()
-    # same deprecation fix as above
-    data = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)
-    data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
-    plt.close()
-    return data
-
-
-def load_wav_to_torch(full_path):
-    sampling_rate, data = read(full_path)
-    return torch.FloatTensor(data.astype(np.float32)), sampling_rate
-
-
-def load_filepaths_and_text(filename, split="|"):
-    with open(filename, encoding="utf-8") as f:
-        filepaths_and_text = [line.strip().split(split) for line in f]
-    return filepaths_and_text
-
-
-def get_hparams(init=True):
-    parser = argparse.ArgumentParser()
-    parser.add_argument("-c", "--config", type=str, help="JSON file for configuration")
-    parser.add_argument("-m", "--model", type=str, help="Model name")
-    # parser.add_argument('-g', '--gan', type=str,
-    #                     help='Model name')
-    parser.add_argument("-l", "--logs", type=str, help="logs name")
-    # parser.add_argument('-s', '--mels', type=str,
-    #                     help='logs name')
-
-    args = parser.parse_args()
-    # model_dir = os.path.join("./logs", args.model)
-    model_dir = args.model
-    if not os.path.exists(model_dir):
-        os.makedirs(model_dir)
-
-    config_path = args.config
-    config_save_path = os.path.join(model_dir, "config.json")
-
-    # if not config_path : config_path = config_save_path
-
-    if init:
-        with open(config_path, "r") as f:
-            data = f.read()
-        with open(config_save_path, "w") as f:
-            f.write(data)
-    else:
-        with open(config_save_path, "r") as f:
-            data = f.read()
-    config = json.loads(data)
-
-    hparams = HParams(**config)
-    hparams.model_dir = model_dir
-    hparams.log_dir = args.logs
-    # hparams.mels_dir = args.mels
-    # hparams.gan_dir = args.gan
-    return hparams
-
-
-def get_hparams_from_dir(model_dir):
-    config_save_path = os.path.join(model_dir, "config.json")
-    with open(config_save_path,
"r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_file(config_path): - with open(config_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - return hparams - - -def check_git_hash(model_dir): - source_dir = os.path.dirname(os.path.realpath(__file__)) - if not os.path.exists(os.path.join(source_dir, ".git")): - logger.warn( - "{} is not a git repository, therefore hash value comparison will be ignored.".format( - source_dir - ) - ) - return - - cur_hash = subprocess.getoutput("git rev-parse HEAD") - - path = os.path.join(model_dir, "githash") - if os.path.exists(path): - saved_hash = open(path).read() - if saved_hash != cur_hash: - logger.warn( - "git hash values are different. {}(saved) != {}(current)".format( - saved_hash[:8], cur_hash[:8] - ) - ) - else: - open(path, "w").write(cur_hash) - - -def get_logger(model_dir, filename="train.log"): - global logger - logger = logging.getLogger(os.path.basename(model_dir)) - logger.setLevel(logging.DEBUG) - - formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s") - if not os.path.exists(model_dir): - os.makedirs(model_dir) - h = logging.FileHandler(os.path.join(model_dir, filename)) - h.setLevel(logging.DEBUG) - h.setFormatter(formatter) - logger.addHandler(h) - return logger - - -class HParams: - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() diff --git a/spaces/Harveenchadha/oiTrans/scripts/concat_joint_data.py b/spaces/Harveenchadha/oiTrans/scripts/concat_joint_data.py deleted file mode 100644 index f1496177b0f47869e8e58ebdb0395c2c457e300a..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/oiTrans/scripts/concat_joint_data.py +++ /dev/null @@ -1,130 +0,0 @@ -import os -from tqdm import tqdm -import sys - -LANGS = [ - "as", - "bn", - "gu", - "hi", - "kn", - "ml", - "mr", - "or", - "pa", - "ta", - "te", - #"ur" -] - - -def add_token(sent, tag_infos): - """ add special tokens specified by tag_infos to each element in list - - tag_infos: list of tuples (tag_type,tag) - - each tag_info results in a token of the form: __{tag_type}__{tag}__ - - """ - - tokens = [] - for tag_type, tag in tag_infos: - token = '__' + tag_type + '__' + tag + '__' - tokens.append(token) - - return ' '.join(tokens) + ' ' + sent - - -def concat_data(data_dir, outdir, lang_pair_list, - out_src_lang='SRC', out_trg_lang='TGT', split='train'): - """ - data_dir: input dir, contains directories for language pairs named l1-l2 - """ - os.makedirs(outdir, exist_ok=True) - - out_src_fname = '{}/{}.{}'.format(outdir, split, out_src_lang) - out_trg_fname = '{}/{}.{}'.format(outdir, split, out_trg_lang) -# out_meta_fname='{}/metadata.txt'.format(outdir) - - print() - print(out_src_fname) - print(out_trg_fname) -# print(out_meta_fname) - - # concatenate train data - if os.path.isfile(out_src_fname): - os.unlink(out_src_fname) - if os.path.isfile(out_trg_fname): - 
os.unlink(out_trg_fname) -# if os.path.isfile(out_meta_fname): -# os.unlink(out_meta_fname) - - for src_lang, trg_lang in tqdm(lang_pair_list): - print('src: {}, tgt:{}'.format(src_lang, trg_lang)) - - in_src_fname = '{}/{}-{}/{}.{}'.format( - data_dir, src_lang, trg_lang, split, src_lang) - in_trg_fname = '{}/{}-{}/{}.{}'.format( - data_dir, src_lang, trg_lang, split, trg_lang) - - if not os.path.exists(in_src_fname): - continue - if not os.path.exists(in_trg_fname): - continue - - print(in_src_fname) - os.system('cat {} >> {}'.format(in_src_fname, out_src_fname)) - - print(in_trg_fname) - os.system('cat {} >> {}'.format(in_trg_fname, out_trg_fname)) - - -# with open('{}/lang_pairs.txt'.format(outdir),'w',encoding='utf-8') as lpfile: -# lpfile.write('\n'.join( [ '-'.join(x) for x in lang_pair_list ] )) - - corpus_stats(data_dir, outdir, lang_pair_list, split) - - -def corpus_stats(data_dir, outdir, lang_pair_list, split): - """ - data_dir: input dir, contains directories for language pairs named l1-l2 - """ - - with open('{}/{}_lang_pairs.txt'.format(outdir, split), 'w', encoding='utf-8') as lpfile: - - for src_lang, trg_lang in tqdm(lang_pair_list): - print('src: {}, tgt:{}'.format(src_lang, trg_lang)) - - in_src_fname = '{}/{}-{}/{}.{}'.format( - data_dir, src_lang, trg_lang, split, src_lang) - # in_trg_fname='{}/{}-{}/train.{}'.format(data_dir,src_lang,trg_lang,trg_lang) - if not os.path.exists(in_src_fname): - continue - - print(in_src_fname) - corpus_size = 0 - with open(in_src_fname, 'r', encoding='utf-8') as infile: - corpus_size = sum(map(lambda x: 1, infile)) - - lpfile.write('{}\t{}\t{}\n'.format( - src_lang, trg_lang, corpus_size)) - - -if __name__ == '__main__': - - in_dir = sys.argv[1] - out_dir = sys.argv[2] - src_lang = sys.argv[3] - tgt_lang = sys.argv[4] - split = sys.argv[5] - lang_pair_list = [] - - if src_lang == 'en': - for lang in LANGS: - lang_pair_list.append(['en', lang]) - else: - for lang in LANGS: - lang_pair_list.append([lang, 'en']) - - concat_data(in_dir, out_dir, lang_pair_list, split=split) - diff --git a/spaces/Hellisotherpeople/HF-SHAP/app.py b/spaces/Hellisotherpeople/HF-SHAP/app.py deleted file mode 100644 index 157f6498f33f927a3e81c095104daf7c7d0050d4..0000000000000000000000000000000000000000 --- a/spaces/Hellisotherpeople/HF-SHAP/app.py +++ /dev/null @@ -1,157 +0,0 @@ -import subprocess -import sys - - -##Lines 1-8 are necessary because the normal requirements.txt path for installing a package from disk doesn't work on HF spaces, thank you to Omar Sanseviero for the help! 
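-# A minimal, hypothetical sketch of the install-from-disk step described above.
-# The actual package path is not shown in this file, so the argument below is an
-# assumed placeholder rather than the original value:
-# subprocess.check_call([sys.executable, "-m", "pip", "install", "./local_package"])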
- -import numpy as np -import pandas as pd -import shap -import streamlit as st -import streamlit.components.v1 as components -from datasets import load_dataset -from transformers import (AutoModelForCausalLM, AutoModelForQuestionAnswering, - AutoModelForSeq2SeqLM, - AutoModelForSequenceClassification, AutoTokenizer, - pipeline) - - -st.set_page_config(page_title="HF-SHAP") -st.title("HF-SHAP: A front end for SHAP") -st.caption("By Allen Roush") -st.caption("github: https://github.com/Hellisotherpeople") -st.caption("Linkedin: https://www.linkedin.com/in/allen-roush-27721011b/") -st.title("SHAP (SHapley Additive exPlanations)") -st.image("https://shap.readthedocs.io/en/latest/_images/shap_header.png", width = 700) -st.caption("By Lundberg, Scott M and Lee, Su-In") -st.caption("Slightly modified by Allen Roush to fix a bug with text plotting not working outside of Jupyter Notebooks") -st.caption("Full Citation: https://raw.githubusercontent.com/slundberg/shap/master/docs/references/shap_nips.bib") -st.caption("See on github:: https://github.com/slundberg/shap") -st.caption("More details of how SHAP works: https://christophm.github.io/interpretable-ml-book/shap.html") - - -form = st.sidebar.form("Main Settings") - -form.header("Main Settings") - - - -task_done = form.selectbox("Which NLP task do you want to solve?", ["Text Generation", "Sentiment Analysis", "Translation", "Summarization"]) - - - - - -custom_doc = form.checkbox("Use a document from an existing dataset?", value = False) -if custom_doc: - dataset_name = form.text_area("Enter the name of the huggingface Dataset to do analysis of:", value = "Hellisotherpeople/DebateSum") - dataset_name_2 = form.text_area("Enter the name of the config for the dataset if it has one", value = "") - split_name = form.text_area("Enter the name of the split of the dataset that you want to use", value = "train") - number_of_records = form.number_input("Enter the number of documents that you want to analyze from the dataset", value = 200) - column_name = form.text_area("Enter the name of the column that we are doing analysis on (the X value)", value = "Full-Document") - index_to_analyze_start = form.number_input("Enter the index start of the document that you want to analyze of the dataset", value = 1) - index_to_analyze_end = form.number_input("Enter the index end of the document that you want to analyze of the dataset", value = 2) - form.caption("Multiple documents may not work on certain tasks") -else: - doc = st.text_area("Enter a custom document", value = "This is an example custom document") - - - -if task_done == "Text Generation": - model_name = form.text_area("Enter the name of the pre-trained model from transformers that we are using for Text Generation", value = "gpt2") - form.caption("This will download a new model, so it may take awhile or even break if the model is too large") - decoder = form.checkbox("Is this a decoder model?", value = True) - form.caption("This should be true for models like GPT-2, and false for models like BERT") - max_length = form.number_input("What's the max length of the text?", value = 50) - min_length = form.number_input("What's the min length of the text?", value = 20, max_value = max_length) - penalize_repetion = form.number_input("How strongly do we want to penalize repetition in the text generation?", value = 2) - sample = form.checkbox("Shall we use top-k and top-p decoding?", value = True) - form.caption("Setting this to false makes it greedy") - if sample: - top_k = form.number_input("What value of K should 
we use for Top-K sampling? Set to zero to disable", value = 50) - form.caption("In Top-K sampling, the K most likely next words are filtered and the probability mass is redistributed among only those K next words. ") - top_p = form.number_input("What value of P should we use for Top-p sampling? Set to zero to disable", value = 0.95, max_value = 1.0, min_value = 0.0) - form.caption("Top-p sampling chooses from the smallest possible set of words whose cumulative probability exceeds the probability p. The probability mass is then redistributed among this set of words.") - temperature = form.number_input("How spicy/interesting do we want our models output to be", value = 1.05, min_value = 0.0) - form.caption("Setting this higher decreases the likelihood of high probability words and increases the likelihood of low probability (and presumably more interesting) words") - form.caption("For more details on what these settings mean, see here: https://huggingface.co/blog/how-to-generate") - - -elif task_done == "Sentiment Analysis": - model_name = form.text_area("Enter the name of the pre-trained model from transformers that we are using for Sentiment Analysis", value = "nateraw/bert-base-uncased-emotion") - rescale_logits = form.checkbox("Do we rescale the probabilities in terms of log odds?", value = False) -elif task_done == "Translation": - model_name = form.text_area("Enter the name of the pre-trained model from transformers that we are using for Translation", value = "Helsinki-NLP/opus-mt-en-es") -elif task_done == "Summarization": - model_name = form.text_area("Enter the name of the pre-trained model from transformers that we are using for Translation", value = "sshleifer/distilbart-xsum-12-1") -else: - model_name = form.text_area("Enter the name of the pre-trained model from transformers that we are using for Question Answering", value = "deepset/roberta-base-squad2") - -form.header("Model Explanation Display Settings") -output_width = form.number_input("Enter the number of pixels for width of model explanation html display", value = 800) -output_height = form.number_input("Enter the number of pixels for height of model explanation html display", value = 1000) -form.form_submit_button("Submit") - -@st.cache -def load_and_process_data(path, name, streaming, split_name, number_of_records): - dataset = load_dataset(path = path, name = name, streaming=streaming) - #return list(dataset) - dataset_head = dataset[split_name].take(number_of_records) - df = pd.DataFrame.from_dict(dataset_head) - return df[column_name] - - - -@st.cache(allow_output_mutation=True) -def load_model(model_name): - tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True) - if task_done == "Text Generation": - model = AutoModelForCausalLM.from_pretrained(model_name) - model.config.is_decoder=decoder - if sample == True: - model.config.task_specific_params["text-generation"] = {"do_sample": sample, "max_length": max_length, "min_length": min_length, "temperature": temperature, "top_k": top_k, "top_p" : top_p, "no_repeat_ngram_size": penalize_repetion} - else: - model.config.task_specific_params["text-generation"] = {"do_sample": sample, "max_length": max_length, "min_length": min_length, "no_repeat_ngram_size": penalize_repetion} - - elif task_done == "Sentiment Analysis": - model = AutoModelForSequenceClassification.from_pretrained(model_name) - elif task_done == "Translation": - model = AutoModelForSeq2SeqLM.from_pretrained(model_name) - elif task_done == "Summarization": - model = 
AutoModelForSeq2SeqLM.from_pretrained(model_name) - elif task_done == "Question Answering": - #TODO: This one is going to be harder... - # https://shap.readthedocs.io/en/latest/example_notebooks/text_examples/question_answering/Explaining%20a%20Question%20Answering%20Transformers%20Model.html - model = AutoModelForQuestionAnswering.from_pretrained(model_name) - - return tokenizer, model - -tokenizer, model = load_model(model_name) - - - - - - -if custom_doc: - df = load_and_process_data(dataset_name, dataset_name_2, True, split_name, number_of_records) - doc = list(df[index_to_analyze_start:index_to_analyze_end]) - st.write(doc) - -if task_done == "Sentiment Analysis": - pred = pipeline("text-classification", model=model, tokenizer=tokenizer, return_all_scores=True) - explainer = shap.Explainer(pred, rescale_to_logits = rescale_logits) -else: - explainer = shap.Explainer(model, tokenizer) - -if custom_doc: - shap_values = explainer(doc) -else: - shap_values = explainer([doc]) - - - -the_plot = shap.plots.text(shap_values, display = False) -st.caption("The plot is interactive! Try Hovering over or clicking on the input or output text") -components.html(the_plot, height = output_height, width = output_width, scrolling = True) - diff --git a/spaces/HighCWu/GPEN/face_model/op/fused_bias_act.cpp b/spaces/HighCWu/GPEN/face_model/op/fused_bias_act.cpp deleted file mode 100644 index 02be898f970bcc8ea297867fcaa4e71b24b3d949..0000000000000000000000000000000000000000 --- a/spaces/HighCWu/GPEN/face_model/op/fused_bias_act.cpp +++ /dev/null @@ -1,21 +0,0 @@ -#include - - -torch::Tensor fused_bias_act_op(const torch::Tensor& input, const torch::Tensor& bias, const torch::Tensor& refer, - int act, int grad, float alpha, float scale); - -#define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor") -#define CHECK_CONTIGUOUS(x) TORCH_CHECK(x.is_contiguous(), #x " must be contiguous") -#define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x) - -torch::Tensor fused_bias_act(const torch::Tensor& input, const torch::Tensor& bias, const torch::Tensor& refer, - int act, int grad, float alpha, float scale) { - CHECK_CUDA(input); - CHECK_CUDA(bias); - - return fused_bias_act_op(input, bias, refer, act, grad, alpha, scale); -} - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("fused_bias_act", &fused_bias_act, "fused bias act (CUDA)"); -} \ No newline at end of file diff --git a/spaces/HuggingFaceH4/open_llm_leaderboard/src/get_model_info/hardocded_metadata/flags.py b/spaces/HuggingFaceH4/open_llm_leaderboard/src/get_model_info/hardocded_metadata/flags.py deleted file mode 100644 index cbd47b2608a0e6e07681b0ee1391af8e364ad00b..0000000000000000000000000000000000000000 --- a/spaces/HuggingFaceH4/open_llm_leaderboard/src/get_model_info/hardocded_metadata/flags.py +++ /dev/null @@ -1,18 +0,0 @@ -# Models which have been flagged by users as being problematic for a reason or another -# (Model name to forum discussion link) -FLAGGED_MODELS = { - "Voicelab/trurl-2-13b": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/202", - "deepnight-research/llama-2-70B-inst": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/207", - "Aspik101/trurl-2-13b-pl-instruct_unload": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/213", - "Fredithefish/ReasonixPajama-3B-HF": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/236", - "TigerResearch/tigerbot-7b-sft-v1": 
"https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/237", - "gaodrew/gaodrew-gorgonzola-13b": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/215", - "AIDC-ai-business/Marcoroni-70B": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/287", - "AIDC-ai-business/Marcoroni-13B": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/287", - "AIDC-ai-business/Marcoroni-7B": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/287", -} - -# Models which have been requested by orgs to not be submitted on the leaderboard -DO_NOT_SUBMIT_MODELS = [ - "Voicelab/trurl-2-13b", # trained on MMLU -] diff --git a/spaces/HuguesdeF/moulinette/README.md b/spaces/HuguesdeF/moulinette/README.md deleted file mode 100644 index fb946ece8788e8e39ad92576774ced43520b7ad4..0000000000000000000000000000000000000000 --- a/spaces/HuguesdeF/moulinette/README.md +++ /dev/null @@ -1,39 +0,0 @@ ---- -title: Moulinette -emoji: ⚙️ -colorFrom: indigo -colorTo: indigo -sdk: docker -pinned: false -license: apache-2.0 ---- -# Moulinette Seguin Moreau - -## Installation Windows - -L'installation Windows de la Moulinette à logos passe par la création d'un conteneur virtuel, appelé "image Docker". -Il fonctionne comme une petite machine virtuelle. Afin de s'en servir, il faut donc 1/Construire cette image virtuelle et 2/L'exécuter. - -Pour installer la Moulinette sur Windows, voici la procédure: -* Installer docker desktop: https://www.docker.com/products/docker-desktop/ -* Ouvrir Docker Desktop. Se rendre dans les paramètres/général, puis cliquer sur "Start Docker Desktop when you log in", ce qui ouvrira automatiquement Docker au démarrage de l'ordinateur. -* Ouvrir une invite de commande Windows, en cherchant "Run" dans la barre de recherche, puis tapper "cmd" dans la fenêtre Run. -* Se rendre, depuis l'invite de commande dans le dossier contenant le code (Et donc ce fichier readme !). Pour aller dans un dossier utiliser la commande "cd". -* Vérifier que docker est maintenant accessible, après l'installation précédente, en tapant: ```docker -v``` qui doit donner la version de Docker installée. -* Entrer la commande: -``` -docker build . -t moulinette -``` -S'assurer que le build s'est bien passé en tapant ```docker images``` qui liste toutes les images Docker présentes sur l'ordinateur. -Une image doit s'appeller "moulinette". - -* Puis, une fois le "build" realisé, entrer la commande: -``` -docker run -d --restart unless-stopped -p 8501:8501 moulinette -``` -Cette commande exécute (docker run) l'image docker "moulinette", aiguille le port du docker 8501 vers le port de la machine hôte 8501 (avec le -p), - et relance l'image docker au démarrage de l'ordinateur (--restart). Enfin le -d indique de lancer l'exécution en mode "détaché", c'est-à-dire en tâche de fond. - -* Se rendre dans son navigateur web et rentrer l'url: localhost:8501 -* Ajouter cette page aux favoris. - diff --git a/spaces/ICML2022/OFA/fairseq/examples/criss/download_and_preprocess_flores_test.sh b/spaces/ICML2022/OFA/fairseq/examples/criss/download_and_preprocess_flores_test.sh deleted file mode 100644 index ed4b390fbdee3991efeb298050e12065d7fe605b..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/criss/download_and_preprocess_flores_test.sh +++ /dev/null @@ -1,64 +0,0 @@ -#!/bin/bash -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. 
-# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -SPM_ENCODE=flores/scripts/spm_encode.py -DATA=data_tmp -SPM_MODEL=criss_checkpoints/sentence.bpe.model -DICT=criss_checkpoints/dict.txt - -download_data() { - CORPORA=$1 - URL=$2 - - if [ -f $CORPORA ]; then - echo "$CORPORA already exists, skipping download" - else - echo "Downloading $URL" - wget $URL -O $CORPORA --no-check-certificate || rm -f $CORPORA - if [ -f $CORPORA ]; then - echo "$URL successfully downloaded." - else - echo "$URL not successfully downloaded." - rm -f $CORPORA - fi - fi -} - -if [[ -f flores ]]; then - echo "flores already cloned" -else - git clone https://github.com/facebookresearch/flores -fi - -mkdir -p $DATA -download_data $DATA/wikipedia_en_ne_si_test_sets.tgz "https://github.com/facebookresearch/flores/raw/master/data/wikipedia_en_ne_si_test_sets.tgz" -pushd $DATA -pwd -tar -vxf wikipedia_en_ne_si_test_sets.tgz -popd - - -for lang in ne_NP si_LK; do - datadir=$DATA/${lang}-en_XX-flores - rm -rf $datadir - mkdir -p $datadir - TEST_PREFIX=$DATA/wikipedia_en_ne_si_test_sets/wikipedia.test - python $SPM_ENCODE \ - --model ${SPM_MODEL} \ - --output_format=piece \ - --inputs ${TEST_PREFIX}.${lang:0:2}-en.${lang:0:2} ${TEST_PREFIX}.${lang:0:2}-en.en \ - --outputs $datadir/test.bpe.${lang}-en_XX.${lang} $datadir/test.bpe.${lang}-en_XX.en_XX - - # binarize data - fairseq-preprocess \ - --source-lang ${lang} --target-lang en_XX \ - --testpref $datadir/test.bpe.${lang}-en_XX \ - --destdir $datadir \ - --srcdict ${DICT} \ - --joined-dictionary \ - --workers 4 -done diff --git a/spaces/ICML2022/OFA/fairseq/examples/noisychannel/__init__.py b/spaces/ICML2022/OFA/fairseq/examples/noisychannel/__init__.py deleted file mode 100644 index 89f1aef4f6328d25425e0bcabb42dfffd2ed35f0..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/noisychannel/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from .rerank_options import * # noqa diff --git a/spaces/ICML2022/resefa/utils/loggers/__init__.py b/spaces/ICML2022/resefa/utils/loggers/__init__.py deleted file mode 100644 index 665fd01dc34ae7a520dadfe4581c97e59dd6affe..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/resefa/utils/loggers/__init__.py +++ /dev/null @@ -1,32 +0,0 @@ -# python3.7 -"""Collects all loggers.""" - -from .normal_logger import NormalLogger -from .rich_logger import RichLogger -from .dummy_logger import DummyLogger - -__all__ = ['build_logger'] - -_LOGGERS = { - 'normal': NormalLogger, - 'rich': RichLogger, - 'dummy': DummyLogger -} - - -def build_logger(logger_type='normal', **kwargs): - """Builds a logger. - - Args: - logger_type: Type of logger, which is case insensitive. - (default: `normal`) - **kwargs: Additional arguments to build the logger. - - Raises: - ValueError: If the `logger_type` is not supported. 
- """ - logger_type = logger_type.lower() - if logger_type not in _LOGGERS: - raise ValueError(f'Invalid logger type: `{logger_type}`!\n' - f'Types allowed: {list(_LOGGERS)}.') - return _LOGGERS[logger_type](**kwargs) diff --git a/spaces/Iceclear/StableSR/StableSR/ldm/util.py b/spaces/Iceclear/StableSR/StableSR/ldm/util.py deleted file mode 100644 index 1b1301a55396c445ecdb28cc444fa10fcbd06391..0000000000000000000000000000000000000000 --- a/spaces/Iceclear/StableSR/StableSR/ldm/util.py +++ /dev/null @@ -1,211 +0,0 @@ -import importlib - -import torch -import numpy as np -from collections import abc -from einops import rearrange -from functools import partial - -import multiprocessing as mp -from threading import Thread -from queue import Queue - -from inspect import isfunction -from PIL import Image, ImageDraw, ImageFont - - -def log_txt_as_img(wh, xc, size=10): - # wh a tuple of (width, height) - # xc a list of captions to plot - b = len(xc) - txts = list() - for bi in range(b): - txt = Image.new("RGB", wh, color="white") - draw = ImageDraw.Draw(txt) - font = ImageFont.truetype('data/DejaVuSans.ttf', size=size) - nc = int(40 * (wh[0] / 256)) - lines = "\n".join(xc[bi][start:start + nc] for start in range(0, len(xc[bi]), nc)) - - try: - draw.text((0, 0), lines, fill="black", font=font) - except UnicodeEncodeError: - print("Cant encode string for logging. Skipping.") - - txt = np.array(txt).transpose(2, 0, 1) / 127.5 - 1.0 - txts.append(txt) - txts = np.stack(txts) - txts = torch.tensor(txts) - return txts - - -def ismap(x): - if not isinstance(x, torch.Tensor): - return False - return (len(x.shape) == 4) and (x.shape[1] > 3) - - -def isimage(x): - if not isinstance(x, torch.Tensor): - return False - return (len(x.shape) == 4) and (x.shape[1] == 3 or x.shape[1] == 1) - - -def exists(x): - return x is not None - - -def default(val, d): - if exists(val): - return val - return d() if isfunction(d) else d - - -def mean_flat(tensor): - """ - https://github.com/openai/guided-diffusion/blob/27c20a8fab9cb472df5d6bdd6c8d11c8f430b924/guided_diffusion/nn.py#L86 - Take the mean over all non-batch dimensions. 
- """ - return tensor.mean(dim=list(range(1, len(tensor.shape)))) - - -def count_params(model, verbose=False): - total_params = sum(p.numel() for p in model.parameters()) - if verbose: - print(f"{model.__class__.__name__} has {total_params * 1.e-6:.2f} M params.") - return total_params - - -def instantiate_from_config(config): - if not "target" in config: - if config == '__is_first_stage__': - return None - elif config == "__is_unconditional__": - return None - raise KeyError("Expected key `target` to instantiate.") - return get_obj_from_str(config["target"])(**config.get("params", dict())) - -def instantiate_from_config_sr(config): - if not "target" in config: - if config == '__is_first_stage__': - return None - elif config == "__is_unconditional__": - return None - raise KeyError("Expected key `target` to instantiate.") - return get_obj_from_str(config["target"])(config.get("params", dict())) - -def get_obj_from_str(string, reload=False): - module, cls = string.rsplit(".", 1) - if reload: - module_imp = importlib.import_module(module) - importlib.reload(module_imp) - return getattr(importlib.import_module(module, package=None), cls) - - -def _do_parallel_data_prefetch(func, Q, data, idx, idx_to_fn=False): - # create dummy dataset instance - - # run prefetching - if idx_to_fn: - res = func(data, worker_id=idx) - else: - res = func(data) - Q.put([idx, res]) - Q.put("Done") - - -def parallel_data_prefetch( - func: callable, data, n_proc, target_data_type="ndarray", cpu_intensive=True, use_worker_id=False -): - # if target_data_type not in ["ndarray", "list"]: - # raise ValueError( - # "Data, which is passed to parallel_data_prefetch has to be either of type list or ndarray." - # ) - if isinstance(data, np.ndarray) and target_data_type == "list": - raise ValueError("list expected but function got ndarray.") - elif isinstance(data, abc.Iterable): - if isinstance(data, dict): - print( - f'WARNING:"data" argument passed to parallel_data_prefetch is a dict: Using only its values and disregarding keys.' - ) - data = list(data.values()) - if target_data_type == "ndarray": - data = np.asarray(data) - else: - data = list(data) - else: - raise TypeError( - f"The data, that shall be processed parallel has to be either an np.ndarray or an Iterable, but is actually {type(data)}." - ) - - if cpu_intensive: - Q = mp.Queue(1000) - proc = mp.Process - else: - Q = Queue(1000) - proc = Thread - # spawn processes - if target_data_type == "ndarray": - arguments = [ - [func, Q, part, i, use_worker_id] - for i, part in enumerate(np.array_split(data, n_proc)) - ] - else: - step = ( - int(len(data) / n_proc + 1) - if len(data) % n_proc != 0 - else int(len(data) / n_proc) - ) - arguments = [ - [func, Q, part, i, use_worker_id] - for i, part in enumerate( - [data[i: i + step] for i in range(0, len(data), step)] - ) - ] - processes = [] - for i in range(n_proc): - p = proc(target=_do_parallel_data_prefetch, args=arguments[i]) - processes += [p] - - # start processes - print(f"Start prefetching...") - import time - - start = time.time() - gather_res = [[] for _ in range(n_proc)] - try: - for p in processes: - p.start() - - k = 0 - while k < n_proc: - # get result - res = Q.get() - if res == "Done": - k += 1 - else: - gather_res[res[0]] = res[1] - - except Exception as e: - print("Exception: ", e) - for p in processes: - p.terminate() - - raise e - finally: - for p in processes: - p.join() - print(f"Prefetching complete. 
[{time.time() - start} sec.]") - - if target_data_type == 'ndarray': - if not isinstance(gather_res[0], np.ndarray): - return np.concatenate([np.asarray(r) for r in gather_res], axis=0) - - # order outputs - return np.concatenate(gather_res, axis=0) - elif target_data_type == 'list': - out = [] - for r in gather_res: - out.extend(r) - return out - else: - return gather_res diff --git a/spaces/Intel/NeuralChat-ICX-INT4/fastchat/serve/gradio_block_arena_named.py b/spaces/Intel/NeuralChat-ICX-INT4/fastchat/serve/gradio_block_arena_named.py deleted file mode 100644 index 0db977bad8887f7b7a653b835bac508efd65aba6..0000000000000000000000000000000000000000 --- a/spaces/Intel/NeuralChat-ICX-INT4/fastchat/serve/gradio_block_arena_named.py +++ /dev/null @@ -1,395 +0,0 @@ -import json -import time - -import gradio as gr -import numpy as np - -from fastchat.conversation import get_default_conv_template -from fastchat.utils import ( - build_logger, - violates_moderation, - moderation_msg, -) -from fastchat.serve.gradio_patch import Chatbot as grChatbot -from fastchat.serve.gradio_web_server import ( - http_bot, - get_conv_log_filename, - no_change_btn, - enable_btn, - disable_btn, -) - - -logger = build_logger("gradio_web_server_multi", "gradio_web_server_multi.log") - -num_models = 2 -enable_moderation = False - - -def set_global_vars_named(enable_moderation_): - global enable_moderation - enable_moderation = enable_moderation_ - - -def load_demo_side_by_side_named(models, url_params): - states = (None,) * num_models - - model_left = models[0] - if len(models) > 1: - weights = ([8, 4, 2, 1] + [1] * 32)[:len(models) - 1] - weights = weights / np.sum(weights) - model_right = np.random.choice(models[1:], p=weights) - else: - model_right = model_left - - selector_updates = ( - gr.Dropdown.update(model_left, visible=True), - gr.Dropdown.update(model_right, visible=True), - ) - - return ( - states - + selector_updates - + (gr.Chatbot.update(visible=True),) * num_models - + ( - gr.Textbox.update(visible=True), - gr.Box.update(visible=True), - gr.Row.update(visible=True), - gr.Row.update(visible=True), - gr.Accordion.update(visible=True), - ) - ) - - -def vote_last_response(states, vote_type, model_selectors, request: gr.Request): - with open(get_conv_log_filename(), "a") as fout: - data = { - "tstamp": round(time.time(), 4), - "type": vote_type, - "models": [x for x in model_selectors], - "states": [x.dict() for x in states], - "ip": request.client.host, - } - fout.write(json.dumps(data) + "\n") - - -def leftvote_last_response( - state0, state1, model_selector0, model_selector1, request: gr.Request -): - logger.info(f"leftvote (named). ip: {request.client.host}") - vote_last_response( - [state0, state1], "leftvote", [model_selector0, model_selector1], request - ) - return ("",) + (disable_btn,) * 3 - - -def rightvote_last_response( - state0, state1, model_selector0, model_selector1, request: gr.Request -): - logger.info(f"rightvote (named). ip: {request.client.host}") - vote_last_response( - [state0, state1], "rightvote", [model_selector0, model_selector1], request - ) - return ("",) + (disable_btn,) * 3 - - -def tievote_last_response( - state0, state1, model_selector0, model_selector1, request: gr.Request -): - logger.info(f"tievote (named). ip: {request.client.host}") - vote_last_response( - [state0, state1], "tievote", [model_selector0, model_selector1], request - ) - return ("",) + (disable_btn,) * 3 - - -def regenerate(state0, state1, request: gr.Request): - logger.info(f"regenerate (named). 
ip: {request.client.host}")
-    states = [state0, state1]
-    for i in range(num_models):
-        states[i].messages[-1][-1] = None
-        states[i].skip_next = False
-    return states + [x.to_gradio_chatbot() for x in states] + [""] + [disable_btn] * 5
-
-
-def clear_history(request: gr.Request):
-    logger.info(f"clear_history (named). ip: {request.client.host}")
-    return [None] * num_models + [None] * num_models + [""] + [disable_btn] * 5
-
-
-def share_click(state0, state1, model_selector0, model_selector1,
-                request: gr.Request):
-    logger.info(f"share (named). ip: {request.client.host}")
-    if state0 is not None and state1 is not None:
-        vote_last_response(
-            [state0, state1], "share", [model_selector0, model_selector1], request
-        )
-
-
-def add_text(state0, state1, text, request: gr.Request):
-    logger.info(f"add_text (named). ip: {request.client.host}. len: {len(text)}")
-    states = [state0, state1]
-
-    for i in range(num_models):
-        if states[i] is None:
-            states[i] = get_default_conv_template("vicuna").copy()
-
-    if len(text) <= 0:
-        for i in range(num_models):
-            states[i].skip_next = True
-        return (
-            states
-            + [x.to_gradio_chatbot() for x in states]
-            + [""]
-            + [
-                no_change_btn,
-            ]
-            * 5
-        )
-
-    if enable_moderation:
-        flagged = violates_moderation(text)
-        if flagged:
-            logger.info(f"violate moderation (named). ip: {request.client.host}. text: {text}")
-            for i in range(num_models):
-                states[i].skip_next = True
-            return (
-                states
-                + [x.to_gradio_chatbot() for x in states]
-                + [moderation_msg]
-                + [
-                    no_change_btn,
-                ]
-                * 5
-            )
-
-    text = text[:1536]  # Hard cut-off
-    for i in range(num_models):
-        states[i].append_message(states[i].roles[0], text)
-        states[i].append_message(states[i].roles[1], None)
-        states[i].skip_next = False
-
-    return (
-        states
-        + [x.to_gradio_chatbot() for x in states]
-        + [""]
-        + [
-            disable_btn,
-        ]
-        * 5
-    )
-
-
-def http_bot_all(
-    state0,
-    state1,
-    model_selector0,
-    model_selector1,
-    temperature,
-    max_new_tokens,
-    request: gr.Request,
-):
-    logger.info(f"http_bot_all (named). ip: {request.client.host}")
-    states = [state0, state1]
-    model_selector = [model_selector0, model_selector1]
-    gen = []
-    for i in range(num_models):
-        gen.append(
-            http_bot(states[i], model_selector[i], temperature, max_new_tokens, request)
-        )
-
-    chatbots = [None] * num_models
-    while True:
-        stop = True
-        for i in range(num_models):
-            try:
-                ret = next(gen[i])
-                states[i], chatbots[i] = ret[0], ret[1]
-                buttons = ret[2:]
-                stop = False
-            except StopIteration:
-                pass
-        yield states + chatbots + list(buttons)
-        if stop:
-            break
-
-    for i in range(10):
-        if i % 2 == 0:
-            yield states + chatbots + [disable_btn] * 3 + list(buttons)[3:]
-        else:
-            yield states + chatbots + list(buttons)
-        time.sleep(0.2)
-
-
-def build_side_by_side_ui_named(models):
-    notice_markdown = """
-# ⚔️  Chatbot Arena ⚔️
-Rules:
-- Chat with two models side-by-side and vote for which one is better!
-- You pick the models you want to chat with.
-- You can continue chatting and voting or click "Clear history" to start a new round.
-- A leaderboard will be available soon.
-- [[GitHub]](https://github.com/lm-sys/FastChat) [[Twitter]](https://twitter.com/lmsysorg) [[Discord]](https://discord.gg/h6kCZb72G7)
-
-### Terms of use
-By using this service, users are required to agree to the following terms: The service is a research preview intended for non-commercial use only. It only provides limited safety measures and may generate offensive content.
It must not be used for any illegal, harmful, violent, racist, or sexual purposes. **The service collects user dialogue data for future research.** -The demo works better on desktop devices with a wide screen. - -### Choose two models to chat with -| | | -| ---- | ---- | -| [Vicuna](https://vicuna.lmsys.org): a chat assistant fine-tuned from LLaMA on user-shared conversations by LMSYS. | [Koala](https://bair.berkeley.edu/blog/2023/04/03/koala/): a dialogue model for academic research by BAIR | -| [OpenAssistant (oasst)](https://open-assistant.io/): a chat-based assistant for everyone by LAION. | [Dolly](https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm): an instruction-tuned open large language model by Databricks. | -| [ChatGLM](https://chatglm.cn/blog): an open bilingual dialogue language model by Tsinghua University | [StableLM](https://github.com/stability-AI/stableLM/): Stability AI language models. | -| [Alpaca](https://crfm.stanford.edu/2023/03/13/alpaca.html): a model fine-tuned from LLaMA on instruction-following demonstrations by Stanford. | [LLaMA](https://arxiv.org/abs/2302.13971): open and efficient foundation language models by Meta. | -""" - - learn_more_markdown = """ -### License -The service is a research preview intended for non-commercial use only, subject to the model [License](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md) of LLaMA, [Terms of Use](https://openai.com/policies/terms-of-use) of the data generated by OpenAI, and [Privacy Practices](https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb) of ShareGPT. Please contact us if you find any potential violation. -""" - - states = [gr.State() for _ in range(num_models)] - model_selectors = [None] * num_models - chatbots = [None] * num_models - - notice = gr.Markdown(notice_markdown, elem_id="notice_markdown") - - with gr.Box(elem_id="share-region"): - with gr.Row(): - for i in range(num_models): - with gr.Column(): - model_selectors[i] = gr.Dropdown( - choices=models, - value=models[i] if len(models) > i else "", - interactive=True, - show_label=False, - ).style(container=False) - - with gr.Row(): - for i in range(num_models): - label = "Model A" if i == 0 else "Model B" - with gr.Column(): - chatbots[i] = grChatbot(label=label, elem_id=f"chatbot{i}", - visible=False).style(height=550) - - with gr.Box() as button_row: - with gr.Row(): - leftvote_btn = gr.Button(value="👈 A is better", interactive=False) - tie_btn = gr.Button(value="🤝 Tie", interactive=False) - rightvote_btn = gr.Button(value="👉 B is better", interactive=False) - - with gr.Row(): - with gr.Column(scale=20): - textbox = gr.Textbox( - show_label=False, - placeholder="Enter text and press ENTER", - visible=False, - ).style(container=False) - with gr.Column(scale=1, min_width=50): - send_btn = gr.Button(value="Send", visible=False) - - with gr.Row() as button_row2: - regenerate_btn = gr.Button(value="🔄 Regenerate", interactive=False) - clear_btn = gr.Button(value="🗑️ Clear history", interactive=False) - share_btn = gr.Button(value="📷 Share") - - with gr.Accordion("Parameters", open=False, visible=True) as parameter_row: - temperature = gr.Slider( - minimum=0.0, - maximum=1.0, - value=0.7, - step=0.1, - interactive=True, - label="Temperature", - ) - max_output_tokens = gr.Slider( - minimum=0, - maximum=1024, - value=512, - step=64, - interactive=True, - label="Max output tokens", - ) - - gr.Markdown(learn_more_markdown) - - # Register 
listeners - btn_list = [leftvote_btn, rightvote_btn, tie_btn, regenerate_btn, clear_btn] - leftvote_btn.click( - leftvote_last_response, - states + model_selectors, - [textbox, leftvote_btn, rightvote_btn, tie_btn], - ) - rightvote_btn.click( - rightvote_last_response, - states + model_selectors, - [textbox, leftvote_btn, rightvote_btn, tie_btn], - ) - tie_btn.click( - tievote_last_response, - states + model_selectors, - [textbox, leftvote_btn, rightvote_btn, tie_btn], - ) - regenerate_btn.click( - regenerate, states, states + chatbots + [textbox] + btn_list - ).then( - http_bot_all, - states + model_selectors + [temperature, max_output_tokens], - states + chatbots + btn_list, - ) - clear_btn.click(clear_history, None, states + chatbots + [textbox] + btn_list) - - share_js=""" -function (a, b, c, d) { - const captureElement = document.querySelector('#share-region'); - html2canvas(captureElement) - .then(canvas => { - canvas.style.display = 'none' - document.body.appendChild(canvas) - return canvas - }) - .then(canvas => { - const image = canvas.toDataURL('image/png') - const a = document.createElement('a') - a.setAttribute('download', 'chatbot-arena.png') - a.setAttribute('href', image) - a.click() - canvas.remove() - }); - return [a, b, c, d]; -} -""" - share_btn.click(share_click, states + model_selectors, [], _js=share_js) - - for i in range(num_models): - model_selectors[i].change( - clear_history, None, states + chatbots + [textbox] + btn_list - ) - - textbox.submit( - add_text, states + [textbox], states + chatbots + [textbox] + btn_list - ).then( - http_bot_all, - states + model_selectors + [temperature, max_output_tokens], - states + chatbots + btn_list, - ) - send_btn.click( - add_text, states + [textbox], states + chatbots + [textbox] + btn_list - ).then( - http_bot_all, - states + model_selectors + [temperature, max_output_tokens], - states + chatbots + btn_list, - ) - - return ( - states, - model_selectors, - chatbots, - textbox, - send_btn, - button_row, - button_row2, - parameter_row, - ) - diff --git a/spaces/Juno360219/cloudqi-cqi_text_to_image_pt_v0/README.md b/spaces/Juno360219/cloudqi-cqi_text_to_image_pt_v0/README.md deleted file mode 100644 index c6cc054cd7fea45bcfdb0c3d0a0c4590c62656d9..0000000000000000000000000000000000000000 --- a/spaces/Juno360219/cloudqi-cqi_text_to_image_pt_v0/README.md +++ /dev/null @@ -1,20 +0,0 @@ ---- -title: Shiny for Python template -emoji: 🌍 -colorFrom: yellow -colorTo: indigo -sdk: docker -pinned: false -license: mit -duplicated_from: posit/shiny-for-python-template ---- - -This is a templated Space for [Shiny for Python](https://shiny.rstudio.com/py/). - -To get started with a new app do the following: - -1) Install Shiny with `pip install shiny` -2) Create a new app with `shiny create .` -3) Then run the app with `shiny run --reload` - -To learn more about this framework please see the [Documentation](https://shiny.rstudio.com/py/docs/overview.html). diff --git a/spaces/Kayson/InstructDiffusion/dataset/pose/pose.py b/spaces/Kayson/InstructDiffusion/dataset/pose/pose.py deleted file mode 100644 index beb9ec8afc0ff60f8e431bc27005cb271af495c6..0000000000000000000000000000000000000000 --- a/spaces/Kayson/InstructDiffusion/dataset/pose/pose.py +++ /dev/null @@ -1,760 +0,0 @@ -# ------------------------------------------------------------------------------ -# Copyright (c) Microsoft -# Licensed under the MIT License. 
-# Written by Bin Xiao (Bin.Xiao@microsoft.com) -# Modified by Zigang Geng (zigang@mail.ustc.edu.cn) -# ------------------------------------------------------------------------------ - -from __future__ import annotations - -import logging -import os -import json -import copy -import math -import random -from pathlib import Path -from typing import Any - -import cv2 -import numpy as np -import torch -import torchvision -from einops import rearrange -from PIL import Image -from torch.utils.data import Dataset -import torchvision.transforms as transforms -from pycocotools.coco import COCO - - -logger = logging.getLogger(__name__) - - -colors = { - 'red': (255, 0, 0), - 'green': (0, 255, 0), - 'blue': (0, 0, 255), - 'yellow': (255, 255, 0), - 'cyan': (0, 255, 255), - 'magenta': (255, 0, 255), - 'gray': (128, 128, 128), - 'white': (255, 255, 255), - 'black': (0, 0, 0)} - - -def readTXT(txt_path): - with open(txt_path, 'r') as f: - listInTXT = [line.strip() for line in f] - - return listInTXT - - -class PoseDataset(Dataset): - def __init__(self, root, image_set, is_train, max_prompt_num=5, min_prompt_num=1, - radius=10, size=256, transparency=0.0, sample_weight=1.0, transform=None): - - self.sample_weight = sample_weight - self.max_prompt_num = max_prompt_num - self.min_prompt_num = min_prompt_num - self.radius = radius - self.transparency = transparency - self.num_joints = 0 - self.pixel_std = 200 - self.flip_pairs = [] - self.parent_ids = [] - - self.keypoints_type = {} - - self.is_train = is_train - self.image_set = image_set - self.root = root - - self.scale_factor = 0.35 - self.rotation_factor = 45 - self.flip = True - self.num_joints_half_body = 8 - self.prob_half_body = 0.3 - - self.image_size = np.array((size, size)) - self.heatmap_size = np.array((size, size)) - - self.transform = transform - self.db = [] - - pose_diverse_prompt_path = 'dataset/prompt/prompt_pose.txt' - self.pose_diverse_prompt_list = [] - with open(pose_diverse_prompt_path) as f: - line = f.readline() - while line: - line = line.strip('\n') - self.pose_diverse_prompt_list.append(line) - line = f.readline() - - def _get_db(self): - raise NotImplementedError - - def evaluate(self, preds, output_dir, *args, **kwargs): - raise NotImplementedError - - def half_body_transform(self, joints, joints_vis): - upper_joints = [] - lower_joints = [] - for joint_id in range(self.num_joints): - if joints_vis[joint_id][0] > 0: - if joint_id in self.upper_body_ids: - upper_joints.append(joints[joint_id]) - else: - lower_joints.append(joints[joint_id]) - - if np.random.randn() < 0.5 and len(upper_joints) > 2: - selected_joints = upper_joints - else: - selected_joints = lower_joints \ - if len(lower_joints) > 2 else upper_joints - - if len(selected_joints) < 2: - return None, None - - selected_joints = np.array(selected_joints, dtype=np.float32) - center = selected_joints.mean(axis=0)[:2] - - left_top = np.amin(selected_joints, axis=0) - right_bottom = np.amax(selected_joints, axis=0) - - w = right_bottom[0] - left_top[0] - h = right_bottom[1] - left_top[1] - - if w > self.aspect_ratio * h: - h = w * 1.0 / self.aspect_ratio - elif w < self.aspect_ratio * h: - w = h * self.aspect_ratio - - scale = np.array( - [ - w * 1.0 / self.pixel_std, - h * 1.0 / self.pixel_std - ], - dtype=np.float32 - ) - - scale = scale * 1.5 - - return center, scale - - def __len__(self,): - return int(len(self.db) * self.sample_weight) - - def __getitem__(self, idx): - if self.sample_weight >= 1: - idx = idx % len(self.db) - else: - idx = int(idx / 
self.sample_weight) + random.randint(0, int(1 / self.sample_weight) - 1) - - db_rec = copy.deepcopy(self.db[idx]) - - image_file = db_rec['image'] - filename = db_rec['filename'] if 'filename' in db_rec else '' - imgnum = db_rec['imgnum'] if 'imgnum' in db_rec else '' - - data_numpy = cv2.imread( - image_file, cv2.IMREAD_COLOR | cv2.IMREAD_IGNORE_ORIENTATION - ) - data_numpy = cv2.cvtColor(data_numpy, cv2.COLOR_BGR2RGB) - - if data_numpy is None: - logger.error('=> fail to read {}'.format(image_file)) - raise ValueError('Fail to read {}'.format(image_file)) - - joints = db_rec['joints_3d'] - joints_vis = db_rec['joints_3d_vis'] - - c = db_rec['center'] - s = db_rec['scale'] - score = db_rec['score'] if 'score' in db_rec else 1 - r = 0 - - if self.is_train: - if (np.sum(joints_vis[:, 0]) > self.num_joints_half_body - and np.random.rand() < self.prob_half_body): - c_half_body, s_half_body = self.half_body_transform( - joints, joints_vis - ) - - if c_half_body is not None and s_half_body is not None: - c, s = c_half_body, s_half_body - - sf = self.scale_factor - rf = self.rotation_factor - s = s * np.clip(np.random.randn()*sf + 1, 1 - sf, 1 + sf) - r = np.clip(np.random.randn()*rf, -rf*2, rf*2) \ - if random.random() <= 0.6 else 0 - - if self.flip and random.random() <= 0.5: - data_numpy = data_numpy[:, ::-1, :] - joints, joints_vis = fliplr_joints( - joints, joints_vis, data_numpy.shape[1], self.flip_pairs) - c[0] = data_numpy.shape[1] - c[0] - 1 - - trans = get_affine_transform(c, s, r, self.image_size) - input = cv2.warpAffine( - data_numpy, - trans, - (int(self.image_size[0]), int(self.image_size[1])), - flags=cv2.INTER_LINEAR) - - if self.transform: - input = self.transform(input) - - for i in range(self.num_joints): - if joints_vis[i, 0] > 0.0: - joints[i, 0:2] = affine_transform(joints[i, 0:2], trans) - - target, prompt = self.generate_target(input, joints, joints_vis) - - # return Image.fromarray(input), Image.fromarray(target), prompt - - image_0 = rearrange(2 * torch.tensor(np.array(input)).float() / 255 - 1, "h w c -> c h w") - image_1 = rearrange(2 * torch.tensor(np.array(target)).float() / 255 - 1, "h w c -> c h w") - - return dict(edited=image_1, edit=dict(c_concat=image_0, c_crossattn=prompt)) - - def generate_target(self, input, joints, joints_vis): - ''' - :param input: [height, width, 3] - :param joints: [num_joints, 3] - :param joints_vis: [num_joints, 3] - :return: target - ''' - radius = self.radius - target = copy.deepcopy(input) - - joint_num = random.randint(self.min_prompt_num, self.max_prompt_num) - joint_ids = np.random.choice([i for i in range(self.num_joints)], joint_num, replace=False) - random_color_names = random.sample(list(colors.keys()), len(joint_ids)) - random_marker_names = ['circle' for i in range(len(joint_ids))] - - prompt = "" - - for color_idx, joint_id in enumerate(joint_ids): - feat_stride = self.image_size / self.heatmap_size - mu_x = int(joints[joint_id][0] / feat_stride[0] + 0.5) - mu_y = int(joints[joint_id][1] / feat_stride[1] + 0.5) - # Check that any part of the gaussian is in-bounds - ul = [int(mu_x - radius), int(mu_y - radius)] - br = [int(mu_x + radius + 1), int(mu_y + radius + 1)] - if ul[0] >= self.heatmap_size[0] or ul[1] >= self.heatmap_size[1] \ - or br[0] < 0 or br[1] < 0: - # If not, just return the image as is - joints_vis[joint_id][0] = 0 - continue - - marker_size = 2 * radius + 1 - g = np.zeros((marker_size, marker_size)) - x, y = np.indices((marker_size, marker_size)) - interval = int((marker_size - marker_size / 
math.sqrt(2)) // 2) - mask = (x - radius) ** 2 + (y - radius) ** 2 <= radius ** 2 + 1 - g[mask] = 1 - - # Usable gaussian range - g_x = max(0, -ul[0]), min(br[0], self.heatmap_size[0]) - ul[0] - g_y = max(0, -ul[1]), min(br[1], self.heatmap_size[1]) - ul[1] - # Image range - img_x = max(0, ul[0]), min(br[0], self.heatmap_size[0]) - img_y = max(0, ul[1]), min(br[1], self.heatmap_size[1]) - - v = joints_vis[joint_id][0] - random_color_name = random_color_names[color_idx] - random_color = colors[random_color_name] - - prompt += random.choice(self.pose_diverse_prompt_list).format( - color=random_color_name, - joint=self.keypoints_type[joint_id]) - - if v > 0.5: - target[img_y[0]:img_y[1], img_x[0]:img_x[1]][g[g_y[0]:g_y[1], g_x[0]:g_x[1]]>0] \ - = self.transparency*target[img_y[0]:img_y[1], img_x[0]:img_x[1]][g[g_y[0]:g_y[1], g_x[0]:g_x[1]]>0] \ - + (1-self.transparency)*np.array(random_color) - - return target, prompt - - -class COCODataset(PoseDataset): - def __init__(self, root, image_set, is_train, max_prompt_num=5, min_prompt_num=1, - radius=10, size=256, transparency=0.0, sample_weight=1.0, transform=None): - - super().__init__(root, image_set, is_train, max_prompt_num, min_prompt_num, - radius, size, transparency, sample_weight, transform) - - self.keypoints_type = { - 0: "nose", - 1: "left eye", - 2: "right eye", - 3: "left ear", - 4: "right ear", - 5: "left shoulder", - 6: "right shoulder", - 7: "left elbow", - 8: "right elbow", - 9: "left wrist", - 10: "right wrist", - 11: "left hip", - 12: "right hip", - 13: "left knee", - 14: "right knee", - 15: "left ankle", - 16: "right ankle" - } - - self.image_width = size - self.image_height = size - self.aspect_ratio = self.image_width * 1.0 / self.image_height - self.pixel_std = 200 - - self.coco = COCO(self._get_ann_file_keypoint()) - - # deal with class names - cats = [cat['name'] - for cat in self.coco.loadCats(self.coco.getCatIds())] - self.classes = ['__background__'] + cats - logger.info('=> classes: {}'.format(self.classes)) - self.num_classes = len(self.classes) - self._class_to_ind = dict(zip(self.classes, range(self.num_classes))) - self._class_to_coco_ind = dict(zip(cats, self.coco.getCatIds())) - self._coco_ind_to_class_ind = dict( - [ - (self._class_to_coco_ind[cls], self._class_to_ind[cls]) - for cls in self.classes[1:] - ] - ) - - # load image file names - self.image_set_index = self._load_image_set_index() - self.num_images = len(self.image_set_index) - logger.info('=> num_images: {}'.format(self.num_images)) - - self.num_joints = 17 - self.flip_pairs = [[1, 2], [3, 4], [5, 6], [7, 8], - [9, 10], [11, 12], [13, 14], [15, 16]] - self.parent_ids = None - self.upper_body_ids = (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10) - self.lower_body_ids = (11, 12, 13, 14, 15, 16) - - if 'coco' in self.root: - self.db = self._get_db() - - logger.info('=> load {} samples'.format(len(self.db))) - - def _get_ann_file_keypoint(self): - """ self.root / annotations / person_keypoints_train2017.json """ - if 'coco' in self.root: - prefix = 'person_keypoints' \ - if 'test' not in self.image_set else 'image_info' - return os.path.join( - self.root, - 'annotations', - prefix + '_' + self.image_set + '.json' - ) - elif 'crowdpose' in self.root: - prefix = 'crowdpose' - return os.path.join( - self.root, - 'json', - prefix + '_' + self.image_set + '.json' - ) - elif 'aic' in self.root: - prefix = 'aic' - return os.path.join( - self.root, - 'annotations', - prefix + '_' + self.image_set + '.json' - ) - else: - raise ValueError('Please write the path for this new 
dataset.') - - def _load_image_set_index(self): - """ image id: int """ - image_ids = self.coco.getImgIds() - return image_ids - - def _get_db(self): - gt_db = self._load_coco_keypoint_annotations() - return gt_db - - def _load_coco_keypoint_annotations(self): - """ ground truth bbox and keypoints """ - gt_db = [] - for index in self.image_set_index: - gt_db.extend(self._load_coco_keypoint_annotation_kernal(index)) - return gt_db - - def _load_coco_keypoint_annotation_kernal(self, index): - """ - coco ann: [u'segmentation', u'area', u'iscrowd', u'image_id', u'bbox', u'category_id', u'id'] - iscrowd: - crowd instances are handled by marking their overlaps with all categories to -1 - and later excluded in training - bbox: - [x1, y1, w, h] - :param index: coco image id - :return: db entry - """ - im_ann = self.coco.loadImgs(index)[0] - width = im_ann['width'] - height = im_ann['height'] - - annIds = self.coco.getAnnIds(imgIds=index, iscrowd=False) - objs = self.coco.loadAnns(annIds) - - # sanitize bboxes - valid_objs = [] - for obj in objs: - x, y, w, h = obj['bbox'] - x1 = np.max((0, x)) - y1 = np.max((0, y)) - x2 = np.min((width - 1, x1 + np.max((0, w - 1)))) - y2 = np.min((height - 1, y1 + np.max((0, h - 1)))) - if 'crowdpose' in self.root: - obj['area'] = 1 - if obj['area'] > 0 and x2 >= x1 and y2 >= y1: - obj['clean_bbox'] = [x1, y1, x2-x1, y2-y1] - valid_objs.append(obj) - objs = valid_objs - - rec = [] - for obj in objs: - cls = self._coco_ind_to_class_ind[obj['category_id']] - if cls != 1: - continue - - # ignore objs without keypoints annotation - if max(obj['keypoints']) == 0: - continue - - joints_3d = np.zeros((self.num_joints, 3), dtype=np.float32) - joints_3d_vis = np.zeros((self.num_joints, 3), dtype=np.float32) - for ipt in range(self.num_joints): - joints_3d[ipt, 0] = obj['keypoints'][ipt * 3 + 0] - joints_3d[ipt, 1] = obj['keypoints'][ipt * 3 + 1] - joints_3d[ipt, 2] = 0 - t_vis = obj['keypoints'][ipt * 3 + 2] - if t_vis > 1: - t_vis = 1 - joints_3d_vis[ipt, 0] = t_vis - joints_3d_vis[ipt, 1] = t_vis - joints_3d_vis[ipt, 2] = 0 - - center, scale = self._box2cs(obj['clean_bbox'][:4]) - rec.append({ - 'image': self.image_path_from_index(index, im_ann), - 'center': center, - 'scale': scale, - 'joints_3d': joints_3d, - 'joints_3d_vis': joints_3d_vis, - 'filename': '', - 'imgnum': 0, - }) - - return rec - - def _box2cs(self, box): - x, y, w, h = box[:4] - return self._xywh2cs(x, y, w, h) - - def _xywh2cs(self, x, y, w, h): - center = np.zeros((2), dtype=np.float32) - center[0] = x + w * 0.5 - center[1] = y + h * 0.5 - - if w > self.aspect_ratio * h: - h = w * 1.0 / self.aspect_ratio - elif w < self.aspect_ratio * h: - w = h * self.aspect_ratio - scale = np.array( - [w * 1.0 / self.pixel_std, h * 1.0 / self.pixel_std], - dtype=np.float32) - if center[0] != -1: - scale = scale * 1.25 - - return center, scale - - def image_path_from_index(self, index, im_ann): - """ example: images / train2017 / 000000119993.jpg """ - if 'coco' in self.root: - file_name = '%012d.jpg' % index - if '2014' in self.image_set: - file_name = 'COCO_%s_' % self.image_set + file_name - - prefix = 'test2017' if 'test' in self.image_set else self.image_set - - data_name = prefix - - image_path = os.path.join( - self.root, 'images', data_name, file_name) - - return image_path - elif 'crowdpose' in self.root: - file_name = f'{index}.jpg' - - image_path = os.path.join( - self.root, 'images', file_name) - - return image_path - elif 'aic' in self.root: - file_name = im_ann["file_name"] - - image_path = 
os.path.join( - self.root, 'ai_challenger_keypoint_train_20170902', 'keypoint_train_images_20170902', file_name) - - return image_path - - -def flip_back(output_flipped, matched_parts): - ''' - ouput_flipped: numpy.ndarray(batch_size, num_joints, height, width) - ''' - assert output_flipped.ndim == 4,\ - 'output_flipped should be [batch_size, num_joints, height, width]' - - output_flipped = output_flipped[:, :, :, ::-1] - - for pair in matched_parts: - tmp = output_flipped[:, pair[0], :, :].copy() - output_flipped[:, pair[0], :, :] = output_flipped[:, pair[1], :, :] - output_flipped[:, pair[1], :, :] = tmp - - return output_flipped - - -def fliplr_joints(joints, joints_vis, width, matched_parts): - """ - flip coords - """ - # Flip horizontal - joints[:, 0] = width - joints[:, 0] - 1 - - # Change left-right parts - for pair in matched_parts: - joints[pair[0], :], joints[pair[1], :] = \ - joints[pair[1], :], joints[pair[0], :].copy() - joints_vis[pair[0], :], joints_vis[pair[1], :] = \ - joints_vis[pair[1], :], joints_vis[pair[0], :].copy() - - return joints*joints_vis, joints_vis - - -def get_affine_transform( - center, scale, rot, output_size, - shift=np.array([0, 0], dtype=np.float32), inv=0 -): - if not isinstance(scale, np.ndarray) and not isinstance(scale, list): - print(scale) - scale = np.array([scale, scale]) - - scale_tmp = scale * 200.0 - src_w = scale_tmp[0] - dst_w = output_size[0] - dst_h = output_size[1] - - rot_rad = np.pi * rot / 180 - src_dir = get_dir([0, src_w * -0.5], rot_rad) - dst_dir = np.array([0, dst_w * -0.5], np.float32) - - src = np.zeros((3, 2), dtype=np.float32) - dst = np.zeros((3, 2), dtype=np.float32) - src[0, :] = center + scale_tmp * shift - src[1, :] = center + src_dir + scale_tmp * shift - dst[0, :] = [dst_w * 0.5, dst_h * 0.5] - dst[1, :] = np.array([dst_w * 0.5, dst_h * 0.5]) + dst_dir - - src[2:, :] = get_3rd_point(src[0, :], src[1, :]) - dst[2:, :] = get_3rd_point(dst[0, :], dst[1, :]) - - if inv: - trans = cv2.getAffineTransform(np.float32(dst), np.float32(src)) - else: - trans = cv2.getAffineTransform(np.float32(src), np.float32(dst)) - - return trans - - -def affine_transform(pt, t): - new_pt = np.array([pt[0], pt[1], 1.]).T - new_pt = np.dot(t, new_pt) - return new_pt[:2] - - -def get_3rd_point(a, b): - direct = a - b - return b + np.array([-direct[1], direct[0]], dtype=np.float32) - - -def get_dir(src_point, rot_rad): - sn, cs = np.sin(rot_rad), np.cos(rot_rad) - - src_result = [0, 0] - src_result[0] = src_point[0] * cs - src_point[1] * sn - src_result[1] = src_point[0] * sn + src_point[1] * cs - - return src_result - - -class CrowdPoseDataset(COCODataset): - def __init__(self, root, image_set, is_train, max_prompt_num=5, min_prompt_num=1, - radius=10, size=256, transparency=0.0, sample_weight=1.0, transform=None): - - super().__init__(root, image_set, is_train, max_prompt_num, min_prompt_num, - radius, size, transparency, sample_weight, transform) - - self.keypoints_type = { - 0: 'left_shoulder', - 1: 'right_shoulder', - 2: 'left_elbow', - 3: 'right_elbow', - 4: 'left_wrist', - 5: 'right_wrist', - 6: 'left_hip', - 7: 'right_hip', - 8: 'left_knee', - 9: 'right_knee', - 10: 'left_ankle', - 11: 'right_ankle', - 12: 'top_head', - 13: 'neck' - } - - self.num_joints = 14 - self.prob_half_body = -1 - self.flip_pairs = [[0, 1], [2, 3], [4, 5], [6, 7], [8, 9], [10, 11]] - self.parent_ids = None - self.upper_body_ids = (0, 1, 2, 3, 4, 5, 12, 13) - self.lower_body_ids = (6, 7, 8, 9, 10, 11) - - self.db = self._get_db() - - logger.info('=> load {} 
samples'.format(len(self.db))) - - -class AICDataset(COCODataset): - def __init__(self, root, image_set, is_train, max_prompt_num=5, min_prompt_num=1, - radius=10, size=256, transparency=0.0, sample_weight=1.0, transform=None): - super().__init__(root, image_set, is_train, max_prompt_num, min_prompt_num, - radius, size, transparency, sample_weight, transform) - - self.keypoints_type = { - 0: "right_shoulder", - 1: "right_elbow", - 2: "right_wrist", - 3: "left_shoulder", - 4: "left_elbow", - 5: "left_wrist", - 6: "right_hip", - 7: "right_knee", - 8: "right_ankle", - 9: "left_hip", - 10: "left_knee", - 11: "left_ankle", - 12: "head_top", - 13: "neck" - } - - self.num_joints = 14 - self.prob_half_body = -1 - self.flip_pairs = [[0, 3], [1, 4], [2, 5], [6, 9], [7, 10], [8, 11]] - self.parent_ids = None - self.upper_body_ids = (0, 1, 2, 3, 4, 5, 12, 13) - self.lower_body_ids = (6, 7, 8, 9, 10, 11) - - self.db = self._get_db() - - logger.info('=> load {} samples'.format(len(self.db))) - - -class MPIIDataset(PoseDataset): - def __init__(self, root, image_set, is_train, max_prompt_num=5, min_prompt_num=1, - radius=10, size=256, transparency=0.0, sample_weight=1.0, transform=None): - super().__init__(root, image_set, is_train, max_prompt_num, min_prompt_num, - radius, size, transparency, sample_weight, transform) - - self.keypoints_type = { - 0: 'right_ankle', - 1: 'right_knee', - 2: 'right_hip', - 3: 'left_hip', - 4: 'left_knee', - 5: 'left_ankle', - 6: 'pelvis', - 7: 'thorax', - 8: 'upper_neck', - 9: 'head_top', - 10: 'right_wrist', - 11: 'right_elbow', - 12: 'right_shoulder', - 13: 'left_shoulder', - 14: 'left_elbow', - 15: 'left_wrist' - } - - self.data_format = 'jpg' - self.num_joints = 16 - self.prob_half_body = -1 - self.flip_pairs = [[0, 5], [1, 4], [2, 3], [10, 15], [11, 14], [12, 13]] - self.parent_ids = None - self.upper_body_ids = (7, 8, 9, 10, 11, 12, 13, 14, 15) - self.lower_body_ids = (0, 1, 2, 3, 4, 5, 6) - - self.db = self._get_db() - - logger.info('=> load {} samples'.format(len(self.db))) - - def _get_db(self): - # create train/val split - file_name = os.path.join( - self.root, 'annot', self.image_set+'.json' - ) - with open(file_name) as anno_file: - anno = json.load(anno_file) - - gt_db = [] - for a in anno: - image_name = a['image'] - - c = np.array(a['center'], dtype=np.float32) - s = np.array([a['scale'], a['scale']], dtype=np.float32) - - # Adjust center/scale slightly to avoid cropping limbs - if c[0] != -1: - c[1] = c[1] + 15 * s[1] - s = s * 1.25 - - # MPII uses matlab format, index is based 1, - # we should first convert to 0-based index - c = c - 1 - - joints_3d = np.zeros((self.num_joints, 3), dtype=np.float32) - joints_3d_vis = np.zeros((self.num_joints, 3), dtype=np.float32) - if self.image_set != 'test': - joints = np.array(a['joints']) - joints[:, 0:2] = joints[:, 0:2] - 1 - joints_vis = np.array(a['joints_vis']) - assert len(joints) == self.num_joints, \ - 'joint num diff: {} vs {}'.format(len(joints), - self.num_joints) - - joints_3d[:, 0:2] = joints[:, 0:2] - joints_3d_vis[:, 0] = joints_vis[:] - joints_3d_vis[:, 1] = joints_vis[:] - - image_dir = 'images.zip@' if self.data_format == 'zip' else 'images' - gt_db.append( - { - 'image': os.path.join(self.root, image_dir, image_name), - 'center': c, - 'scale': s, - 'joints_3d': joints_3d, - 'joints_3d_vis': joints_3d_vis, - 'filename': '', - 'imgnum': 0, - } - ) - - return gt_db diff --git a/spaces/Kevin676/AutoGPT/autogpt/agent/__init__.py b/spaces/Kevin676/AutoGPT/autogpt/agent/__init__.py deleted file mode 100644 
index e928af2205b1c52d19dc89ec4246e8c1d2c20e3f..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/AutoGPT/autogpt/agent/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -from autogpt.agent.agent import Agent -from autogpt.agent.agent_manager import AgentManager - -__all__ = ["Agent", "AgentManager"] diff --git a/spaces/Kevin676/AutoGPT/autogpt/memory/weaviate.py b/spaces/Kevin676/AutoGPT/autogpt/memory/weaviate.py deleted file mode 100644 index 5408e9a97aa3594ad443448cfc31f2546a01eb09..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/AutoGPT/autogpt/memory/weaviate.py +++ /dev/null @@ -1,127 +0,0 @@ -import uuid - -import weaviate -from weaviate import Client -from weaviate.embedded import EmbeddedOptions -from weaviate.util import generate_uuid5 - -from autogpt.config import Config -from autogpt.memory.base import MemoryProviderSingleton, get_ada_embedding - - -def default_schema(weaviate_index): - return { - "class": weaviate_index, - "properties": [ - { - "name": "raw_text", - "dataType": ["text"], - "description": "original text for the embedding", - } - ], - } - - -class WeaviateMemory(MemoryProviderSingleton): - def __init__(self, cfg): - auth_credentials = self._build_auth_credentials(cfg) - - url = f"{cfg.weaviate_protocol}://{cfg.weaviate_host}:{cfg.weaviate_port}" - - if cfg.use_weaviate_embedded: - self.client = Client( - embedded_options=EmbeddedOptions( - hostname=cfg.weaviate_host, - port=int(cfg.weaviate_port), - persistence_data_path=cfg.weaviate_embedded_path, - ) - ) - - print( - f"Weaviate Embedded running on: {url} with persistence path: {cfg.weaviate_embedded_path}" - ) - else: - self.client = Client(url, auth_client_secret=auth_credentials) - - self.index = WeaviateMemory.format_classname(cfg.memory_index) - self._create_schema() - - @staticmethod - def format_classname(index): - # weaviate uses capitalised index names - # The python client uses the following code to format - # index names before the corresponding class is created - if len(index) == 1: - return index.capitalize() - return index[0].capitalize() + index[1:] - - def _create_schema(self): - schema = default_schema(self.index) - if not self.client.schema.contains(schema): - self.client.schema.create_class(schema) - - def _build_auth_credentials(self, cfg): - if cfg.weaviate_username and cfg.weaviate_password: - return weaviate.AuthClientPassword( - cfg.weaviate_username, cfg.weaviate_password - ) - if cfg.weaviate_api_key: - return weaviate.AuthApiKey(api_key=cfg.weaviate_api_key) - else: - return None - - def add(self, data): - vector = get_ada_embedding(data) - - doc_uuid = generate_uuid5(data, self.index) - data_object = {"raw_text": data} - - with self.client.batch as batch: - batch.add_data_object( - uuid=doc_uuid, - data_object=data_object, - class_name=self.index, - vector=vector, - ) - - return f"Inserting data into memory at uuid: {doc_uuid}:\n data: {data}" - - def get(self, data): - return self.get_relevant(data, 1) - - def clear(self): - self.client.schema.delete_all() - - # weaviate does not yet have a neat way to just remove the items in an index - # without removing the entire schema, therefore we need to re-create it - # after a call to delete_all - self._create_schema() - - return "Obliterated" - - def get_relevant(self, data, num_relevant=5): - query_embedding = get_ada_embedding(data) - try: - results = ( - self.client.query.get(self.index, ["raw_text"]) - .with_near_vector({"vector": query_embedding, "certainty": 0.7}) - .with_limit(num_relevant) - .do() - ) - - if 
len(results["data"]["Get"][self.index]) > 0: - return [ - str(item["raw_text"]) for item in results["data"]["Get"][self.index] - ] - else: - return [] - - except Exception as err: - print(f"Unexpected error {err=}, {type(err)=}") - return [] - - def get_stats(self): - result = self.client.query.aggregate(self.index).with_meta_count().do() - class_data = result["data"]["Aggregate"][self.index] - - return class_data[0]["meta"] if class_data else {} diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/ppg_extractor/encoder/subsampling.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/ppg_extractor/encoder/subsampling.py deleted file mode 100644 index e754126b2ec1f2d914206ec35ec026c7b6add17f..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/ppg_extractor/encoder/subsampling.py +++ /dev/null @@ -1,218 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding: utf-8 -*- - -# Copyright 2019 Shigeki Karita -# Apache 2.0 (http://www.apache.org/licenses/LICENSE-2.0) - -"""Subsampling layer definition.""" -import logging -import torch - -from espnet.nets.pytorch_backend.transformer.embedding import PositionalEncoding - - -class Conv2dSubsampling(torch.nn.Module): - """Convolutional 2D subsampling (to 1/4 length or 1/2 length). - - :param int idim: input dim - :param int odim: output dim - :param flaot dropout_rate: dropout rate - :param torch.nn.Module pos_enc: custom position encoding layer - - """ - - def __init__(self, idim, odim, dropout_rate, pos_enc=None, - subsample_by_2=False, - ): - """Construct an Conv2dSubsampling object.""" - super(Conv2dSubsampling, self).__init__() - self.subsample_by_2 = subsample_by_2 - if subsample_by_2: - self.conv = torch.nn.Sequential( - torch.nn.Conv2d(1, odim, kernel_size=5, stride=1, padding=2), - torch.nn.ReLU(), - torch.nn.Conv2d(odim, odim, kernel_size=4, stride=2, padding=1), - torch.nn.ReLU(), - ) - self.out = torch.nn.Sequential( - torch.nn.Linear(odim * (idim // 2), odim), - pos_enc if pos_enc is not None else PositionalEncoding(odim, dropout_rate), - ) - else: - self.conv = torch.nn.Sequential( - torch.nn.Conv2d(1, odim, kernel_size=4, stride=2, padding=1), - torch.nn.ReLU(), - torch.nn.Conv2d(odim, odim, kernel_size=4, stride=2, padding=1), - torch.nn.ReLU(), - ) - self.out = torch.nn.Sequential( - torch.nn.Linear(odim * (idim // 4), odim), - pos_enc if pos_enc is not None else PositionalEncoding(odim, dropout_rate), - ) - - def forward(self, x, x_mask): - """Subsample x. - - :param torch.Tensor x: input tensor - :param torch.Tensor x_mask: input mask - :return: subsampled x and mask - :rtype Tuple[torch.Tensor, torch.Tensor] - - """ - x = x.unsqueeze(1) # (b, c, t, f) - x = self.conv(x) - b, c, t, f = x.size() - x = self.out(x.transpose(1, 2).contiguous().view(b, t, c * f)) - if x_mask is None: - return x, None - if self.subsample_by_2: - return x, x_mask[:, :, ::2] - else: - return x, x_mask[:, :, ::2][:, :, ::2] - - def __getitem__(self, key): - """Subsample x. - - When reset_parameters() is called, if use_scaled_pos_enc is used, - return the positioning encoding. - - """ - if key != -1: - raise NotImplementedError("Support only `-1` (for `reset_parameters`).") - return self.out[key] - - -class Conv2dNoSubsampling(torch.nn.Module): - """Convolutional 2D without subsampling. 
- 
- :param int idim: input dim 
- :param int odim: output dim 
- :param float dropout_rate: dropout rate 
- :param torch.nn.Module pos_enc: custom position encoding layer 
- 
- """ 
- 
- def __init__(self, idim, odim, dropout_rate, pos_enc=None): 
- """Construct a Conv2dNoSubsampling object.""" 
- super().__init__() 
- logging.info("Encoder does not down-sample the mel-spectrogram.") 
- self.conv = torch.nn.Sequential( 
- torch.nn.Conv2d(1, odim, kernel_size=5, stride=1, padding=2), 
- torch.nn.ReLU(), 
- torch.nn.Conv2d(odim, odim, kernel_size=5, stride=1, padding=2), 
- torch.nn.ReLU(), 
- ) 
- self.out = torch.nn.Sequential( 
- torch.nn.Linear(odim * idim, odim), 
- pos_enc if pos_enc is not None else PositionalEncoding(odim, dropout_rate), 
- ) 
- 
- def forward(self, x, x_mask): 
- """Forward x without subsampling. 
- 
- :param torch.Tensor x: input tensor 
- :param torch.Tensor x_mask: input mask 
- :return: processed x and unchanged mask 
- :rtype Tuple[torch.Tensor, torch.Tensor] 
- 
- """ 
- x = x.unsqueeze(1) # (b, c, t, f) 
- x = self.conv(x) 
- b, c, t, f = x.size() 
- x = self.out(x.transpose(1, 2).contiguous().view(b, t, c * f)) 
- if x_mask is None: 
- return x, None 
- return x, x_mask 
- 
- def __getitem__(self, key): 
- """Get the positional encoding layer. 
- 
- When reset_parameters() is called, if use_scaled_pos_enc is used, 
- return the positional encoding. 
- 
- """ 
- if key != -1: 
- raise NotImplementedError("Support only `-1` (for `reset_parameters`).") 
- return self.out[key] 
- 
- 
-class Conv2dSubsampling6(torch.nn.Module): 
- """Convolutional 2D subsampling (to 1/6 length). 
- 
- :param int idim: input dim 
- :param int odim: output dim 
- :param float dropout_rate: dropout rate 
- 
- """ 
- 
- def __init__(self, idim, odim, dropout_rate): 
- """Construct a Conv2dSubsampling6 object.""" 
- super(Conv2dSubsampling6, self).__init__() 
- self.conv = torch.nn.Sequential( 
- torch.nn.Conv2d(1, odim, 3, 2), 
- torch.nn.ReLU(), 
- torch.nn.Conv2d(odim, odim, 5, 3), 
- torch.nn.ReLU(), 
- ) 
- self.out = torch.nn.Sequential( 
- torch.nn.Linear(odim * (((idim - 1) // 2 - 2) // 3), odim), 
- PositionalEncoding(odim, dropout_rate), 
- ) 
- 
- def forward(self, x, x_mask): 
- """Subsample x. 
- 
- :param torch.Tensor x: input tensor 
- :param torch.Tensor x_mask: input mask 
- :return: subsampled x and mask 
- :rtype Tuple[torch.Tensor, torch.Tensor] 
- """ 
- x = x.unsqueeze(1) # (b, c, t, f) 
- x = self.conv(x) 
- b, c, t, f = x.size() 
- x = self.out(x.transpose(1, 2).contiguous().view(b, t, c * f)) 
- if x_mask is None: 
- return x, None 
- return x, x_mask[:, :, :-2:2][:, :, :-4:3] 
- 
- 
-class Conv2dSubsampling8(torch.nn.Module): 
- """Convolutional 2D subsampling (to 1/8 length). 
- 
- :param int idim: input dim 
- :param int odim: output dim 
- :param float dropout_rate: dropout rate 
- 
- """ 
- 
- def __init__(self, idim, odim, dropout_rate): 
- """Construct a Conv2dSubsampling8 object.""" 
- super(Conv2dSubsampling8, self).__init__() 
- self.conv = torch.nn.Sequential( 
- torch.nn.Conv2d(1, odim, 3, 2), 
- torch.nn.ReLU(), 
- torch.nn.Conv2d(odim, odim, 3, 2), 
- torch.nn.ReLU(), 
- torch.nn.Conv2d(odim, odim, 3, 2), 
- torch.nn.ReLU(), 
- ) 
- self.out = torch.nn.Sequential( 
- torch.nn.Linear(odim * ((((idim - 1) // 2 - 1) // 2 - 1) // 2), odim), 
- PositionalEncoding(odim, dropout_rate), 
- ) 
- 
- def forward(self, x, x_mask): 
- """Subsample x. 
- - :param torch.Tensor x: input tensor - :param torch.Tensor x_mask: input mask - :return: subsampled x and mask - :rtype Tuple[torch.Tensor, torch.Tensor] - """ - x = x.unsqueeze(1) # (b, c, t, f) - x = self.conv(x) - b, c, t, f = x.size() - x = self.out(x.transpose(1, 2).contiguous().view(b, t, c * f)) - if x_mask is None: - return x, None - return x, x_mask[:, :, :-2:2][:, :, :-2:2][:, :, :-2:2] diff --git a/spaces/KingBlaze1227/PC-PICKERS/README.md b/spaces/KingBlaze1227/PC-PICKERS/README.md deleted file mode 100644 index 1453b7a93e22cf243f0d811f768c39c53d44ff6e..0000000000000000000000000000000000000000 --- a/spaces/KingBlaze1227/PC-PICKERS/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: PC PICKERS -emoji: 🐠 -colorFrom: pink -colorTo: yellow -sdk: static -pinned: false -license: cc ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Kreaols/ChuanhuChatGPT/assets/custom.css b/spaces/Kreaols/ChuanhuChatGPT/assets/custom.css deleted file mode 100644 index 22108488886cfc8d7772214dd9b83727b3fca6a3..0000000000000000000000000000000000000000 --- a/spaces/Kreaols/ChuanhuChatGPT/assets/custom.css +++ /dev/null @@ -1,468 +0,0 @@ -:root { - --chatbot-color-light: #000000; - --chatbot-color-dark: #FFFFFF; - --chatbot-background-color-light: #F3F3F3; - --chatbot-background-color-dark: #121111; - --message-user-background-color-light: #95EC69; - --message-user-background-color-dark: #26B561; - --message-bot-background-color-light: #FFFFFF; - --message-bot-background-color-dark: #2C2C2C; -} - -#app_title { - font-weight: var(--prose-header-text-weight); - font-size: var(--text-xxl); - line-height: 1.3; - text-align: left; - margin-top: 6px; - white-space: nowrap; -} -#description { - text-align: center; - margin: 32px 0 4px 0; -} - -/* gradio的页脚信息 */ -footer { - /* display: none !important; */ - margin-top: .2em !important; - font-size: 85%; -} -#footer { - text-align: center; -} -#footer div { - display: inline-block; -} -#footer .versions{ - font-size: 85%; - opacity: 0.60; -} - -#float_display { - position: absolute; - max-height: 30px; -} -/* user_info */ -#user_info { - white-space: nowrap; - position: absolute; left: 8em; top: .2em; - z-index: var(--layer-2); - box-shadow: var(--block-shadow); - border: none; border-radius: var(--block-label-radius); - background: var(--color-accent); - padding: var(--block-label-padding); - font-size: var(--block-label-text-size); line-height: var(--line-sm); - width: auto; min-height: 30px!important; - opacity: 1; - transition: opacity 0.3s ease-in-out; -} -#user_info .wrap { - opacity: 0; -} -#user_info p { - color: white; - font-weight: var(--block-label-text-weight); -} -#user_info.hideK { - opacity: 0; - transition: opacity 1s ease-in-out; -} - -/* status_display */ -#status_display { - display: flex; - min-height: 2em; - align-items: flex-end; - justify-content: flex-end; -} -#status_display p { - font-size: .85em; - font-family: ui-monospace, "SF Mono", "SFMono-Regular", "Menlo", "Consolas", "Liberation Mono", "Microsoft Yahei UI", "Microsoft Yahei", monospace; - /* Windows下中文的monospace会fallback为新宋体,实在太丑,这里折中使用微软雅黑 */ - color: var(--body-text-color-subdued); -} - -#status_display { - transition: all 0.6s; -} -#chuanhu_chatbot { - transition: height 0.3s ease; -} - -/* usage_display */ -.insert_block { - position: relative; - margin: 0; - padding: .5em 1em; - box-shadow: var(--block-shadow); - border-width: var(--block-border-width); - border-color: var(--block-border-color); 
- border-radius: var(--block-radius); - background: var(--block-background-fill); - width: 100%; - line-height: var(--line-sm); - min-height: 2em; -} -#usage_display p, #usage_display span { - margin: 0; - font-size: .85em; - color: var(--body-text-color-subdued); -} -.progress-bar { - background-color: var(--input-background-fill);; - margin: .5em 0 !important; - height: 20px; - border-radius: 10px; - overflow: hidden; -} -.progress { - background-color: var(--block-title-background-fill); - height: 100%; - border-radius: 10px; - text-align: right; - transition: width 0.5s ease-in-out; -} -.progress-text { - /* color: white; */ - color: var(--color-accent) !important; - font-size: 1em !important; - font-weight: bold; - padding-right: 10px; - line-height: 20px; -} - -.apSwitch { - top: 2px; - display: inline-block; - height: 24px; - position: relative; - width: 48px; - border-radius: 12px; -} -.apSwitch input { - display: none !important; -} -.apSlider { - background-color: var(--neutral-200); - bottom: 0; - cursor: pointer; - left: 0; - position: absolute; - right: 0; - top: 0; - transition: .4s; - font-size: 18px; - border-radius: 12px; -} -.apSlider::before { - bottom: -1.5px; - left: 1px; - position: absolute; - transition: .4s; - content: "🌞"; -} -input:checked + .apSlider { - background-color: var(--primary-600); -} -input:checked + .apSlider::before { - transform: translateX(23px); - content:"🌚"; -} - -/* Override Slider Styles (for webkit browsers like Safari and Chrome) - * 好希望这份提案能早日实现 https://github.com/w3c/csswg-drafts/issues/4410 - * 进度滑块在各个平台还是太不统一了 - */ -input[type="range"] { - -webkit-appearance: none; - height: 4px; - background: var(--input-background-fill); - border-radius: 5px; - background-image: linear-gradient(var(--primary-500),var(--primary-500)); - background-size: 0% 100%; - background-repeat: no-repeat; -} -input[type="range"]::-webkit-slider-thumb { - -webkit-appearance: none; - height: 20px; - width: 20px; - border-radius: 50%; - border: solid 0.5px #ddd; - background-color: white; - cursor: ew-resize; - box-shadow: var(--input-shadow); - transition: background-color .1s ease; -} -input[type="range"]::-webkit-slider-thumb:hover { - background: var(--neutral-50); -} -input[type=range]::-webkit-slider-runnable-track { - -webkit-appearance: none; - box-shadow: none; - border: none; - background: transparent; -} - -#submit_btn, #cancel_btn { - height: 42px !important; -} -#submit_btn::before { - content: url("data:image/svg+xml, %3Csvg width='21px' height='20px' viewBox='0 0 21 20' version='1.1' xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink'%3E %3Cg id='page' stroke='none' stroke-width='1' fill='none' fill-rule='evenodd'%3E %3Cg id='send' transform='translate(0.435849, 0.088463)' fill='%23FFFFFF' fill-rule='nonzero'%3E %3Cpath d='M0.579148261,0.0428666046 C0.301105539,-0.0961547561 -0.036517765,0.122307382 0.0032026237,0.420210298 L1.4927172,18.1553639 C1.5125774,18.4334066 1.79062012,18.5922882 2.04880264,18.4929872 L8.24518329,15.8913017 L11.6412765,19.7441794 C11.8597387,19.9825018 12.2370824,19.8832008 12.3165231,19.5852979 L13.9450591,13.4882182 L19.7839562,11.0255541 C20.0619989,10.8865327 20.0818591,10.4694687 19.7839562,10.3105871 L0.579148261,0.0428666046 Z M11.6138902,17.0883151 L9.85385903,14.7195502 L0.718169621,0.618812241 L12.69945,12.9346347 L11.6138902,17.0883151 Z' id='shape'%3E%3C/path%3E %3C/g%3E %3C/g%3E %3C/svg%3E"); - height: 21px; -} -#cancel_btn::before { - content: url("data:image/svg+xml,%3Csvg width='21px' 
height='21px' viewBox='0 0 21 21' version='1.1' xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink'%3E %3Cg id='pg' stroke='none' stroke-width='1' fill='none' fill-rule='evenodd'%3E %3Cpath d='M10.2072007,20.088463 C11.5727865,20.088463 12.8594566,19.8259823 14.067211,19.3010209 C15.2749653,18.7760595 16.3386126,18.0538087 17.2581528,17.1342685 C18.177693,16.2147282 18.8982283,15.1527965 19.4197586,13.9484733 C19.9412889,12.7441501 20.202054,11.4557644 20.202054,10.0833163 C20.202054,8.71773046 19.9395733,7.43106036 19.4146119,6.22330603 C18.8896505,5.01555169 18.1673997,3.95018885 17.2478595,3.0272175 C16.3283192,2.10424615 15.2646719,1.3837109 14.0569176,0.865611739 C12.8491633,0.34751258 11.5624932,0.088463 10.1969073,0.088463 C8.83132146,0.088463 7.54636692,0.34751258 6.34204371,0.865611739 C5.1377205,1.3837109 4.07407321,2.10424615 3.15110186,3.0272175 C2.22813051,3.95018885 1.5058797,5.01555169 0.984349419,6.22330603 C0.46281914,7.43106036 0.202054,8.71773046 0.202054,10.0833163 C0.202054,11.4557644 0.4645347,12.7441501 0.9894961,13.9484733 C1.5144575,15.1527965 2.23670831,16.2147282 3.15624854,17.1342685 C4.07578877,18.0538087 5.1377205,18.7760595 6.34204371,19.3010209 C7.54636692,19.8259823 8.83475258,20.088463 10.2072007,20.088463 Z M10.2072007,18.2562448 C9.07493099,18.2562448 8.01471483,18.0452309 7.0265522,17.6232031 C6.03838956,17.2011753 5.17031614,16.6161693 4.42233192,15.8681851 C3.6743477,15.1202009 3.09105726,14.2521274 2.67246059,13.2639648 C2.25386392,12.2758022 2.04456558,11.215586 2.04456558,10.0833163 C2.04456558,8.95104663 2.25386392,7.89083047 2.67246059,6.90266784 C3.09105726,5.9145052 3.6743477,5.04643178 4.42233192,4.29844756 C5.17031614,3.55046334 6.036674,2.9671729 7.02140552,2.54857623 C8.00613703,2.12997956 9.06463763,1.92068122 10.1969073,1.92068122 C11.329177,1.92068122 12.3911087,2.12997956 13.3827025,2.54857623 C14.3742962,2.9671729 15.2440852,3.55046334 15.9920694,4.29844756 C16.7400537,5.04643178 17.3233441,5.9145052 17.7419408,6.90266784 C18.1605374,7.89083047 18.3698358,8.95104663 18.3698358,10.0833163 C18.3698358,11.215586 18.1605374,12.2758022 17.7419408,13.2639648 C17.3233441,14.2521274 16.7400537,15.1202009 15.9920694,15.8681851 C15.2440852,16.6161693 14.3760118,17.2011753 13.3878492,17.6232031 C12.3996865,18.0452309 11.3394704,18.2562448 10.2072007,18.2562448 Z M7.65444721,13.6242324 L12.7496608,13.6242324 C13.0584616,13.6242324 13.3003556,13.5384544 13.4753427,13.3668984 C13.6503299,13.1953424 13.7378234,12.9585951 13.7378234,12.6566565 L13.7378234,7.49968276 C13.7378234,7.19774418 13.6503299,6.96099688 13.4753427,6.78944087 C13.3003556,6.61788486 13.0584616,6.53210685 12.7496608,6.53210685 L7.65444721,6.53210685 C7.33878414,6.53210685 7.09345904,6.61788486 6.91847191,6.78944087 C6.74348478,6.96099688 6.65599121,7.19774418 6.65599121,7.49968276 L6.65599121,12.6566565 C6.65599121,12.9585951 6.74348478,13.1953424 6.91847191,13.3668984 C7.09345904,13.5384544 7.33878414,13.6242324 7.65444721,13.6242324 Z' id='shape' fill='%23FF3B30' fill-rule='nonzero'%3E%3C/path%3E %3C/g%3E %3C/svg%3E"); - height: 21px; -} -/* list */ -ol:not(.options), ul:not(.options) { - padding-inline-start: 2em !important; -} - -/* 亮色(默认) */ -#chuanhu_chatbot { - background-color: var(--chatbot-background-color-light) !important; - color: var(--chatbot-color-light) !important; -} -[data-testid = "bot"] { - background-color: var(--message-bot-background-color-light) !important; -} -[data-testid = "user"] { - background-color: 
var(--message-user-background-color-light) !important; -} -/* 暗色 */ -.dark #chuanhu_chatbot { - background-color: var(--chatbot-background-color-dark) !important; - color: var(--chatbot-color-dark) !important; -} -.dark [data-testid = "bot"] { - background-color: var(--message-bot-background-color-dark) !important; -} -.dark [data-testid = "user"] { - background-color: var(--message-user-background-color-dark) !important; -} - -/* 屏幕宽度大于等于500px的设备 */ -/* update on 2023.4.8: 高度的细致调整已写入JavaScript */ -@media screen and (min-width: 500px) { - #chuanhu_chatbot { - height: calc(100vh - 200px); - } - #chuanhu_chatbot .wrap { - max-height: calc(100vh - 200px - var(--line-sm)*1rem - 2*var(--block-label-margin) ); - } -} -/* 屏幕宽度小于500px的设备 */ -@media screen and (max-width: 499px) { - #chuanhu_chatbot { - height: calc(100vh - 140px); - } - #chuanhu_chatbot .wrap { - max-height: calc(100vh - 140px - var(--line-sm)*1rem - 2*var(--block-label-margin) ); - } - [data-testid = "bot"] { - max-width: 95% !important; - } - #app_title h1{ - letter-spacing: -1px; font-size: 22px; - } -} -#chuanhu_chatbot .wrap { - overflow-x: hidden; -} -/* 对话气泡 */ -.message { - border-radius: var(--radius-xl) !important; - border: none; - padding: var(--spacing-xl) !important; - font-size: var(--text-md) !important; - line-height: var(--line-md) !important; - min-height: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); - min-width: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); -} -[data-testid = "bot"] { - max-width: 85%; - border-bottom-left-radius: 0 !important; -} -[data-testid = "user"] { - max-width: 85%; - width: auto !important; - border-bottom-right-radius: 0 !important; -} - -.message.user p { - white-space: pre-wrap; -} -.message .user-message { - display: block; - padding: 0 !important; - white-space: pre-wrap; -} - -.message .md-message p { - margin-top: 0.6em !important; - margin-bottom: 0.6em !important; -} -.message .md-message p:first-child { margin-top: 0 !important; } -.message .md-message p:last-of-type { margin-bottom: 0 !important; } - -.message .md-message { - display: block; - padding: 0 !important; -} -.message .raw-message p { - margin:0 !important; -} -.message .raw-message { - display: block; - padding: 0 !important; - white-space: pre-wrap; -} -.raw-message.hideM, .md-message.hideM { - display: none; -} - -/* custom buttons */ -.chuanhu-btn { - border-radius: 5px; - /* background-color: #E6E6E6 !important; */ - color: rgba(120, 120, 120, 0.64) !important; - padding: 4px !important; - position: absolute; - right: -22px; - cursor: pointer !important; - transition: color .2s ease, background-color .2s ease; -} -.chuanhu-btn:hover { - background-color: rgba(167, 167, 167, 0.25) !important; - color: unset !important; -} -.chuanhu-btn:active { - background-color: rgba(167, 167, 167, 0.5) !important; -} -.chuanhu-btn:focus { - outline: none; -} -.copy-bot-btn { - /* top: 18px; */ - bottom: 0; -} -.toggle-md-btn { - /* top: 0; */ - bottom: 20px; -} -.copy-code-btn { - position: relative; - float: right; - font-size: 1em; - cursor: pointer; -} - -.message-wrap>div img{ - border-radius: 10px !important; -} - -/* history message */ -.wrap>.history-message { - padding: 10px !important; -} -.history-message { - /* padding: 0 !important; */ - opacity: 80%; - display: flex; - flex-direction: column; -} -.history-message>.history-message { - padding: 0 !important; -} -.history-message>.message-wrap { - padding: 0 !important; - margin-bottom: 16px; -} -.history-message>.message { - 
margin-bottom: 16px; -} -.wrap>.history-message::after { - content: ""; - display: block; - height: 2px; - background-color: var(--body-text-color-subdued); - margin-bottom: 10px; - margin-top: -10px; - clear: both; -} -.wrap>.history-message>:last-child::after { - content: "仅供查看"; - display: block; - text-align: center; - color: var(--body-text-color-subdued); - font-size: 0.8em; -} - -/* 表格 */ -table { - margin: 1em 0; - border-collapse: collapse; - empty-cells: show; -} -td,th { - border: 1.2px solid var(--border-color-primary) !important; - padding: 0.2em; -} -thead { - background-color: rgba(175,184,193,0.2); -} -thead th { - padding: .5em .2em; -} -/* 行内代码 */ -.message :not(pre) code { - display: inline; - white-space: break-spaces; - font-family: var(--font-mono); - border-radius: 6px; - margin: 0 2px 0 2px; - padding: .2em .4em .1em .4em; - background-color: rgba(175,184,193,0.2); -} -/* 代码块 */ -.message pre, -.message pre[class*=language-] { - color: #fff; - overflow-x: auto; - overflow-y: hidden; - margin: .8em 1em 1em 0em !important; - padding: var(--spacing-xl) 1.2em !important; - border-radius: var(--radius-lg) !important; -} -.message pre code, -.message pre code[class*=language-] { - color: #fff; - padding: 0; - margin: 0; - background-color: unset; - text-shadow: none; - font-family: var(--font-mono); -} -/* 覆盖 gradio 丑陋的复制按钮样式 */ -pre button[title="copy"] { - border-radius: 5px; - transition: background-color .2s ease; -} -pre button[title="copy"]:hover { - background-color: #333232; -} -pre button .check { - color: #fff !important; - background: var(--neutral-950) !important; -} - -/* 覆盖prism.css */ -.language-css .token.string, -.style .token.string, -.token.entity, -.token.operator, -.token.url { - background: none !important; -} diff --git a/spaces/KyanChen/RSPrompter/mmdet/__init__.py b/spaces/KyanChen/RSPrompter/mmdet/__init__.py deleted file mode 100644 index 9f6140e121bc140896a7f432465651bfb1111575..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/__init__.py +++ /dev/null @@ -1,27 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import mmcv -import mmengine -from mmengine.utils import digit_version - -from .version import __version__, version_info - -mmcv_minimum_version = '2.0' -mmcv_maximum_version = '2.5.0' -mmcv_version = digit_version(mmcv.__version__) - -mmengine_minimum_version = '0.7.0' -mmengine_maximum_version = '1.5.0' -mmengine_version = digit_version(mmengine.__version__) - -assert (mmcv_version >= digit_version(mmcv_minimum_version) - and mmcv_version < digit_version(mmcv_maximum_version)), \ - f'MMCV=={mmcv.__version__} is used but incompatible. ' \ - f'Please install mmcv>={mmcv_minimum_version}, <{mmcv_maximum_version}.' - -assert (mmengine_version >= digit_version(mmengine_minimum_version) - and mmengine_version < digit_version(mmengine_maximum_version)), \ - f'MMEngine=={mmengine.__version__} is used but incompatible. ' \ - f'Please install mmengine>={mmengine_minimum_version}, ' \ - f'<{mmengine_maximum_version}.' 
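# For reference, a minimal sketch (not mmengine's actual implementation) of the
# idea behind digit_version, which the asserts above rely on: it maps a dotted
# version string to a tuple of ints so that comparisons are numeric rather than
# lexicographic. The real digit_version also handles rc/dev suffixes, which this
# sketch assumes away.
def digit_version_sketch(version_str):
    # '2.10.0' -> (2, 10, 0); tuples compare element-wise.
    return tuple(int(part) for part in version_str.split('.'))

assert digit_version_sketch('2.10.0') > digit_version_sketch('2.2.0')  # numeric order
assert '2.10.0' < '2.2.0'  # naive string comparison would get this wrong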
- -__all__ = ['__version__', 'version_info', 'digit_version'] diff --git a/spaces/LabelStudio/LabelStudio/Dockerfile b/spaces/LabelStudio/LabelStudio/Dockerfile deleted file mode 100644 index 9ba913c6937a5238dd32d654197330a4bbf6f63e..0000000000000000000000000000000000000000 --- a/spaces/LabelStudio/LabelStudio/Dockerfile +++ /dev/null @@ -1,127 +0,0 @@ -FROM heartexlabs/label-studio:hf-latest - -################################################################################ -# -# How to Disable Public Account Creation -# -------------------------------------- -# By default this space allows for the unrestricted creation of new accounts -# will full access to all projects and data. This is great for trying out -# Label Studio and collaborating on projects, but you may want to restrict -# access to your space to only authorized users. Uncomment the following line -# to disable public account creation for this space. -# -# ENV LABEL_STUDIO_DISABLE_SIGNUP_WITHOUT_LINK=true -# -# Set secrets in your space to create an inital user, and log in with your -# provided username and password. Do not set these in your Dockerfile, as they -# globally visible on a public space. -# -# LABEL_STUDIO_USERNAME -# LABEL_STUDIO_PASSWORD -# -# You will need to provide new users with an invitation link to join the space. -# -################################################################################ - -################################################################################ -# -# How to Enable Persistent Storage for Label Studio in Hugging Face Spaces -# ------------------------------------------------------------------------ -# -# By default this space stores all project configuration and data annotations -# in local storage with sqlite. If the space is reset, all configuration and -# annotation data in the space will be lost. You can enable configuration -# persistence through one of two methods: -# -# 1) Enabling Hugging Face Persistent Storage for saving project and annotation -# settings, as well as local task storage. -# 2) Connecting an external Postgres database for saving project and annotation -# settings, and cloud by connecting cloud storage for tasks. -# -################################################################################ - -################################################################################ -# -# How to Enable Hugging Face Persistent Storage for Label Studio -# -------------------------------------------------------------- -# -# In the Hugging Face Label Studio Space settings, select the appropriate -# Persistent Storage tier. Note that Persistent Storage is a paid add-on. -# By default, persistent storage is mounted to /data. In your Space settings, -# set the following variables: -# -# LABEL_STUDIO_BASE_DATA_DIR=/data -# ENV STORAGE_PERSISTENCE=1 -# -# Your space will restart. NOTE: if you have existing settings and data, -# they will be lost in this first restart. Data and setting will only be -# preserved on subsequent restarts of the space. -# -################################################################################ - -################################################################################ -# -# How to Enable Configuration Persistence with Postgres -# ----------------------------------------------------- -# -# Set the following secret variables to match your own hosted instance of -# Postgres. 
We strongly recommend setting these as secrets to prevent leaking -# information about your database service to the public in your spaces -# definition. -# -# ENV DJANGO_DB=default -# ENV POSTGRE_NAME= -# ENV POSTGRE_PORT= -# ENV POSTGRE_USER= -# ENV POSTGRE_PASSWORD= -# ENV POSTGRE_PORT= -# ENV POSTGRE_HOST= -# -# Uncomment the following line or set the following Space variable to remove -# the warning about ephemeral storage -# -# ENV STORAGE_PERSISTENCE=1 -# -# Note that you will need to connect cloud storage to host data items that you -# want to annotate, as local storage will not be preserved across a space reset. -# -# -# How to Enable Cloud Storage -# --------------------------- -# By default the only data storage enabled for this space is local. In the case -# of a space reset, all data will be lost. To enable permanent storage, you -# must enable a cloud storage connector. We also strongly recommend enabling -# configuration persistence to preserve project data, annotations, and user -# settings. Choose the appropriate cloud connector and configure the secrets -# for it. -# -# Amazon S3 -# ========= -# STORAGE_TYPE=s3 -# STORAGE_AWS_ACCESS_KEY_ID="" -# STORAGE_AWS_SECRET_ACCESS_KEY="" -# STORAGE_AWS_BUCKET_NAME="" -# STORAGE_AWS_REGION_NAME="" -# STORAGE_AWS_FOLDER="" -# -# Google Cloud Storage -# ==================== -# -# STORAGE_TYPE=gcs -# STORAGE_GCS_BUCKET_NAME="" -# STORAGE_GCS_PROJECT_ID="" -# STORAGE_GCS_FOLDER="" -# GOOGLE_APPLICATION_CREDENTIALS="/opt/heartex/secrets/key.json" -# -# Azure Blob Storage -# ================== -# -# STORAGE_TYPE=azure -# STORAGE_AZURE_ACCOUNT_NAME="" -# STORAGE_AZURE_ACCOUNT_KEY="" -# STORAGE_AZURE_CONTAINER_NAME="" -# STORAGE_AZURE_FOLDER="" -# -################################################################################ - -CMD exec label-studio --host=$SPACE_HOST diff --git a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_pack/models.py b/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_pack/models.py deleted file mode 100644 index d898604960f129fc37f464ee3669bb61cfa8f614..0000000000000000000000000000000000000000 --- a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_pack/models.py +++ /dev/null @@ -1,1142 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F -from lib.infer.infer_pack import modules -from lib.infer.infer_pack import attentions -from lib.infer.infer_pack.commons import get_padding -from torch.nn import Conv1d, ConvTranspose1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from lib.infer.infer_pack.commons import init_weights -import numpy as np -from lib.infer.infer_pack import commons - - -class TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, 
lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class TextEncoder768(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(768, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def 
forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class SineGen(torch.nn.Module): - """Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__( - self, - samp_rate, - harmonic_num=0, - sine_amp=0.1, - noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False, - ): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - if uv.device.type == "privateuseone": # for DirectML - uv = uv.float() - return uv - - def forward(self, 
f0, upp): 
- """sine_tensor, uv = forward(f0) 
- input F0: tensor(batchsize=1, length, dim=1) 
- f0 for unvoiced steps should be 0 
- output sine_tensor: tensor(batchsize=1, length, dim) 
- output uv: tensor(batchsize=1, length, 1) 
- """ 
- with torch.no_grad(): 
- f0 = f0[:, None].transpose(1, 2) 
- f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) 
- # fundamental component 
- f0_buf[:, :, 0] = f0[:, :, 0] 
- for idx in np.arange(self.harmonic_num): 
- f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * ( 
- idx + 2 
- ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic 
- rad_values = (f0_buf / self.sampling_rate) % 1 # the % 1 means the products over n_har cannot be optimized away afterwards 
- rand_ini = torch.rand( 
- f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device 
- ) 
- rand_ini[:, 0] = 0 
- rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini 
- tmp_over_one = torch.cumsum(rad_values, 1) # % 1 (taking % 1 here would keep the cumsum below from being optimized) 
- tmp_over_one *= upp 
- tmp_over_one = F.interpolate( 
- tmp_over_one.transpose(2, 1), 
- scale_factor=upp, 
- mode="linear", 
- align_corners=True, 
- ).transpose(2, 1) 
- rad_values = F.interpolate( 
- rad_values.transpose(2, 1), scale_factor=upp, mode="nearest" 
- ).transpose( 
- 2, 1 
- ) 
- tmp_over_one %= 1 
- tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 
- cumsum_shift = torch.zeros_like(rad_values) 
- cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 
- sine_waves = torch.sin( 
- torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi 
- ) 
- sine_waves = sine_waves * self.sine_amp 
- uv = self._f02uv(f0) 
- uv = F.interpolate( 
- uv.transpose(2, 1), scale_factor=upp, mode="nearest" 
- ).transpose(2, 1) 
- noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 
- noise = noise_amp * torch.randn_like(sine_waves) 
- sine_waves = sine_waves * uv + noise 
- return sine_waves, uv, noise 
- 
- 
-class SourceModuleHnNSF(torch.nn.Module): 
- """SourceModule for hn-nsf 
- SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, 
- add_noise_std=0.003, voiced_threshold=0) 
- sampling_rate: sampling rate in Hz 
- harmonic_num: number of harmonics above F0 (default: 0) 
- sine_amp: amplitude of sine source signal (default: 0.1) 
- add_noise_std: std of additive Gaussian noise (default: 0.003) 
- note that amplitude of noise in unvoiced is decided 
- by sine_amp 
- voiced_threshold: threshold to set U/V given F0 (default: 0) 
- Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) 
- F0_sampled (batchsize, length, 1) 
- Sine_source (batchsize, length, 1) 
- noise_source (batchsize, length, 1) 
- uv (batchsize, length, 1) 
- """ 
- 
- def __init__( 
- self, 
- sampling_rate, 
- harmonic_num=0, 
- sine_amp=0.1, 
- add_noise_std=0.003, 
- voiced_threshold=0, 
- is_half=True, 
- ): 
- super(SourceModuleHnNSF, self).__init__() 
- 
- self.sine_amp = sine_amp 
- self.noise_std = add_noise_std 
- self.is_half = is_half 
- # to produce sine waveforms 
- self.l_sin_gen = SineGen( 
- sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshold 
- ) 
- 
- # to merge source harmonics into a single excitation 
- self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) 
- self.l_tanh = torch.nn.Tanh() 
- 
- def forward(self, x, upp=None): 
- sine_wavs, uv, _ = self.l_sin_gen(x, upp) 
- if self.is_half: 
- sine_wavs = sine_wavs.half() 
- sine_merge = self.l_tanh(self.l_linear(sine_wavs)) 
- return sine_merge, None, None # noise, uv 
- 
- 
-class GeneratorNSF(torch.nn.Module): 
- def __init__( 
- self, 
- initial_channel, 
- resblock, 
- resblock_kernel_sizes, 
- resblock_dilation_sizes, 
- upsample_rates, 
- upsample_initial_channel, 
- upsample_kernel_sizes, 
- 
gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMs256NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = 
segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds - ): # 这里ds是id,[bs,1] - # print(1,pitch.shape)#[bs,t] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length) - pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size) - # print(-2,pitchf.shape,z_slice.shape) - o = self.dec(z_slice, pitchf, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, pitch, nsff0, sid, rate=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - if rate: - head = int(z_p.shape[2] * rate) - z_p = z_p[:, :, -head:] - x_mask = x_mask[:, :, -head:] - nsff0 = nsff0[:, -head:] - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec(z * x_mask, nsff0, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs768NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - 
filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds - ): # 这里ds是id,[bs,1] - # print(1,pitch.shape)#[bs,t] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length) - pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size) - # print(-2,pitchf.shape,z_slice.shape) - o = self.dec(z_slice, pitchf, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, pitch, nsff0, sid, rate=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - if rate: - head = int(z_p.shape[2] * rate) - z_p = z_p[:, :, -head:] - x_mask = x_mask[:, :, -head:] - nsff0 = nsff0[:, -head:] - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec(z * x_mask, nsff0, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs256NSFsid_nono(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr=None, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=False, - ) - self.dec = Generator( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - 
upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, y, y_lengths, ds): # 这里ds是id,[bs,1] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - o = self.dec(z_slice, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, sid, rate=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - if rate: - head = int(z_p.shape[2] * rate) - z_p = z_p[:, :, -head:] - x_mask = x_mask[:, :, -head:] - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec(z * x_mask, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs768NSFsid_nono(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr=None, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=False, - ) - self.dec = Generator( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - 
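        # weight_norm reparameterizes each weight as g * v / ||v||; stripping it
        # folds that product back into a single tensor, so inference avoids the
        # per-forward renormalization overhead of the training-time hook.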
self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, y, y_lengths, ds): # 这里ds是id,[bs,1] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - o = self.dec(z_slice, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, sid, rate=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - if rate: - head = int(z_p.shape[2] * rate) - z_p = z_p[:, :, -head:] - x_mask = x_mask[:, :, -head:] - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec(z * x_mask, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class MultiPeriodDiscriminatorV2(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminatorV2, self).__init__() - # periods = [2, 3, 5, 7, 11, 17] - periods = [2, 3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return 
x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap \ No newline at end of file diff --git a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/tools/server.py b/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/tools/server.py deleted file mode 100644 index 0b6110f0779f2f0e6c1804abca6c3990975732ce..0000000000000000000000000000000000000000 --- a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/tools/server.py +++ /dev/null @@ -1,373 +0,0 @@ -from flask import Flask, request, Response -import os -import sys -import requests -import shutil -import subprocess -import wget -import signal -from bs4 import BeautifulSoup -import logging -import click - - -app = Flask(__name__) - -# Disable flask starting message -log = logging.getLogger('werkzeug') -log.setLevel(logging.ERROR) - -def secho(text, file=None, nl=None, err=None, color=None, **styles): - pass - -def echo(text, file=None, nl=None, err=None, color=None, **styles): - pass - -click.echo = echo -click.secho = secho - -# Get the current directory path -now_dir = os.path.dirname(os.path.abspath(__file__)) - -# Go up two levels in the directory hierarchy -for _ in range(2): - now_dir = os.path.dirname(now_dir) - -# Add now_dir to sys.path so Python can find modules in that location -sys.path.append(now_dir) - -from assets.i18n.i18n import I18nAuto -i18n = I18nAuto() - -# Use the code from the resources module but with some changes -def find_folder_parent(search_dir, folder_name): - for dirpath, dirnames, filenames in os.walk(search_dir): - if folder_name in dirnames: - return os.path.abspath(dirpath) - return None - -def get_mediafire_download_link(url): - response = requests.get(url) - response.raise_for_status() - soup = BeautifulSoup(response.text, 'html.parser') - download_button = soup.find('a', {'class': 'input popsok', 'aria-label': 'Download file'}) - if download_button: - download_link = download_button.get('href') - return download_link - else: - return None - -def download_from_url(url): - file_path = find_folder_parent(now_dir, "assets") - print(file_path) - zips_path = os.path.join(file_path, "assets", "zips") - print(zips_path) - 
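    # Staging area for the URL handlers below (Google Drive, Hugging Face,
    # Discord, pixeldrain, mediafire, weights.gg): each stages its download here
    # before the renaming and extraction pass at the end of this function.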
os.makedirs(zips_path, exist_ok=True) - if url != "": - print(i18n("Downloading the file: ") + f"{url}") - if "drive.google.com" in url: - if "file/d/" in url: - file_id = url.split("file/d/")[1].split("/")[0] - elif "id=" in url: - file_id = url.split("id=")[1].split("&")[0] - else: - return None - - if file_id: - os.chdir(zips_path) - result = subprocess.run( - ["gdown", f"https://drive.google.com/uc?id={file_id}", "--fuzzy"], - capture_output=True, - text=True, - encoding="utf-8", - ) - if ( - "Too many users have viewed or downloaded this file recently" - in str(result.stderr) - ): - return "too much use" - if "Cannot retrieve the public link of the file." in str(result.stderr): - return "private link" - print(result.stderr) - - elif "/blob/" in url or "/resolve/" in url: - os.chdir(zips_path) - if "/blob/" in url: - url = url.replace("/blob/", "/resolve/") - - response = requests.get(url, stream=True) - if response.status_code == 200: - file_name = url.split("/")[-1] - file_name = file_name.replace("%20", "_") - total_size_in_bytes = int(response.headers.get('content-length', 0)) - block_size = 1024 # 1 Kibibyte - progress_bar_length = 50 - progress = 0 - with open(os.path.join(zips_path, file_name), 'wb') as file: - for data in response.iter_content(block_size): - file.write(data) - progress += len(data) - progress_percent = int((progress / total_size_in_bytes) * 100) - num_dots = int((progress / total_size_in_bytes) * progress_bar_length) - progress_bar = "[" + "." * num_dots + " " * (progress_bar_length - num_dots) + "]" - print(f"{progress_percent}% {progress_bar} {progress}/{total_size_in_bytes} ", end="\r") - if progress_percent == 100: - print("\n") - else: - os.chdir(file_path) - return None - elif "mega.nz" in url: - if "#!" in url: - file_id = url.split("#!")[1].split("!")[0] - elif "file/" in url: - file_id = url.split("file/")[1].split("/")[0] - else: - return None - if file_id: - print("Mega.nz is unsupported due mega.py deprecation") - elif "/tree/main" in url: - response = requests.get(url) - soup = BeautifulSoup(response.content, "html.parser") - temp_url = "" - for link in soup.find_all("a", href=True): - if link["href"].endswith(".zip"): - temp_url = link["href"] - break - if temp_url: - url = temp_url - url = url.replace("blob", "resolve") - if "huggingface.co" not in url: - url = "https://huggingface.co" + url - - wget.download(url) - else: - print("No .zip file found on the page.") - elif "cdn.discordapp.com" in url: - file = requests.get(url) - os.chdir("./assets/zips") - if file.status_code == 200: - name = url.split("/") - with open( - os.path.join(name[-1]), "wb" - ) as newfile: - newfile.write(file.content) - else: - return None - elif "pixeldrain.com" in url: - try: - file_id = url.split("pixeldrain.com/u/")[1] - os.chdir(zips_path) - print(file_id) - response = requests.get(f"https://pixeldrain.com/api/file/{file_id}") - if response.status_code == 200: - file_name = ( - response.headers.get("Content-Disposition") - .split("filename=")[-1] - .strip('";') - ) - os.makedirs(zips_path, exist_ok=True) - with open(os.path.join(zips_path, file_name), "wb") as newfile: - newfile.write(response.content) - os.chdir(file_path) - return "downloaded" - else: - os.chdir(file_path) - return None - except Exception as e: - print(e) - os.chdir(file_path) - return None - elif "mediafire.com" in url: - download_link = get_mediafire_download_link(url) - if download_link: - os.chdir(zips_path) - wget.download(download_link) - else: - return None - elif "www.weights.gg" in url: 
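            # This branch has no official API to call: it scrapes the weights.gg
            # model page for a styled download button, so any change to the
            # site's markup or the CSS classes matched below will silently break it.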
- #Pls weights creator dont fix this because yes. c: - url_parts = url.split("/") - weights_gg_index = url_parts.index("www.weights.gg") - if weights_gg_index != -1 and weights_gg_index < len(url_parts) - 1: - model_part = "/".join(url_parts[weights_gg_index + 1:]) - if "models" in model_part: - model_part = model_part.split("models/")[-1] - print(model_part) - if model_part: - download_url = f"https://www.weights.gg/es/models/{model_part}" - response = requests.get(download_url) - if response.status_code == 200: - soup = BeautifulSoup(response.text, "html.parser") - button_link = soup.find("a", class_="bg-black text-white px-3 py-2 rounded-lg flex items-center gap-1") - if button_link: - download_link = button_link["href"] - result = download_from_url(download_link) - if result == "downloaded": - return "downloaded" - else: - return None - else: - return None - else: - return None - else: - return None - else: - return None - else: - return None - else: - os.chdir(zips_path) - wget.download(url) - - # Fix points in the zips - for currentPath, _, zipFiles in os.walk(zips_path): - for Files in zipFiles: - filePart = Files.split(".") - extensionFile = filePart[len(filePart) - 1] - filePart.pop() - nameFile = "_".join(filePart) - realPath = os.path.join(currentPath, Files) - os.rename(realPath, nameFile + "." + extensionFile) - - os.chdir(file_path) - print(i18n("Full download")) - return "downloaded" - else: - return None - -def extract_and_show_progress(zipfile_path, unzips_path): - try: - # Use shutil because zipfile not working - shutil.unpack_archive(zipfile_path, unzips_path) - return True - except Exception as e: - print(f"Error al descomprimir {zipfile_path}: {e}") - return False - - -@app.route('/download/', methods=['GET']) - -def load_downloaded_model(url): - parent_path = find_folder_parent(now_dir, "assets") - response = requests.get(url) - response.raise_for_status() - try: - zips_path = os.path.join(parent_path, "assets", "zips") - unzips_path = os.path.join(parent_path, "assets", "unzips") - weights_path = os.path.join(parent_path, "logs", "weights") - logs_dir = "" - - if os.path.exists(zips_path): - shutil.rmtree(zips_path) - if os.path.exists(unzips_path): - shutil.rmtree(unzips_path) - - os.mkdir(zips_path) - os.mkdir(unzips_path) - - download_file = download_from_url(url) - if not download_file: - print(i18n("The file could not be downloaded.")) - elif download_file == "downloaded": - print(i18n("It has been downloaded successfully.")) - elif download_file == "too much use": - raise Exception( - i18n("Too many users have recently viewed or downloaded this file") - ) - elif download_file == "private link": - raise Exception(i18n("Cannot get file from this private link")) - - for filename in os.listdir(zips_path): - if filename.endswith(".zip"): - zipfile_path = os.path.join(zips_path, filename) - print(i18n("Proceeding with the extraction...")) - model_name = os.path.basename(zipfile_path) - logs_dir = os.path.join( - parent_path, - "logs", - os.path.normpath(str(model_name).replace(".zip", "")), - ) - - success = extract_and_show_progress(zipfile_path, unzips_path) - if success: - print(f"Extracción exitosa: {model_name}") - else: - print(f"Fallo en la extracción: {model_name}") - else: - print(i18n("Unzip error.")) - return "" - - index_file = False - model_file = False - - for path, subdirs, files in os.walk(unzips_path): - for item in files: - item_path = os.path.join(path, item) - if not "G_" in item and not "D_" in item and item.endswith(".pth"): - model_file = True 
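                        # Heuristic: a .pth that is not a G_/D_ training
                        # checkpoint is taken to be the inference model itself;
                        # index and feature files are routed to the logs
                        # directory instead.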
- model_name = item.replace(".pth", "") - logs_dir = os.path.join(parent_path, "logs", model_name) - if os.path.exists(logs_dir): - shutil.rmtree(logs_dir) - os.mkdir(logs_dir) - if not os.path.exists(weights_path): - os.mkdir(weights_path) - if os.path.exists(os.path.join(weights_path, item)): - os.remove(os.path.join(weights_path, item)) - if os.path.exists(item_path): - shutil.move(item_path, weights_path) - - if not model_file and not os.path.exists(logs_dir): - os.mkdir(logs_dir) - for path, subdirs, files in os.walk(unzips_path): - for item in files: - item_path = os.path.join(path, item) - if item.startswith("added_") and item.endswith(".index"): - index_file = True - if os.path.exists(item_path): - if os.path.exists(os.path.join(logs_dir, item)): - os.remove(os.path.join(logs_dir, item)) - shutil.move(item_path, logs_dir) - if item.startswith("total_fea.npy") or item.startswith("events."): - if os.path.exists(item_path): - if os.path.exists(os.path.join(logs_dir, item)): - os.remove(os.path.join(logs_dir, item)) - shutil.move(item_path, logs_dir) - - result = "" - if model_file: - if index_file: - print(i18n("The model works for inference, and has the .index file.")) - else: - print( - i18n( - "The model works for inference, but it doesn't have the .index file." - ) - ) - - if not index_file and not model_file: - print(i18n("No relevant file was found to upload.")) - - os.chdir(parent_path) - return result - except Exception as e: - os.chdir(parent_path) - if "too much use" in str(e): - print(i18n("Too many users have recently viewed or downloaded this file")) - elif "private link" in str(e): - print(i18n("Cannot get file from this private link")) - else: - print(i18n("An error occurred downloading")) - print(e) - finally: - os.chdir(parent_path) - -@app.route('/shutdown', methods=['POST']) -def shoutdown(): - print("This flask server is shutting down please close the window") - pid = os.getpid() - os.kill(pid, signal.SIGTERM) - -if __name__ == '__main__': - app.run(host='localhost', port=8000) diff --git a/spaces/Lianjd/stock_dashboard/backtrader/brokers/oandabroker.py b/spaces/Lianjd/stock_dashboard/backtrader/brokers/oandabroker.py deleted file mode 100644 index 6f050507a887bab754fcbbf7aca7f41271b72736..0000000000000000000000000000000000000000 --- a/spaces/Lianjd/stock_dashboard/backtrader/brokers/oandabroker.py +++ /dev/null @@ -1,357 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8; py-indent-offset:4 -*- -############################################################################### -# -# Copyright (C) 2015-2020 Daniel Rodriguez -# -# This program is free software: you can redistribute it and/or modify -# it under the terms of the GNU General Public License as published by -# the Free Software Foundation, either version 3 of the License, or -# (at your option) any later version. -# -# This program is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the -# GNU General Public License for more details. -# -# You should have received a copy of the GNU General Public License -# along with this program. If not, see . 
-# -############################################################################### -from __future__ import (absolute_import, division, print_function, - unicode_literals) - -import collections -from copy import copy -from datetime import date, datetime, timedelta -import threading - -from backtrader.feed import DataBase -from backtrader import (TimeFrame, num2date, date2num, BrokerBase, - Order, BuyOrder, SellOrder, OrderBase, OrderData) -from backtrader.utils.py3 import bytes, with_metaclass, MAXFLOAT -from backtrader.metabase import MetaParams -from backtrader.comminfo import CommInfoBase -from backtrader.position import Position -from backtrader.stores import oandastore -from backtrader.utils import AutoDict, AutoOrderedDict -from backtrader.comminfo import CommInfoBase - - -class OandaCommInfo(CommInfoBase): - def getvaluesize(self, size, price): - # In real life the margin approaches the price - return abs(size) * price - - def getoperationcost(self, size, price): - '''Returns the needed amount of cash an operation would cost''' - # Same reasoning as above - return abs(size) * price - - -class MetaOandaBroker(BrokerBase.__class__): - def __init__(cls, name, bases, dct): - '''Class has already been created ... register''' - # Initialize the class - super(MetaOandaBroker, cls).__init__(name, bases, dct) - oandastore.OandaStore.BrokerCls = cls - - -class OandaBroker(with_metaclass(MetaOandaBroker, BrokerBase)): - '''Broker implementation for Oanda. - - This class maps the orders/positions from Oanda to the - internal API of ``backtrader``. - - Params: - - - ``use_positions`` (default:``True``): When connecting to the broker - provider use the existing positions to kickstart the broker. - - Set to ``False`` during instantiation to disregard any existing - position - ''' - params = ( - ('use_positions', True), - ('commission', OandaCommInfo(mult=1.0, stocklike=False)), - ) - - def __init__(self, **kwargs): - super(OandaBroker, self).__init__() - - self.o = oandastore.OandaStore(**kwargs) - - self.orders = collections.OrderedDict() # orders by order id - self.notifs = collections.deque() # holds orders which are notified - - self.opending = collections.defaultdict(list) # pending transmission - self.brackets = dict() # confirmed brackets - - self.startingcash = self.cash = 0.0 - self.startingvalue = self.value = 0.0 - self.positions = collections.defaultdict(Position) - - def start(self): - super(OandaBroker, self).start() - self.o.start(broker=self) - self.startingcash = self.cash = cash = self.o.get_cash() - self.startingvalue = self.value = self.o.get_value() - - if self.p.use_positions: - for p in self.o.get_positions(): - print('position for instrument:', p['instrument']) - is_sell = p['side'] == 'sell' - size = p['units'] - if is_sell: - size = -size - price = p['avgPrice'] - self.positions[p['instrument']] = Position(size, price) - - def data_started(self, data): - pos = self.getposition(data) - - if pos.size < 0: - order = SellOrder(data=data, - size=pos.size, price=pos.price, - exectype=Order.Market, - simulated=True) - - order.addcomminfo(self.getcommissioninfo(data)) - order.execute(0, pos.size, pos.price, - 0, 0.0, 0.0, - pos.size, 0.0, 0.0, - 0.0, 0.0, - pos.size, pos.price) - - order.completed() - self.notify(order) - - elif pos.size > 0: - order = BuyOrder(data=data, - size=pos.size, price=pos.price, - exectype=Order.Market, - simulated=True) - - order.addcomminfo(self.getcommissioninfo(data)) - order.execute(0, pos.size, pos.price, - 0, 0.0, 0.0, - pos.size, 0.0, 0.0, - 
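                          # remaining positional args: margin, pnl, psize, pprice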
0.0, 0.0, - pos.size, pos.price) - - order.completed() - self.notify(order) - - def stop(self): - super(OandaBroker, self).stop() - self.o.stop() - - def getcash(self): - # This call cannot block if no answer is available from oanda - self.cash = cash = self.o.get_cash() - return cash - - def getvalue(self, datas=None): - self.value = self.o.get_value() - return self.value - - def getposition(self, data, clone=True): - # return self.o.getposition(data._dataname, clone=clone) - pos = self.positions[data._dataname] - if clone: - pos = pos.clone() - - return pos - - def orderstatus(self, order): - o = self.orders[order.ref] - return o.status - - def _submit(self, oref): - order = self.orders[oref] - order.submit(self) - self.notify(order) - for o in self._bracketnotif(order): - o.submit(self) - self.notify(o) - - def _reject(self, oref): - order = self.orders[oref] - order.reject(self) - self.notify(order) - self._bracketize(order, cancel=True) - - def _accept(self, oref): - order = self.orders[oref] - order.accept() - self.notify(order) - for o in self._bracketnotif(order): - o.accept(self) - self.notify(o) - - def _cancel(self, oref): - order = self.orders[oref] - order.cancel() - self.notify(order) - self._bracketize(order, cancel=True) - - def _expire(self, oref): - order = self.orders[oref] - order.expire() - self.notify(order) - self._bracketize(order, cancel=True) - - def _bracketnotif(self, order): - pref = getattr(order.parent, 'ref', order.ref) # parent ref or self - br = self.brackets.get(pref, None) # to avoid recursion - return br[-2:] if br is not None else [] - - def _bracketize(self, order, cancel=False): - pref = getattr(order.parent, 'ref', order.ref) # parent ref or self - br = self.brackets.pop(pref, None) # to avoid recursion - if br is None: - return - - if not cancel: - if len(br) == 3: # all 3 orders in place, parent was filled - br = br[1:] # discard index 0, parent - for o in br: - o.activate() # simulate activate for children - self.brackets[pref] = br # not done - reinsert children - - elif len(br) == 2: # filling a children - oidx = br.index(order) # find index to filled (0 or 1) - self._cancel(br[1 - oidx].ref) # cancel remaining (1 - 0 -> 1) - else: - # Any cancellation cancel the others - for o in br: - if o.alive(): - self._cancel(o.ref) - - def _fill(self, oref, size, price, ttype, **kwargs): - order = self.orders[oref] - - if not order.alive(): # can be a bracket - pref = getattr(order.parent, 'ref', order.ref) - if pref not in self.brackets: - msg = ('Order fill received for {}, with price {} and size {} ' - 'but order is no longer alive and is not a bracket. ' - 'Unknown situation') - msg.format(order.ref, price, size) - self.put_notification(msg, order, price, size) - return - - # [main, stopside, takeside], neg idx to array are -3, -2, -1 - if ttype == 'STOP_LOSS_FILLED': - order = self.brackets[pref][-2] - elif ttype == 'TAKE_PROFIT_FILLED': - order = self.brackets[pref][-1] - else: - msg = ('Order fill received for {}, with price {} and size {} ' - 'but order is no longer alive and is a bracket. 
' - 'Unknown situation') - msg.format(order.ref, price, size) - self.put_notification(msg, order, price, size) - return - - data = order.data - pos = self.getposition(data, clone=False) - psize, pprice, opened, closed = pos.update(size, price) - - comminfo = self.getcommissioninfo(data) - - closedvalue = closedcomm = 0.0 - openedvalue = openedcomm = 0.0 - margin = pnl = 0.0 - - order.execute(data.datetime[0], size, price, - closed, closedvalue, closedcomm, - opened, openedvalue, openedcomm, - margin, pnl, - psize, pprice) - - if order.executed.remsize: - order.partial() - self.notify(order) - else: - order.completed() - self.notify(order) - self._bracketize(order) - - def _transmit(self, order): - oref = order.ref - pref = getattr(order.parent, 'ref', oref) # parent ref or self - - if order.transmit: - if oref != pref: # children order - # Put parent in orders dict, but add stopside and takeside - # to order creation. Return the takeside order, to have 3s - takeside = order # alias for clarity - parent, stopside = self.opending.pop(pref) - for o in parent, stopside, takeside: - self.orders[o.ref] = o # write them down - - self.brackets[pref] = [parent, stopside, takeside] - self.o.order_create(parent, stopside, takeside) - return takeside # parent was already returned - - else: # Parent order, which is not being transmitted - self.orders[order.ref] = order - return self.o.order_create(order) - - # Not transmitting - self.opending[pref].append(order) - return order - - def buy(self, owner, data, - size, price=None, plimit=None, - exectype=None, valid=None, tradeid=0, oco=None, - trailamount=None, trailpercent=None, - parent=None, transmit=True, - **kwargs): - - order = BuyOrder(owner=owner, data=data, - size=size, price=price, pricelimit=plimit, - exectype=exectype, valid=valid, tradeid=tradeid, - trailamount=trailamount, trailpercent=trailpercent, - parent=parent, transmit=transmit) - - order.addinfo(**kwargs) - order.addcomminfo(self.getcommissioninfo(data)) - return self._transmit(order) - - def sell(self, owner, data, - size, price=None, plimit=None, - exectype=None, valid=None, tradeid=0, oco=None, - trailamount=None, trailpercent=None, - parent=None, transmit=True, - **kwargs): - - order = SellOrder(owner=owner, data=data, - size=size, price=price, pricelimit=plimit, - exectype=exectype, valid=valid, tradeid=tradeid, - trailamount=trailamount, trailpercent=trailpercent, - parent=parent, transmit=transmit) - - order.addinfo(**kwargs) - order.addcomminfo(self.getcommissioninfo(data)) - return self._transmit(order) - - def cancel(self, order): - o = self.orders[order.ref] - if order.status == Order.Cancelled: # already cancelled - return - - return self.o.order_cancel(order) - - def notify(self, order): - self.notifs.append(order.clone()) - - def get_notification(self): - if not self.notifs: - return None - - return self.notifs.popleft() - - def next(self): - self.notifs.append(None) # mark notification boundary diff --git a/spaces/Lianjd/stock_dashboard/backtrader/indicators/momentum.py b/spaces/Lianjd/stock_dashboard/backtrader/indicators/momentum.py deleted file mode 100644 index 8ed440af1ea27a5b2cfbc9402129845bc86afb14..0000000000000000000000000000000000000000 --- a/spaces/Lianjd/stock_dashboard/backtrader/indicators/momentum.py +++ /dev/null @@ -1,126 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8; py-indent-offset:4 -*- -############################################################################### -# -# Copyright (C) 2015-2020 Daniel Rodriguez -# -# This program is free 
software: you can redistribute it and/or modify -# it under the terms of the GNU General Public License as published by -# the Free Software Foundation, either version 3 of the License, or -# (at your option) any later version. -# -# This program is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the -# GNU General Public License for more details. -# -# You should have received a copy of the GNU General Public License -# along with this program. If not, see . -# -############################################################################### -from __future__ import (absolute_import, division, print_function, - unicode_literals) - -from . import Indicator - - -class Momentum(Indicator): - ''' - Measures the change in price by calculating the difference between the - current price and the price from a given period ago - - - Formula: - - momentum = data - data_period - - See: - - http://en.wikipedia.org/wiki/Momentum_(technical_analysis) - ''' - lines = ('momentum',) - params = (('period', 12),) - plotinfo = dict(plothlines=[0.0]) - - def __init__(self): - self.l.momentum = self.data - self.data(-self.p.period) - super(Momentum, self).__init__() - - -class MomentumOscillator(Indicator): - ''' - Measures the ratio of change in prices over a period - - Formula: - - mosc = 100 * (data / data_period) - - See: - - http://ta.mql4.com/indicators/oscillators/momentum - ''' - alias = ('MomentumOsc',) - - # Named output lines - lines = ('momosc',) - - # Accepted parameters (and defaults) - - params = (('period', 12), - ('band', 100.0)) - - def _plotlabel(self): - plabels = [self.p.period] - return plabels - - def _plotinit(self): - self.plotinfo.plothlines = [self.p.band] - - def __init__(self): - self.l.momosc = 100.0 * (self.data / self.data(-self.p.period)) - super(MomentumOscillator, self).__init__() - - -class RateOfChange(Indicator): - ''' - Measures the ratio of change in prices over a period - - Formula: - - roc = (data - data_period) / data_period - - See: - - http://en.wikipedia.org/wiki/Momentum_(technical_analysis) - ''' - alias = ('ROC',) - - # Named output lines - lines = ('roc',) - - # Accepted parameters (and defaults) - - params = (('period', 12),) - - def __init__(self): - dperiod = self.data(-self.p.period) - self.l.roc = (self.data - dperiod) / dperiod - super(RateOfChange, self).__init__() - - -class RateOfChange100(Indicator): - ''' - Measures the ratio of change in prices over a period with base 100 - - This is for example how ROC is defined in stockcharts - - Formula: - - roc = 100 * (data - data_period) / data_period - - See: - - http://stockcharts.com/school/doku.php?id=chart_school:technical_indicators:rate_of_change_roc_and_momentum - - ''' - alias = ('ROC100',) - - # Named output lines - lines = ('roc100',) - - # Accepted parameters (and defaults) - params = (('period', 12),) - - def __init__(self): - self.l.roc100 = 100.0 * ROC(self.data, period=self.p.period) - super(RateOfChange100, self).__init__() diff --git a/spaces/Manjushri/MusicGen/audiocraft/modules/conv.py b/spaces/Manjushri/MusicGen/audiocraft/modules/conv.py deleted file mode 100644 index 972938ab84712eb06e1b10cea25444eee51d6637..0000000000000000000000000000000000000000 --- a/spaces/Manjushri/MusicGen/audiocraft/modules/conv.py +++ /dev/null @@ -1,245 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. 
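# The momentum indicators defined in the preceding module are normally consumed
# from inside a bt.Strategy. The following is a minimal, illustrative sketch
# (the strategy name and thresholds are assumptions, not taken from this repo):
#
#     import backtrader as bt
#
#     class MomentumCross(bt.Strategy):
#         def __init__(self):
#             # close - close[-12]
#             self.mom = bt.indicators.Momentum(self.data.close, period=12)
#             # 100 * close / close[-12], read against the 100.0 band
#             self.mosc = bt.indicators.MomentumOscillator(self.data.close)
#
#         def next(self):
#             if self.mom[0] > 0 and self.mosc[0] > 100.0:
#                 self.buy()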
-# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import math -import typing as tp -import warnings - -import torch -from torch import nn -from torch.nn import functional as F -from torch.nn.utils import spectral_norm, weight_norm - - -CONV_NORMALIZATIONS = frozenset(['none', 'weight_norm', 'spectral_norm', - 'time_group_norm']) - - -def apply_parametrization_norm(module: nn.Module, norm: str = 'none'): - assert norm in CONV_NORMALIZATIONS - if norm == 'weight_norm': - return weight_norm(module) - elif norm == 'spectral_norm': - return spectral_norm(module) - else: - # We already check was in CONV_NORMALIZATION, so any other choice - # doesn't need reparametrization. - return module - - -def get_norm_module(module: nn.Module, causal: bool = False, norm: str = 'none', **norm_kwargs): - """Return the proper normalization module. If causal is True, this will ensure the returned - module is causal, or return an error if the normalization doesn't support causal evaluation. - """ - assert norm in CONV_NORMALIZATIONS - if norm == 'time_group_norm': - if causal: - raise ValueError("GroupNorm doesn't support causal evaluation.") - assert isinstance(module, nn.modules.conv._ConvNd) - return nn.GroupNorm(1, module.out_channels, **norm_kwargs) - else: - return nn.Identity() - - -def get_extra_padding_for_conv1d(x: torch.Tensor, kernel_size: int, stride: int, - padding_total: int = 0) -> int: - """See `pad_for_conv1d`. - """ - length = x.shape[-1] - n_frames = (length - kernel_size + padding_total) / stride + 1 - ideal_length = (math.ceil(n_frames) - 1) * stride + (kernel_size - padding_total) - return ideal_length - length - - -def pad_for_conv1d(x: torch.Tensor, kernel_size: int, stride: int, padding_total: int = 0): - """Pad for a convolution to make sure that the last window is full. - Extra padding is added at the end. This is required to ensure that we can rebuild - an output of the same length, as otherwise, even with padding, some time steps - might get removed. - For instance, with total padding = 4, kernel size = 4, stride = 2: - 0 0 1 2 3 4 5 0 0 # (0s are padding) - 1 2 3 # (output frames of a convolution, last 0 is never used) - 0 0 1 2 3 4 5 0 # (output of tr. conv., but pos. 5 is going to get removed as padding) - 1 2 3 4 # once you removed padding, we are missing one time step ! - """ - extra_padding = get_extra_padding_for_conv1d(x, kernel_size, stride, padding_total) - return F.pad(x, (0, extra_padding)) - - -def pad1d(x: torch.Tensor, paddings: tp.Tuple[int, int], mode: str = 'constant', value: float = 0.): - """Tiny wrapper around F.pad, just to allow for reflect padding on small input. - If this is the case, we insert extra 0 padding to the right before the reflection happen. - """ - length = x.shape[-1] - padding_left, padding_right = paddings - assert padding_left >= 0 and padding_right >= 0, (padding_left, padding_right) - if mode == 'reflect': - max_pad = max(padding_left, padding_right) - extra_pad = 0 - if length <= max_pad: - extra_pad = max_pad - length + 1 - x = F.pad(x, (0, extra_pad)) - padded = F.pad(x, paddings, mode, value) - end = padded.shape[-1] - extra_pad - return padded[..., :end] - else: - return F.pad(x, paddings, mode, value) - - -def unpad1d(x: torch.Tensor, paddings: tp.Tuple[int, int]): - """Remove padding from x, handling properly zero padding. Only for 1d! 
- """ - padding_left, padding_right = paddings - assert padding_left >= 0 and padding_right >= 0, (padding_left, padding_right) - assert (padding_left + padding_right) <= x.shape[-1] - end = x.shape[-1] - padding_right - return x[..., padding_left: end] - - -class NormConv1d(nn.Module): - """Wrapper around Conv1d and normalization applied to this conv - to provide a uniform interface across normalization approaches. - """ - def __init__(self, *args, causal: bool = False, norm: str = 'none', - norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs): - super().__init__() - self.conv = apply_parametrization_norm(nn.Conv1d(*args, **kwargs), norm) - self.norm = get_norm_module(self.conv, causal, norm, **norm_kwargs) - self.norm_type = norm - - def forward(self, x): - x = self.conv(x) - x = self.norm(x) - return x - - -class NormConv2d(nn.Module): - """Wrapper around Conv2d and normalization applied to this conv - to provide a uniform interface across normalization approaches. - """ - def __init__(self, *args, norm: str = 'none', norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs): - super().__init__() - self.conv = apply_parametrization_norm(nn.Conv2d(*args, **kwargs), norm) - self.norm = get_norm_module(self.conv, causal=False, norm=norm, **norm_kwargs) - self.norm_type = norm - - def forward(self, x): - x = self.conv(x) - x = self.norm(x) - return x - - -class NormConvTranspose1d(nn.Module): - """Wrapper around ConvTranspose1d and normalization applied to this conv - to provide a uniform interface across normalization approaches. - """ - def __init__(self, *args, causal: bool = False, norm: str = 'none', - norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs): - super().__init__() - self.convtr = apply_parametrization_norm(nn.ConvTranspose1d(*args, **kwargs), norm) - self.norm = get_norm_module(self.convtr, causal, norm, **norm_kwargs) - self.norm_type = norm - - def forward(self, x): - x = self.convtr(x) - x = self.norm(x) - return x - - -class NormConvTranspose2d(nn.Module): - """Wrapper around ConvTranspose2d and normalization applied to this conv - to provide a uniform interface across normalization approaches. - """ - def __init__(self, *args, norm: str = 'none', norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs): - super().__init__() - self.convtr = apply_parametrization_norm(nn.ConvTranspose2d(*args, **kwargs), norm) - self.norm = get_norm_module(self.convtr, causal=False, norm=norm, **norm_kwargs) - - def forward(self, x): - x = self.convtr(x) - x = self.norm(x) - return x - - -class StreamableConv1d(nn.Module): - """Conv1d with some builtin handling of asymmetric or causal padding - and normalization. 
- """ - def __init__(self, in_channels: int, out_channels: int, - kernel_size: int, stride: int = 1, dilation: int = 1, - groups: int = 1, bias: bool = True, causal: bool = False, - norm: str = 'none', norm_kwargs: tp.Dict[str, tp.Any] = {}, - pad_mode: str = 'reflect'): - super().__init__() - # warn user on unusual setup between dilation and stride - if stride > 1 and dilation > 1: - warnings.warn('StreamableConv1d has been initialized with stride > 1 and dilation > 1' - f' (kernel_size={kernel_size} stride={stride}, dilation={dilation}).') - self.conv = NormConv1d(in_channels, out_channels, kernel_size, stride, - dilation=dilation, groups=groups, bias=bias, causal=causal, - norm=norm, norm_kwargs=norm_kwargs) - self.causal = causal - self.pad_mode = pad_mode - - def forward(self, x): - B, C, T = x.shape - kernel_size = self.conv.conv.kernel_size[0] - stride = self.conv.conv.stride[0] - dilation = self.conv.conv.dilation[0] - kernel_size = (kernel_size - 1) * dilation + 1 # effective kernel size with dilations - padding_total = kernel_size - stride - extra_padding = get_extra_padding_for_conv1d(x, kernel_size, stride, padding_total) - if self.causal: - # Left padding for causal - x = pad1d(x, (padding_total, extra_padding), mode=self.pad_mode) - else: - # Asymmetric padding required for odd strides - padding_right = padding_total // 2 - padding_left = padding_total - padding_right - x = pad1d(x, (padding_left, padding_right + extra_padding), mode=self.pad_mode) - return self.conv(x) - - -class StreamableConvTranspose1d(nn.Module): - """ConvTranspose1d with some builtin handling of asymmetric or causal padding - and normalization. - """ - def __init__(self, in_channels: int, out_channels: int, - kernel_size: int, stride: int = 1, causal: bool = False, - norm: str = 'none', trim_right_ratio: float = 1., - norm_kwargs: tp.Dict[str, tp.Any] = {}): - super().__init__() - self.convtr = NormConvTranspose1d(in_channels, out_channels, kernel_size, stride, - causal=causal, norm=norm, norm_kwargs=norm_kwargs) - self.causal = causal - self.trim_right_ratio = trim_right_ratio - assert self.causal or self.trim_right_ratio == 1., \ - "`trim_right_ratio` != 1.0 only makes sense for causal convolutions" - assert self.trim_right_ratio >= 0. and self.trim_right_ratio <= 1. - - def forward(self, x): - kernel_size = self.convtr.convtr.kernel_size[0] - stride = self.convtr.convtr.stride[0] - padding_total = kernel_size - stride - - y = self.convtr(x) - - # We will only trim fixed padding. Extra padding from `pad_for_conv1d` would be - # removed at the very end, when keeping only the right length for the output, - # as removing it here would require also passing the length at the matching layer - # in the encoder. 
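        # Concrete illustration (values assumed for exposition): kernel_size=4,
        # stride=2 gives padding_total=2. In the causal branch with
        # trim_right_ratio=1.0 both padding samples are trimmed from the right;
        # the non-causal branch splits them 1 left / 1 right, mirroring the
        # asymmetric padding applied in StreamableConv1d.forward.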
- if self.causal: - # Trim the padding on the right according to the specified ratio - # if trim_right_ratio = 1.0, trim everything from right - padding_right = math.ceil(padding_total * self.trim_right_ratio) - padding_left = padding_total - padding_right - y = unpad1d(y, (padding_left, padding_right)) - else: - # Asymmetric padding required for odd strides - padding_right = padding_total // 2 - padding_left = padding_total - padding_right - y = unpad1d(y, (padding_left, padding_right)) - return y diff --git a/spaces/Marshalls/testmtd/analysis/aistplusplus_api/demos/run_vis.py b/spaces/Marshalls/testmtd/analysis/aistplusplus_api/demos/run_vis.py deleted file mode 100644 index 55b824fb520d1d5923890d67239b1d4c5ae99119..0000000000000000000000000000000000000000 --- a/spaces/Marshalls/testmtd/analysis/aistplusplus_api/demos/run_vis.py +++ /dev/null @@ -1,97 +0,0 @@ -# coding=utf-8 -# Copyright 2020 The Google AI Perception Team Authors. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -"""Test code for running visualizer.""" -import os - -from absl import app -from absl import flags -from aist_plusplus.loader import AISTDataset -from aist_plusplus.visualizer import plot_on_video -from smplx import SMPL -import torch - -FLAGS = flags.FLAGS -flags.DEFINE_string( - 'anno_dir', - '/usr/local/google/home/ruilongli/data/public/aist_plusplus_final/', - 'input local dictionary for AIST++ annotations.') -flags.DEFINE_string( - 'video_dir', - '/usr/local/google/home/ruilongli/data/AIST_plusplus/refined_10M_all_video/', - 'input local dictionary for AIST Dance Videos.') -flags.DEFINE_string( - 'smpl_dir', - '/usr/local/google/home/ruilongli/data/SMPL/', - 'input local dictionary that stores SMPL data.') -flags.DEFINE_string( - 'video_name', - 'gWA_sFM_c01_d27_mWA2_ch21', - 'input video name to be visualized.') -flags.DEFINE_string( - 'save_dir', - '/usr/local/google/home/ruilongli/data/public/aist_plusplus_final/tmp/', - 'output local dictionary that stores AIST++ visualization.') -flags.DEFINE_enum( - 'mode', '2D', ['2D', '3D', 'SMPL'], - 'visualize 3D or 2D keypoints, or SMPL joints on image plane.') - - -def main(_): - # Parsing data info. - aist_dataset = AISTDataset(FLAGS.anno_dir) - video_path = os.path.join(FLAGS.video_dir, f'{FLAGS.video_name}.mp4') - seq_name, view = AISTDataset.get_seq_name(FLAGS.video_name) - view_idx = AISTDataset.VIEWS.index(view) - - # Parsing keypoints. - if FLAGS.mode == '2D': # raw keypoints detection results. - keypoints2d, _, _ = AISTDataset.load_keypoint2d( - aist_dataset.keypoint2d_dir, seq_name) - keypoints2d = keypoints2d[view_idx, :, :, 0:2] - - elif FLAGS.mode == '3D': # 3D keypoints with temporal optimization. 
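    # The optimized 3D keypoints are in world coordinates, so they must be
    # projected back through the calibrated camera group onto the selected view
    # before they can be overlaid on the 2D video frames.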
- keypoints3d = AISTDataset.load_keypoint3d( - aist_dataset.keypoint3d_dir, seq_name, use_optim=True) - nframes, njoints, _ = keypoints3d.shape - env_name = aist_dataset.mapping_seq2env[seq_name] - cgroup = AISTDataset.load_camera_group(aist_dataset.camera_dir, env_name) - keypoints2d = cgroup.project(keypoints3d) - keypoints2d = keypoints2d.reshape(9, nframes, njoints, 2)[view_idx] - - elif FLAGS.mode == 'SMPL': # SMPL joints - smpl_poses, smpl_scaling, smpl_trans = AISTDataset.load_motion( - aist_dataset.motion_dir, seq_name) - smpl = SMPL(model_path=FLAGS.smpl_dir, gender='MALE', batch_size=1) - keypoints3d = smpl.forward( - global_orient=torch.from_numpy(smpl_poses[:, 0:1]).float(), - body_pose=torch.from_numpy(smpl_poses[:, 1:]).float(), - transl=torch.from_numpy(smpl_trans).float(), - scaling=torch.from_numpy(smpl_scaling.reshape(1, 1)).float(), - ).joints.detach().numpy() - - nframes, njoints, _ = keypoints3d.shape - env_name = aist_dataset.mapping_seq2env[seq_name] - cgroup = AISTDataset.load_camera_group(aist_dataset.camera_dir, env_name) - keypoints2d = cgroup.project(keypoints3d) - keypoints2d = keypoints2d.reshape(9, nframes, njoints, 2)[view_idx] - - # Visualize. - os.makedirs(FLAGS.save_dir, exist_ok=True) - save_path = os.path.join(FLAGS.save_dir, f'{FLAGS.video_name}.mp4') - plot_on_video(keypoints2d, video_path, save_path, fps=60) - - -if __name__ == '__main__': - app.run(main) diff --git a/spaces/Martlgap/LiveFaceID/README.md b/spaces/Martlgap/LiveFaceID/README.md deleted file mode 100644 index 6de87596d67944a0ad909ae7bf93951c8a640213..0000000000000000000000000000000000000000 --- a/spaces/Martlgap/LiveFaceID/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: LiveFaceID -emoji: 🐢 -colorFrom: gray -colorTo: pink -sdk: streamlit -sdk_version: 1.19.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/MisterZee/PIFu-Clothed-Human-Digitization/PIFu/lib/model/BasePIFuNet.py b/spaces/MisterZee/PIFu-Clothed-Human-Digitization/PIFu/lib/model/BasePIFuNet.py deleted file mode 100644 index cb8423ea7120b09d0627bab40a90bf8ce7d13e14..0000000000000000000000000000000000000000 --- a/spaces/MisterZee/PIFu-Clothed-Human-Digitization/PIFu/lib/model/BasePIFuNet.py +++ /dev/null @@ -1,76 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - -from ..geometry import index, orthogonal, perspective - -class BasePIFuNet(nn.Module): - def __init__(self, - projection_mode='orthogonal', - error_term=nn.MSELoss(), - ): - """ - :param projection_mode: - Either orthogonal or perspective. - It will call the corresponding function for projection. 
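- Both functions take world-space points [B, 3, N] and calibration matrices - [B, 3, 4] and map the points into image space for feature sampling.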
- :param error_term: - nn Loss between the predicted [B, Res, N] and the label [B, Res, N] - """ - super(BasePIFuNet, self).__init__() - self.name = 'base' - - self.error_term = error_term - - self.index = index - self.projection = orthogonal if projection_mode == 'orthogonal' else perspective - - self.preds = None - self.labels = None - - def forward(self, points, images, calibs, transforms=None): - ''' - :param points: [B, 3, N] world space coordinates of points - :param images: [B, C, H, W] input images - :param calibs: [B, 3, 4] calibration matrices for each image - :param transforms: Optional [B, 2, 3] image space coordinate transforms - :return: [B, Res, N] predictions for each point - ''' - self.filter(images) - self.query(points, calibs, transforms) - return self.get_preds() - - def filter(self, images): - ''' - Filter the input images - store all intermediate features. - :param images: [B, C, H, W] input images - ''' - None - - def query(self, points, calibs, transforms=None, labels=None): - ''' - Given 3D points, query the network predictions for each point. - Image features should be pre-computed before this call. - store all intermediate features. - query() function may behave differently during training/testing. - :param points: [B, 3, N] world space coordinates of points - :param calibs: [B, 3, 4] calibration matrices for each image - :param transforms: Optional [B, 2, 3] image space coordinate transforms - :param labels: Optional [B, Res, N] gt labeling - :return: [B, Res, N] predictions for each point - ''' - None - - def get_preds(self): - ''' - Get the predictions from the last query - :return: [B, Res, N] network prediction for the last query - ''' - return self.preds - - def get_error(self): - ''' - Get the network loss from the last query - :return: loss term - ''' - return self.error_term(self.preds, self.labels) diff --git a/spaces/Mountchicken/MAERec-Gradio/mmocr/structures/textrecog_data_sample.py b/spaces/Mountchicken/MAERec-Gradio/mmocr/structures/textrecog_data_sample.py deleted file mode 100644 index f40572b0282dd82d1bc67734dcfe52c0073fe5d4..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/mmocr/structures/textrecog_data_sample.py +++ /dev/null @@ -1,82 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmengine.structures import BaseDataElement, LabelData - - -class TextRecogDataSample(BaseDataElement): - """A data structure interface of MMOCR for text recognition. They are used - as interfaces between different components. - - The attributes in ``TextRecogDataSample`` are divided into two parts: - - - ``gt_text``(LabelData): Ground truth text. - - ``pred_text``(LabelData): predictions text. - - Examples: - >>> import torch - >>> import numpy as np - >>> from mmengine.structures import LabelData - >>> from mmocr.data import TextRecogDataSample - >>> # gt_text - >>> data_sample = TextRecogDataSample() - >>> img_meta = dict(img_shape=(800, 1196, 3), - ... 
pad_shape=(800, 1216, 3)) - >>> gt_text = LabelData(metainfo=img_meta) - >>> gt_text.item = 'mmocr' - >>> data_sample.gt_text = gt_text - >>> assert 'img_shape' in data_sample.gt_text.metainfo_keys() - >>> print(data_sample) - <TextRecogDataSample( - ... - ) at 0x7f21fb1b9880> - >>> # pred_text - >>> pred_text = LabelData(metainfo=img_meta) - >>> pred_text.item = 'mmocr' - >>> data_sample = TextRecogDataSample(pred_text=pred_text) - >>> assert 'pred_text' in data_sample - >>> data_sample = TextRecogDataSample() - >>> gt_text_data = dict(item='mmocr') - >>> gt_text = LabelData(**gt_text_data) - >>> data_sample.gt_text = gt_text - >>> assert 'gt_text' in data_sample - >>> assert 'item' in data_sample.gt_text - """ - - @property - def gt_text(self) -> LabelData: - """LabelData: ground truth text. - """ - return self._gt_text - - @gt_text.setter - def gt_text(self, value: LabelData) -> None: - """gt_text setter.""" - self.set_field(value, '_gt_text', dtype=LabelData) - - @gt_text.deleter - def gt_text(self) -> None: - """gt_text deleter.""" - del self._gt_text - - @property - def pred_text(self) -> LabelData: - """LabelData: prediction text. - """ - return self._pred_text - - @pred_text.setter - def pred_text(self, value: LabelData) -> None: - """pred_text setter.""" - self.set_field(value, '_pred_text', dtype=LabelData) - - @pred_text.deleter - def pred_text(self) -> None: - """pred_text deleter.""" - del self._pred_text diff --git a/spaces/Nee001/bing0/src/lib/hooks/use-copy-to-clipboard.tsx b/spaces/Nee001/bing0/src/lib/hooks/use-copy-to-clipboard.tsx deleted file mode 100644 index 62f7156dca246c46b213151af003a3a177977ccf..0000000000000000000000000000000000000000 --- a/spaces/Nee001/bing0/src/lib/hooks/use-copy-to-clipboard.tsx +++ /dev/null @@ -1,33 +0,0 @@ -'use client' - -import * as React from 'react' - -export interface useCopyToClipboardProps { - timeout?: number -} - -export function useCopyToClipboard({ - timeout = 2000 -}: useCopyToClipboardProps) { - const [isCopied, setIsCopied] = React.useState(false) - - const copyToClipboard = (value: string) => { - if (typeof window === 'undefined' || !navigator.clipboard?.writeText) { - return - } - - if (!value) { - return - } - - navigator.clipboard.writeText(value).then(() => { - setIsCopied(true) - - setTimeout(() => { - setIsCopied(false) - }, timeout) - }) - } - - return { isCopied, copyToClipboard } -} diff --git a/spaces/NoorAzam/model4/app.py b/spaces/NoorAzam/model4/app.py deleted file mode 100644 index 93416c9454473157cb6838da3f4771fb04d89c81..0000000000000000000000000000000000000000 --- a/spaces/NoorAzam/model4/app.py +++ /dev/null @@ -1,8 +0,0 @@ -import gradio as gr - -def greet(name): - return "Hello " + name + "!" - -demo = gr.Interface(fn=greet, inputs="text", outputs="text") - -demo.launch() \ No newline at end of file diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/textless_nlp/gslm/unit2speech/glow.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/textless_nlp/gslm/unit2speech/glow.py deleted file mode 100644 index 7a7696403d505afdf0f1606f8220801b0f46152f..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/textless_nlp/gslm/unit2speech/glow.py +++ /dev/null @@ -1,311 +0,0 @@ -# ***************************************************************************** -# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.
-# -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions are met: -# * Redistributions of source code must retain the above copyright -# notice, this list of conditions and the following disclaimer. -# * Redistributions in binary form must reproduce the above copyright -# notice, this list of conditions and the following disclaimer in the -# documentation and/or other materials provided with the distribution. -# * Neither the name of the NVIDIA CORPORATION nor the -# names of its contributors may be used to endorse or promote products -# derived from this software without specific prior written permission. -# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND -# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED -# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE -# DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY -# DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES -# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; -# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND -# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS -# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -# -# ***************************************************************************** -import copy -import torch -from torch.autograd import Variable -import torch.nn.functional as F - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a+input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -class WaveGlowLoss(torch.nn.Module): - def __init__(self, sigma=1.0): - super(WaveGlowLoss, self).__init__() - self.sigma = sigma - - def forward(self, model_output): - z, log_s_list, log_det_W_list = model_output - for i, log_s in enumerate(log_s_list): - if i == 0: - log_s_total = torch.sum(log_s) - log_det_W_total = log_det_W_list[i] - else: - log_s_total = log_s_total + torch.sum(log_s) - log_det_W_total += log_det_W_list[i] - - loss = torch.sum(z*z)/(2*self.sigma*self.sigma) - log_s_total - log_det_W_total - return loss/(z.size(0)*z.size(1)*z.size(2)) - - -class Invertible1x1Conv(torch.nn.Module): - """ - The layer outputs both the convolution, and the log determinant - of its weight matrix. 
If reverse=True it does convolution with - inverse - """ - def __init__(self, c): - super(Invertible1x1Conv, self).__init__() - self.conv = torch.nn.Conv1d(c, c, kernel_size=1, stride=1, padding=0, - bias=False) - - # Sample a random orthonormal matrix to initialize weights - W = torch.qr(torch.FloatTensor(c, c).normal_())[0] - - # Ensure determinant is 1.0 not -1.0 - if torch.det(W) < 0: - W[:,0] = -1*W[:,0] - W = W.view(c, c, 1) - self.conv.weight.data = W - - def forward(self, z, reverse=False): - # shape - batch_size, group_size, n_of_groups = z.size() - - W = self.conv.weight.squeeze() - - if reverse: - if not hasattr(self, 'W_inverse'): - # Reverse computation - W_inverse = W.float().inverse() - W_inverse = Variable(W_inverse[..., None]) - if z.type() == 'torch.cuda.HalfTensor': - W_inverse = W_inverse.half() - self.W_inverse = W_inverse - z = F.conv1d(z, self.W_inverse, bias=None, stride=1, padding=0) - return z - else: - # Forward computation - log_det_W = batch_size * n_of_groups * torch.logdet(W) - z = self.conv(z) - return z, log_det_W - - -class WN(torch.nn.Module): - """ - This is the WaveNet like layer for the affine coupling. The primary difference - from WaveNet is the convolutions need not be causal. There is also no dilation - size reset. The dilation only doubles on each layer - """ - def __init__(self, n_in_channels, n_mel_channels, n_layers, n_channels, - kernel_size): - super(WN, self).__init__() - assert(kernel_size % 2 == 1) - assert(n_channels % 2 == 0) - self.n_layers = n_layers - self.n_channels = n_channels - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - - start = torch.nn.Conv1d(n_in_channels, n_channels, 1) - start = torch.nn.utils.weight_norm(start, name='weight') - self.start = start - - # Initializing last layer to 0 makes the affine coupling layers - # do nothing at first. 
This helps with training stability - end = torch.nn.Conv1d(n_channels, 2*n_in_channels, 1) - end.weight.data.zero_() - end.bias.data.zero_() - self.end = end - - cond_layer = torch.nn.Conv1d(n_mel_channels, 2*n_channels*n_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight') - - for i in range(n_layers): - dilation = 2 ** i - padding = int((kernel_size*dilation - dilation)/2) - in_layer = torch.nn.Conv1d(n_channels, 2*n_channels, kernel_size, - dilation=dilation, padding=padding) - in_layer = torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2*n_channels - else: - res_skip_channels = n_channels - res_skip_layer = torch.nn.Conv1d(n_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, forward_input): - audio, spect = forward_input - audio = self.start(audio) - output = torch.zeros_like(audio) - n_channels_tensor = torch.IntTensor([self.n_channels]) - - spect = self.cond_layer(spect) - - for i in range(self.n_layers): - spect_offset = i*2*self.n_channels - acts = fused_add_tanh_sigmoid_multiply( - self.in_layers[i](audio), - spect[:,spect_offset:spect_offset+2*self.n_channels,:], - n_channels_tensor) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - audio = audio + res_skip_acts[:,:self.n_channels,:] - output = output + res_skip_acts[:,self.n_channels:,:] - else: - output = output + res_skip_acts - - return self.end(output) - - -class WaveGlow(torch.nn.Module): - def __init__(self, n_mel_channels, n_flows, n_group, n_early_every, - n_early_size, WN_config): - super(WaveGlow, self).__init__() - - self.upsample = torch.nn.ConvTranspose1d(n_mel_channels, - n_mel_channels, - 1024, stride=256) - assert(n_group % 2 == 0) - self.n_flows = n_flows - self.n_group = n_group - self.n_early_every = n_early_every - self.n_early_size = n_early_size - self.WN = torch.nn.ModuleList() - self.convinv = torch.nn.ModuleList() - - n_half = int(n_group/2) - - # Set up layers with the right sizes based on how many dimensions - # have been output already - n_remaining_channels = n_group - for k in range(n_flows): - if k % self.n_early_every == 0 and k > 0: - n_half = n_half - int(self.n_early_size/2) - n_remaining_channels = n_remaining_channels - self.n_early_size - self.convinv.append(Invertible1x1Conv(n_remaining_channels)) - self.WN.append(WN(n_half, n_mel_channels*n_group, **WN_config)) - self.n_remaining_channels = n_remaining_channels # Useful during inference - - def forward(self, forward_input): - """ - forward_input[0] = mel_spectrogram: batch x n_mel_channels x frames - forward_input[1] = audio: batch x time - """ - spect, audio = forward_input - - # Upsample spectrogram to size of audio - spect = self.upsample(spect) - assert(spect.size(2) >= audio.size(1)) - if spect.size(2) > audio.size(1): - spect = spect[:, :, :audio.size(1)] - - spect = spect.unfold(2, self.n_group, self.n_group).permute(0, 2, 1, 3) - spect = spect.contiguous().view(spect.size(0), spect.size(1), -1).permute(0, 2, 1) - - audio = audio.unfold(1, self.n_group, self.n_group).permute(0, 2, 1) - output_audio = [] - log_s_list = [] - log_det_W_list = [] - - for k in range(self.n_flows): - if k % self.n_early_every == 0 and k > 0: - output_audio.append(audio[:,:self.n_early_size,:]) - audio = audio[:,self.n_early_size:,:] - - audio, log_det_W = 
self.convinv[k](audio) - log_det_W_list.append(log_det_W) - - n_half = int(audio.size(1)/2) - audio_0 = audio[:,:n_half,:] - audio_1 = audio[:,n_half:,:] - - output = self.WN[k]((audio_0, spect)) - log_s = output[:, n_half:, :] - b = output[:, :n_half, :] - audio_1 = torch.exp(log_s)*audio_1 + b - log_s_list.append(log_s) - - audio = torch.cat([audio_0, audio_1],1) - - output_audio.append(audio) - return torch.cat(output_audio,1), log_s_list, log_det_W_list - - def infer(self, spect, sigma=1.0): - spect = self.upsample(spect) - # trim conv artifacts. maybe pad spec to kernel multiple - time_cutoff = self.upsample.kernel_size[0] - self.upsample.stride[0] - spect = spect[:, :, :-time_cutoff] - - spect = spect.unfold(2, self.n_group, self.n_group).permute(0, 2, 1, 3) - spect = spect.contiguous().view(spect.size(0), spect.size(1), -1).permute(0, 2, 1) - - if spect.type() == 'torch.cuda.HalfTensor': - audio = torch.cuda.HalfTensor(spect.size(0), - self.n_remaining_channels, - spect.size(2)).normal_() - else: - audio = torch.cuda.FloatTensor(spect.size(0), - self.n_remaining_channels, - spect.size(2)).normal_() - - audio = torch.autograd.Variable(sigma*audio) - - for k in reversed(range(self.n_flows)): - n_half = int(audio.size(1)/2) - audio_0 = audio[:,:n_half,:] - audio_1 = audio[:,n_half:,:] - - output = self.WN[k]((audio_0, spect)) - - s = output[:, n_half:, :] - b = output[:, :n_half, :] - audio_1 = (audio_1 - b)/torch.exp(s) - audio = torch.cat([audio_0, audio_1],1) - - audio = self.convinv[k](audio, reverse=True) - - if k % self.n_early_every == 0 and k > 0: - if spect.type() == 'torch.cuda.HalfTensor': - z = torch.cuda.HalfTensor(spect.size(0), self.n_early_size, spect.size(2)).normal_() - else: - z = torch.cuda.FloatTensor(spect.size(0), self.n_early_size, spect.size(2)).normal_() - audio = torch.cat((sigma*z, audio),1) - - audio = audio.permute(0,2,1).contiguous().view(audio.size(0), -1).data - return audio - - @staticmethod - def remove_weightnorm(model): - waveglow = model - for WN in waveglow.WN: - WN.start = torch.nn.utils.remove_weight_norm(WN.start) - WN.in_layers = remove(WN.in_layers) - WN.cond_layer = torch.nn.utils.remove_weight_norm(WN.cond_layer) - WN.res_skip_layers = remove(WN.res_skip_layers) - return waveglow - - -def remove(conv_list): - new_conv_list = torch.nn.ModuleList() - for old_conv in conv_list: - old_conv = torch.nn.utils.remove_weight_norm(old_conv) - new_conv_list.append(old_conv) - return new_conv_list diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/benchmark/dummy_model.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/benchmark/dummy_model.py deleted file mode 100644 index ff26e4fe655d8e8d7f9942c4bd3df7cd267405fb..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/benchmark/dummy_model.py +++ /dev/null @@ -1,96 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
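-# Note: this is a synthetic benchmark model. The transformer-like blocks below replace self-attention with -# plain linear projections (see the "skip self-attention" layer), so it exercises the training loop and dense -# matmul throughput rather than performing meaningful sequence modeling.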
- -import torch.nn as nn -import torch.nn.functional as F -from fairseq.data import Dictionary -from fairseq.models import ( - FairseqDecoder, - FairseqLanguageModel, - register_model, - register_model_architecture, -) - - -@register_model("dummy_model") -class DummyModel(FairseqLanguageModel): - def __init__(self, args, encoder): - super().__init__(encoder) - self.args = args - - @staticmethod - def add_args(parser): - parser.add_argument("--num-layers", type=int, default=24) - parser.add_argument("--embed-dim", type=int, default=1024) - - @classmethod - def build_model(cls, args, task): - encoder = DummyEncoder( - num_embed=len(task.target_dictionary), - embed_dim=args.embed_dim, - num_layers=args.num_layers, - ) - return cls(args, encoder) - - def forward(self, src_tokens, masked_tokens=None, **kwargs): - return self.decoder(src_tokens, masked_tokens=masked_tokens) - - -class DummyEncoder(FairseqDecoder): - def __init__(self, num_embed=50000, embed_dim=1024, num_layers=24): - super().__init__(Dictionary()) - self.embed = nn.Embedding( - num_embeddings=num_embed, embedding_dim=embed_dim, padding_idx=0 - ) - self.layers_a = nn.ModuleList( - [ - nn.Sequential( - nn.LayerNorm(embed_dim), - nn.Linear(embed_dim, 3 * embed_dim), # q, k, v input projection - nn.Linear(3 * embed_dim, embed_dim), # skip self-attention - nn.Linear(embed_dim, embed_dim), # output projection - nn.Dropout(), - ) - for i in range(num_layers) - ] - ) - self.layers_b = nn.ModuleList( - [ - nn.Sequential( - nn.LayerNorm(embed_dim), - nn.Linear(embed_dim, 4 * embed_dim), # FFN - nn.ReLU(), - nn.Linear(4 * embed_dim, embed_dim), # FFN - nn.Dropout(0.1), - ) - for i in range(num_layers) - ] - ) - self.out_proj = nn.Linear(embed_dim, num_embed) - - def forward(self, tokens, masked_tokens=None): - x = self.embed(tokens) - for layer_a, layer_b in zip(self.layers_a, self.layers_b): - x = x + layer_a(x) - x = x + layer_b(x) - x = self.out_proj(x) - if masked_tokens is not None: - x = x[masked_tokens] - return (x,) - - def max_positions(self): - return 1024 - - def get_normalized_probs(self, net_output, log_probs, sample=None): - logits = net_output[0].float() - if log_probs: - return F.log_softmax(logits, dim=-1) - else: - return F.softmax(logits, dim=-1) - - -@register_model_architecture("dummy_model", "dummy_model") -def base_architecture(args): - pass diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/linformer/linformer_src/models/__init__.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/linformer/linformer_src/models/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_text_joint_to_text/__init__.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_text_joint_to_text/__init__.py deleted file mode 100644 index 239d2e69f9a235095dee1ea7b3a94164a77273f5..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_text_joint_to_text/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from . 
import tasks, criterions, models # noqa diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/textless_nlp/gslm/ulm/sample.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/textless_nlp/gslm/ulm/sample.py deleted file mode 100644 index 77302a6894cacf07588cf34fb1e695dc519d7df5..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/textless_nlp/gslm/ulm/sample.py +++ /dev/null @@ -1,174 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -""" -Sample from a trained LM; hacked fairseq-interactive -""" -from collections import namedtuple -import os -import ast -import numpy as np - -from fairseq import checkpoint_utils, options, tasks, utils - -import tqdm - -Batch = namedtuple('Batch', 'ids src_tokens src_lengths') -Translation = namedtuple('Translation', 'src_str hypos pos_scores alignments') - - -def make_batches(lines, args, task, max_positions): - tokens = [ - task.source_dictionary.encode_line( - src_str, add_if_not_exist=False - ).long() - for src_str in lines - ] - lengths = [t.numel() for t in tokens] - itr = task.get_batch_iterator( - dataset=task.build_dataset_for_inference(tokens, lengths), - max_tokens=args.dataset.max_tokens, - max_sentences=args.dataset.batch_size, - max_positions=max_positions, - ignore_invalid_inputs=args.dataset.skip_invalid_size_inputs_valid_test - ).next_epoch_itr(shuffle=False) - for batch in itr: - yield Batch( - ids=batch['id'], - src_tokens=batch['net_input']['src_tokens'], src_lengths=batch['net_input']['src_lengths'], - ) - - -def main(args): - arg_prompts = args.prompts - arg_output = args.output - arg_debug = args.debug - arg_sample_size = args.samples_per_prompt - - try: - from fairseq.dataclass.utils import convert_namespace_to_omegaconf - args = convert_namespace_to_omegaconf(args) - except: - pass - - # if args.max_tokens is None and args.max_sentences is None: - if args.common.seed is not None: - np.random.seed(args.common.seed) - utils.set_torch_seed(args.common.seed) - - if args.generation.sampling: - args.generation.nbest = args.generation.beam = arg_sample_size - - task = tasks.setup_task(args.task) - - overrides = ast.literal_eval(args.common_eval.model_overrides) - - models, _model_args = checkpoint_utils.load_model_ensemble( - args.common_eval.path.split(os.pathsep), - arg_overrides=overrides, - task=task, - suffix=getattr(args, "checkpoint_suffix", ""), - ) - - # Set dictionaries - src_dict = task.source_dictionary - tgt_dict = task.target_dictionary - - # Optimize ensemble for generation - for model in models: - model.prepare_for_inference_(args) - model.cuda() - - # Load alignment dictionary for unknown word replacement - # (None if no unknown word replacement, empty if no path to align dictionary) - align_dict = utils.load_align_dict(args.generation.replace_unk) - - max_positions = utils.resolve_max_positions( - task.max_positions(), - *[model.max_positions() for model in models] - ) - - output_file = open(arg_output, 'w') - - with open(arg_prompts, 'r') as fin: - lines = fin.readlines() - - split = [x.split('|', 1) for x in lines] - seq_id = [x[0] for x in split] - prompts = [x[1] for x in split] - - if args.generation.prefix_size >= 0: - prompts = [' '.join(l.split()[:args.generation.prefix_size]) - for l in prompts] - - if arg_debug: - prompts = prompts[:10] - - generator = task.build_generator(models, args.generation) - - start_id = 0 - pbar 
= tqdm.tqdm(total=len(prompts)) - for batch in make_batches(prompts, args, task, max_positions): - src_tokens = batch.src_tokens - src_lengths = batch.src_lengths - src_tokens = src_tokens.cuda() - src_lengths = src_lengths.cuda() - - sample = { - 'net_input': { - 'src_tokens': src_tokens, - 'src_lengths': src_lengths, - }, - } - - results = [] - translations = task.inference_step(generator, models, sample) - for i, (id, hypos) in enumerate(zip(batch.ids.tolist(), translations)): - src_tokens_i = utils.strip_pad(src_tokens[i], tgt_dict.pad()) - results.append((i + start_id, src_tokens_i, hypos)) - - # sort output to match input order - for id, src_tokens, hypos in sorted(results, key=lambda x: x[0]): - if src_dict is not None: - src_str = src_dict.string( - src_tokens, args.common_eval.post_process) - - # Process top predictions - for hypo_id, hypo in enumerate(hypos): - _hypo_tokens, hypo_str, _alignment = utils.post_process_prediction( - hypo_tokens=hypo['tokens'].int().cpu(), - src_str=src_str, - alignment=hypo['alignment'], - align_dict=align_dict, - tgt_dict=tgt_dict, - remove_bpe=args.common_eval.post_process, - ) - - detok_hypo_str = hypo_str - utterance = detok_hypo_str - print(f'{seq_id[id]}__{hypo_id}|{utterance}', file=output_file) - pbar.update(1) - start_id += len(results) - - # output_file.close() - - -def cli_main(): - parser = options.get_interactive_generation_parser() - parser.add_argument('--prompts', type=str, default=None, required=True) - parser.add_argument('--output', type=str, default=None, required=True) - parser.add_argument('--debug', action='store_true') - parser.add_argument('--samples-per-prompt', type=int, default=1) - - args = options.parse_args_and_arch(parser) - - np.random.seed(args.seed) - utils.set_torch_seed(args.seed) - - main(args) - - -if __name__ == '__main__': - cli_main() diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/models/nat/nonautoregressive_ensembles.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/models/nat/nonautoregressive_ensembles.py deleted file mode 100644 index 705a04fb49658c91114a26efd411b4653c65b943..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/models/nat/nonautoregressive_ensembles.py +++ /dev/null @@ -1,253 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
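-# Ensemble wrappers for non-autoregressive translation models. Each refinement step (word deletion, -# placeholder insertion, word insertion) averages the member models' log-probabilities before taking a decision.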
- -import math - -import torch -import torch.nn.functional as F -from fairseq.models.nat import ( - _apply_del_words, - _apply_ins_masks, - _apply_ins_words, - _fill, - _skip, - _skip_encoder_out, -) - - -class _EnsembleModelEncoder(object): - def __init__(self, models): - self.models = models - - def reorder_encoder_out(self, encoder_outs, new_order): - encoder_outs = [ - model.encoder.reorder_encoder_out(encoder_out, new_order) - for model, encoder_out in zip(self.models, encoder_outs) - ] - return encoder_outs - - -class BasicEnsembleModel(torch.nn.Module): - """A wrapper around an ensemble of models.""" - - def __init__(self, models): - super().__init__() - self.models = torch.nn.ModuleList(models) - self.bos = self.models[0].decoder.dictionary.bos() - self.eos = self.models[0].decoder.dictionary.eos() - self.pad = self.models[0].decoder.dictionary.pad() - self.unk = self.models[0].decoder.dictionary.unk() - self.encoder = _EnsembleModelEncoder(self.models) - - def has_encoder(self): - return hasattr(self.models[0], "encoder") - - def max_decoder_positions(self): - return min(m.max_decoder_positions() for m in self.models) - - @torch.no_grad() - def forward_encoder(self, encoder_input): - if not self.has_encoder(): - return None - return [model.forward_encoder(encoder_input) for model in self.models] - - @torch.no_grad() - def forward_decoder(self, *inputs): - raise NotImplementedError - - def initialize_output_tokens(self, *inputs): - raise NotImplementedError - - -class EnsembleLevT(BasicEnsembleModel): - """A wrapper around an ensemble of models.""" - - def __init__(self, models): - super().__init__(models) - - @torch.no_grad() - def forward_decoder( - self, decoder_out, encoder_outs, eos_penalty=0.0, max_ratio=None, **kwargs - ): - # LevT ensembling - # A pipeline of three steps: deletion, placeholder, and word insertion. - # We need to average scores in each step in a pipeline way because of dependence. 
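- # Averaging N model probabilities in log space uses the identity - # log((1/N) * sum_i exp(log_p_i)) = logsumexp_i(log_p_i) - log(N), - # which is the `torch.logsumexp(torch.stack(...), dim=0) - math.log(len(self.models))` - # pattern applied to each of the three steps below.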
- # deletion - output_tokens = decoder_out.output_tokens - output_scores = decoder_out.output_scores - attn = decoder_out.attn - - bsz = output_tokens.size(0) - if max_ratio is None: - max_lens = output_tokens.new().fill_(255) - else: - if not encoder_outs[0]["encoder_padding_mask"]: - src_lens = ( - encoder_outs[0]["encoder_out"][0].new(bsz) - .fill_(encoder_outs[0]["encoder_out"][0].size(1)) - ) - else: - src_lens = (~encoder_outs[0]["encoder_padding_mask"][0]).sum(1) - max_lens = (src_lens * max_ratio).clamp(min=10).long() - - # delete words - # do not delete tokens if it is - can_del_word = output_tokens.ne(self.pad).sum(1) > 2 - if can_del_word.sum() != 0: # we cannot delete, skip - output_tokens, output_scores, attn = self.forward_word_del( - encoder_outs, - output_tokens, - output_scores, - attn, - can_del_word, - ) - - # insert placeholders - can_ins_mask = output_tokens.ne(self.pad).sum(1) < max_lens - if can_ins_mask.sum() != 0: - output_tokens, output_scores = self.forward_mask_ins( - encoder_outs, - output_tokens, - output_scores, - can_ins_mask, - eos_penalty, - max_lens, - ) - - # insert words - can_ins_word = output_tokens.eq(self.unk).sum(1) > 0 - if can_ins_word.sum() != 0: - output_tokens, output_scores, attn = self.forward_word_ins( - encoder_outs, - output_tokens, - output_scores, - attn, - can_ins_word, - ) - - # delete some unnecessary paddings - cut_off = output_tokens.ne(self.pad).sum(1).max() - output_tokens = output_tokens[:, :cut_off] - output_scores = output_scores[:, :cut_off] - attn = None if attn is None else attn[:, :cut_off, :] - return decoder_out._replace( - output_tokens=output_tokens, - output_scores=output_scores, - attn=attn, - history=None, - ) - - def forward_word_del( - self, encoder_outs, output_tokens, output_scores, attn, can_del_word - ): - word_del_score_avg = [] - word_del_attn_avg = [] - for model, encoder_out in zip(self.models, encoder_outs): - word_del_out, word_del_attn = model.decoder.forward_word_del( - _skip(output_tokens, can_del_word), - _skip_encoder_out(model.encoder, encoder_out, can_del_word), - ) - word_del_score = F.log_softmax(word_del_out, 2) - word_del_score_avg.append(word_del_score) - word_del_attn_avg.append(word_del_attn) - word_del_score_avg = torch.logsumexp( - torch.stack(word_del_score_avg, dim=0), dim=0 - ) - math.log(len(self.models)) - word_del_pred = word_del_score_avg.max(-1)[1].bool() - if word_del_attn_avg[0] is not None: - word_del_attn_avg = torch.stack(word_del_attn_avg, dim=0) / len(self.models) - else: - word_del_attn_avg = None - - _tokens, _scores, _attn = _apply_del_words( - output_tokens[can_del_word], - output_scores[can_del_word], - word_del_attn_avg, - word_del_pred, - self.pad, - self.bos, - self.eos, - ) - output_tokens = _fill(output_tokens, can_del_word, _tokens, self.pad) - output_scores = _fill(output_scores, can_del_word, _scores, 0) - attn = _fill(attn, can_del_word, _attn, 0.0) - return output_tokens, output_scores, attn - - def forward_mask_ins( - self, - encoder_outs, - output_tokens, - output_scores, - can_ins_mask, - eos_penalty, - max_lens, - ): - mask_ins_score_avg = [] - for model, encoder_out in zip(self.models, encoder_outs): - mask_ins_out, _ = model.decoder.forward_mask_ins( - _skip(output_tokens, can_ins_mask), - _skip_encoder_out(model.encoder, encoder_out, can_ins_mask), - ) - mask_ins_score = F.log_softmax(mask_ins_out, 2) - if eos_penalty > 0.0: - mask_ins_score[:, :, 0] -= eos_penalty - mask_ins_score_avg.append(mask_ins_score) - mask_ins_score_avg = torch.logsumexp( - 
torch.stack(mask_ins_score_avg, dim=0), dim=0 - ) - math.log(len(self.models)) - mask_ins_pred = mask_ins_score_avg.max(-1)[1] - mask_ins_pred = torch.min( - mask_ins_pred, max_lens[can_ins_mask, None].expand_as(mask_ins_pred) - ) - _tokens, _scores = _apply_ins_masks( - output_tokens[can_ins_mask], - output_scores[can_ins_mask], - mask_ins_pred, - self.pad, - self.unk, - self.eos, - ) - output_tokens = _fill(output_tokens, can_ins_mask, _tokens, self.pad) - output_scores = _fill(output_scores, can_ins_mask, _scores, 0) - return output_tokens, output_scores - - def forward_word_ins( - self, encoder_outs, output_tokens, output_scores, attn, can_ins_word - ): - word_ins_score_avg = [] - word_ins_attn_avg = [] - for model, encoder_out in zip(self.models, encoder_outs): - word_ins_out, word_ins_attn = model.decoder.forward_word_ins( - _skip(output_tokens, can_ins_word), - _skip_encoder_out(model.encoder, encoder_out, can_ins_word), - ) - word_ins_score = F.log_softmax(word_ins_out, 2) - word_ins_score_avg.append(word_ins_score) - word_ins_attn_avg.append(word_ins_attn) - word_ins_score_avg = torch.logsumexp( - torch.stack(word_ins_score_avg, dim=0), dim=0 - ) - math.log(len(self.models)) - if word_ins_attn_avg[0] is not None: - word_ins_attn_avg = torch.stack(word_ins_attn_avg, dim=0) / len(self.models) - else: - word_ins_attn_avg = None - word_ins_score_max, word_ins_pred = word_ins_score_avg.max(-1) - - _tokens, _scores = _apply_ins_words( - output_tokens[can_ins_word], - output_scores[can_ins_word], - word_ins_pred, - word_ins_score_max, - self.unk, - ) - - output_tokens = _fill(output_tokens, can_ins_word, _tokens, self.pad) - output_scores = _fill(output_scores, can_ins_word, _scores, 0) - attn = _fill(attn, can_ins_word, word_ins_attn, 0.0) - return output_tokens, output_scores, attn - - def initialize_output_tokens(self, encoder_outs, src_tokens): - # LevT doesn't do length prediction. 
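- # so initialization (output length and placeholder tokens) can safely be delegated to the first member.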
- return self.models[0].initialize_output_tokens(encoder_outs[0], src_tokens) diff --git a/spaces/OIUGLK/bingo/tests/parse.ts b/spaces/OIUGLK/bingo/tests/parse.ts deleted file mode 100644 index 92940fe6315f1d7cb2b267ba5e5a7e26460a1de3..0000000000000000000000000000000000000000 --- a/spaces/OIUGLK/bingo/tests/parse.ts +++ /dev/null @@ -1,13 +0,0 @@ -import { promises as fs } from 'fs' -import { join } from 'path' -import { parseHeadersFromCurl } from '@/lib/utils' - -(async () => { - const content = await fs.readFile(join(__dirname, './fixtures/curl.txt'), 'utf-8') - const headers = parseHeadersFromCurl(content) - console.log(headers) - - const cmdContent = await fs.readFile(join(__dirname, './fixtures/cmd.txt'), 'utf-8') - const cmdHeaders = parseHeadersFromCurl(cmdContent) - console.log(cmdHeaders) -})() diff --git a/spaces/ORI-Muchim/NahidaTTS/models.py b/spaces/ORI-Muchim/NahidaTTS/models.py deleted file mode 100644 index fe004e94bbe9074ec736f14325268f4515a53420..0000000000000000000000000000000000000000 --- a/spaces/ORI-Muchim/NahidaTTS/models.py +++ /dev/null @@ -1,540 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules -import attentions -import monotonic_align - -from torch.nn import Conv1d, ConvTranspose1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from commons import init_weights, get_padding - - -class StochasticDurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0): - super().__init__() - filter_channels = in_channels # it needs to be removed from future version. - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.log_flow = modules.Log() - self.flows = nn.ModuleList() - self.flows.append(modules.ElementwiseAffine(2)) - for i in range(n_flows): - self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.flows.append(modules.Flip()) - - self.post_pre = nn.Conv1d(1, filter_channels, 1) - self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - self.post_flows = nn.ModuleList() - self.post_flows.append(modules.ElementwiseAffine(2)) - for i in range(4): - self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.post_flows.append(modules.Flip()) - - self.pre = nn.Conv1d(in_channels, filter_channels, 1) - self.proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, filter_channels, 1) - - def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0): - x = torch.detach(x) - x = self.pre(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.convs(x, x_mask) - x = self.proj(x) * x_mask - - if not reverse: - flows = self.flows - assert w is not None - - logdet_tot_q = 0 - h_w = self.post_pre(w) - h_w = self.post_convs(h_w, x_mask) - h_w = self.post_proj(h_w) * x_mask - e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask - z_q = e_q - for flow in self.post_flows: - z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w)) - logdet_tot_q += logdet_q - z_u, z1 = 
torch.split(z_q, [1, 1], 1) - u = torch.sigmoid(z_u) * x_mask - z0 = (w - u) * x_mask - logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1, 2]) - logq = torch.sum(-0.5 * (math.log(2 * math.pi) + (e_q ** 2)) * x_mask, [1, 2]) - logdet_tot_q - - logdet_tot = 0 - z0, logdet = self.log_flow(z0, x_mask) - logdet_tot += logdet - z = torch.cat([z0, z1], 1) - for flow in flows: - z, logdet = flow(z, x_mask, g=x, reverse=reverse) - logdet_tot = logdet_tot + logdet - nll = torch.sum(0.5 * (math.log(2 * math.pi) + (z ** 2)) * x_mask, [1, 2]) - logdet_tot - return nll + logq # [b] - else: - flows = list(reversed(self.flows)) - flows = flows[:-2] + [flows[-1]] # remove a useless vflow - z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale - for flow in flows: - z = flow(z, x_mask, g=x, reverse=reverse) - z0, z1 = torch.split(z, [1, 1], 1) - logw = z0 - return logw - - -class DurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size // 2) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size // 2) - self.norm_2 = modules.LayerNorm(filter_channels) - self.proj = nn.Conv1d(filter_channels, 1, 1) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, in_channels, 1) - - def forward(self, x, x_mask, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - x = self.proj(x * x_mask) - return x * x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - n_vocab, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - - if self.n_vocab != 0: - self.emb = nn.Embedding(n_vocab, hidden_channels) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels ** -0.5) - - self.encoder = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths): - if self.n_vocab != 0: - x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h] - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return x, m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size 
= kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, - gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class PosteriorEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class Generator(torch.nn.Module): - def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, - upsample_initial_channel, upsample_kernel_sizes, gin_channels=0): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(upsample_initial_channel // (2 ** i), upsample_initial_channel // (2 ** (i + 1)), - k, u, padding=(k - u) // 2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class 
DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - n_vocab, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - n_speakers=0, - gin_channels=0, - use_sdp=True, - **kwargs): - - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - 
self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - - self.use_sdp = use_sdp - - self.enc_p = TextEncoder(n_vocab, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, - upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels) - self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, - gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - - if use_sdp: - self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels) - else: - self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels) - - if n_speakers > 1: - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - def forward(self, x, x_lengths, y, y_lengths, sid=None): - - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 1: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - - with torch.no_grad(): - # negative cross-entropy - s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t] - neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s] - neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), - s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s] - neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4 - - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach() - - w = attn.sum(2) - if self.use_sdp: - l_length = self.dp(x, x_mask, w, g=g) - l_length = l_length / torch.sum(x_mask) - else: - logw_ = torch.log(w + 1e-6) * x_mask - logw = self.dp(x, x_mask, g=g) - l_length = torch.sum((logw - logw_) ** 2, [1, 2]) / torch.sum(x_mask) # for averaging - - # expand prior - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) - - z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size) - o = self.dec(z_slice, g=g) - return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None): - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 1: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - if self.use_sdp: - logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) - else: - logw = self.dp(x, x_mask, g=g) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths 
= torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, - 2) # [b, t', t], [b, t, d] -> [b, d, t'] - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - z = self.flow(z_p, y_mask, g=g, reverse=True) - o = self.dec((z * y_mask)[:, :, :max_len], g=g) - return o, attn, y_mask, (z, z_p, m_p, logs_p) - - def voice_conversion(self, y, y_lengths, sid_src, sid_tgt): - assert self.n_speakers > 1, "n_speakers has to be larger than 1." - g_src = self.emb_g(sid_src).unsqueeze(-1) - g_tgt = self.emb_g(sid_tgt).unsqueeze(-1) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src) - z_p = self.flow(z, y_mask, g=g_src) - z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True) - o_hat = self.dec(z_hat * y_mask, g=g_tgt) - return o_hat, y_mask, (z, z_p, z_hat) diff --git a/spaces/Omnibus/MusicGen/MODEL_CARD.md b/spaces/Omnibus/MusicGen/MODEL_CARD.md deleted file mode 100644 index 6c2c9f883969eb905e74ad3376966d156cc5ca00..0000000000000000000000000000000000000000 --- a/spaces/Omnibus/MusicGen/MODEL_CARD.md +++ /dev/null @@ -1,81 +0,0 @@ -# MusicGen Model Card - -## Model details - -**Organization developing the model:** The FAIR team of Meta AI. - -**Model date:** MusicGen was trained between April 2023 and May 2023. - -**Model version:** This is version 1 of the model. - -**Model type:** MusicGen consists of an EnCodec model for audio tokenization, and an auto-regressive language model based on the transformer architecture for music modeling. The model comes in different sizes: 300M, 1.5B and 3.3B parameters; and two variants: a model trained for the text-to-music generation task and a model trained for melody-guided music generation. - -**Paper or resources for more information:** More information can be found in the paper [Simple and Controllable Music Generation][arxiv]. - -**Citation details:** See [our paper][arxiv]. - -**License:** Code is released under MIT; model weights are released under CC-BY-NC 4.0. - -**Where to send questions or comments about the model:** Questions and comments about MusicGen can be sent via the [GitHub repository](https://github.com/facebookresearch/audiocraft) of the project, or by opening an issue. - -## Intended use -**Primary intended use:** The primary use of MusicGen is research on AI-based music generation, including: - -- Research efforts, such as probing and better understanding the limitations of generative models to further improve the state of science -- Generation of music guided by text or melody to understand current abilities of generative AI models by machine learning amateurs - -**Primary intended users:** The primary intended users of the model are researchers in audio, machine learning and artificial intelligence, as well as amateurs seeking to better understand those models. - -**Out-of-scope use cases:** The model should not be used on downstream applications without further risk evaluation and mitigation. The model should not be used to intentionally create or disseminate music pieces that create hostile or alienating environments for people. 
This includes generating music that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes. - -## Metrics - -**Model performance measures:** We used the following objective measures to evaluate the model on a standard music benchmark: - -- Frechet Audio Distance computed on features extracted from a pre-trained audio classifier (VGGish) -- Kullback-Leibler Divergence on label distributions extracted from a pre-trained audio classifier (PaSST) -- CLAP Score between the audio embedding and the text embedding extracted from a pre-trained CLAP model - -Additionally, we run qualitative studies with human participants, evaluating the performance of the model along the following axes: - -- Overall quality of the music samples; -- Text relevance to the provided text input; -- Adherence to the melody for melody-guided music generation. - -More details on performance measures and human studies can be found in the paper. - -**Decision thresholds:** Not applicable. - -## Evaluation datasets - -The model was evaluated on the [MusicCaps benchmark](https://www.kaggle.com/datasets/googleai/musiccaps) and on an in-domain held-out evaluation set, with no artist overlap with the training set. - -## Training datasets - -The model was trained on licensed data from the following sources: the [Meta Music Initiative Sound Collection](https://www.fb.com/sound), the [Shutterstock music collection](https://www.shutterstock.com/music) and the [Pond5 music collection](https://www.pond5.com/). See the paper for more details about the training set and the corresponding preprocessing. - -## Quantitative analysis - -More information can be found in the paper [Simple and Controllable Music Generation][arxiv], in the Experimental Setup section. - -## Limitations and biases - -**Data:** The data sources used to train the model are created by music professionals and covered by legal agreements with the right holders. The model is trained on 20K hours of data; we believe that scaling the model on larger datasets can further improve its performance. - -**Mitigations:** Vocals have been removed from the data source using corresponding tags, and then using a state-of-the-art music source separation method, namely the open-source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs). - -**Limitations:** - -- The model is not able to generate realistic vocals. -- The model has been trained with English descriptions and will not perform as well in other languages. -- The model does not perform equally well for all music styles and cultures. -- The model sometimes generates the end of songs, collapsing to silence. -- It is sometimes difficult to assess what types of text descriptions provide the best generations. Prompt engineering may be required to obtain satisfying results. - -**Biases:** The source of data is potentially lacking diversity, and not all music cultures are equally represented in the dataset. The model may not perform equally well on the wide variety of music genres that exist. The generated samples from the model will reflect the biases of the training data. Further work on this model should include methods for balanced and just representations of cultures, for example, by scaling the training data to be both diverse and inclusive. - -**Risks and harms:** Biases and limitations of the model may lead to the generation of samples that may be considered biased, inappropriate or offensive.
We believe that providing the code to reproduce the research and train new models will make it possible to broaden the application to new and more representative data. - -**Use cases:** Users must be aware of the biases, limitations and risks of the model. MusicGen is a model developed for artificial intelligence research on controllable music generation. As such, it should not be used for downstream applications without further investigation and mitigation of risks. - -[arxiv]: https://arxiv.org/abs/2306.05284 diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/configs/new_baselines/mask_rcnn_regnety_4gf_dds_FPN_400ep_LSJ.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/configs/new_baselines/mask_rcnn_regnety_4gf_dds_FPN_400ep_LSJ.py deleted file mode 100644 index 7b86ea8c6c5c48f5d26c9e0df7cf96e745b17b34..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/configs/new_baselines/mask_rcnn_regnety_4gf_dds_FPN_400ep_LSJ.py +++ /dev/null @@ -1,14 +0,0 @@ -from .mask_rcnn_regnety_4gf_dds_FPN_100ep_LSJ import ( - dataloader, - lr_multiplier, - model, - optimizer, - train, -) - -train.max_iter *= 4 # 100ep -> 400ep - -lr_multiplier.scheduler.milestones = [ - milestone * 4 for milestone in lr_multiplier.scheduler.milestones -] -lr_multiplier.scheduler.num_updates = train.max_iter diff --git a/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/oneformer/config.py b/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/oneformer/config.py deleted file mode 100644 index 0dc8320dfb8b7e718cf59b31c5a3f4f018c94d9e..0000000000000000000000000000000000000000 --- a/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/oneformer/config.py +++ /dev/null @@ -1,210 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. -from detectron2.config import CfgNode as CN - -__all__ = ["add_common_config", "add_oneformer_config", "add_swin_config", - "add_dinat_config", "add_convnext_config"] - -def add_common_config(cfg): - """ - Add config for common configuration - """ - - # data config - # select the dataset mapper - cfg.INPUT.DATASET_MAPPER_NAME = "oneformer_unified" - # Color augmentation - cfg.INPUT.COLOR_AUG_SSD = False - # We retry random cropping until no single category in semantic segmentation GT occupies more - # than `SINGLE_CATEGORY_MAX_AREA` part of the crop. - cfg.INPUT.CROP.SINGLE_CATEGORY_MAX_AREA = 1.0 - # Pad image and segmentation GT in dataset mapper.
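- # A value of -1 leaves the padding behaviour to the backbone defaults, while a positive value pads H and W to multiples of it (a gloss based on the usual detectron2 SIZE_DIVISIBILITY convention; an assumption, so verify against the dataset mapper).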
- cfg.INPUT.SIZE_DIVISIBILITY = -1 - - cfg.INPUT.TASK_SEQ_LEN = 77 - cfg.INPUT.MAX_SEQ_LEN = 77 - - cfg.INPUT.TASK_PROB = CN() - cfg.INPUT.TASK_PROB.SEMANTIC = 0.33 - cfg.INPUT.TASK_PROB.INSTANCE = 0.66 - - # test dataset - cfg.DATASETS.TEST_PANOPTIC = ("",) - cfg.DATASETS.TEST_INSTANCE = ("",) - cfg.DATASETS.TEST_SEMANTIC = ("",) - - # solver config - # weight decay on embedding - cfg.SOLVER.WEIGHT_DECAY_EMBED = 0.0 - # optimizer - cfg.SOLVER.OPTIMIZER = "ADAMW" - cfg.SOLVER.BACKBONE_MULTIPLIER = 0.1 - - # wandb - cfg.WANDB = CN() - cfg.WANDB.PROJECT = "OneFormer" - cfg.WANDB.NAME = None - - cfg.MODEL.IS_TRAIN = True - cfg.MODEL.IS_DEMO = False - - # text encoder config - cfg.MODEL.TEXT_ENCODER = CN() - - cfg.MODEL.TEXT_ENCODER.WIDTH = 256 - cfg.MODEL.TEXT_ENCODER.CONTEXT_LENGTH = 77 - cfg.MODEL.TEXT_ENCODER.NUM_LAYERS = 12 - cfg.MODEL.TEXT_ENCODER.VOCAB_SIZE = 49408 - cfg.MODEL.TEXT_ENCODER.PROJ_NUM_LAYERS = 2 - cfg.MODEL.TEXT_ENCODER.N_CTX = 16 - - # oneformer inference config - cfg.MODEL.TEST = CN() - cfg.MODEL.TEST.SEMANTIC_ON = True - cfg.MODEL.TEST.INSTANCE_ON = False - cfg.MODEL.TEST.PANOPTIC_ON = False - cfg.MODEL.TEST.DETECTION_ON = False - cfg.MODEL.TEST.OBJECT_MASK_THRESHOLD = 0.0 - cfg.MODEL.TEST.OVERLAP_THRESHOLD = 0.0 - cfg.MODEL.TEST.SEM_SEG_POSTPROCESSING_BEFORE_INFERENCE = False - cfg.MODEL.TEST.TASK = "panoptic" - - # TEST AUG Slide - cfg.TEST.AUG.IS_SLIDE = False - cfg.TEST.AUG.CROP_SIZE = (640, 640) - cfg.TEST.AUG.STRIDE = (426, 426) - cfg.TEST.AUG.SCALE = (2048, 640) - cfg.TEST.AUG.SETR_MULTI_SCALE = True - cfg.TEST.AUG.KEEP_RATIO = True - cfg.TEST.AUG.SIZE_DIVISOR = 32 - - # pixel decoder config - cfg.MODEL.SEM_SEG_HEAD.MASK_DIM = 256 - # adding transformer in pixel decoder - cfg.MODEL.SEM_SEG_HEAD.TRANSFORMER_ENC_LAYERS = 0 - # pixel decoder - cfg.MODEL.SEM_SEG_HEAD.PIXEL_DECODER_NAME = "BasePixelDecoder" - cfg.MODEL.SEM_SEG_HEAD.SEM_EMBED_DIM = 256 - cfg.MODEL.SEM_SEG_HEAD.INST_EMBED_DIM = 256 - - # LSJ aug - cfg.INPUT.IMAGE_SIZE = 1024 - cfg.INPUT.MIN_SCALE = 0.1 - cfg.INPUT.MAX_SCALE = 2.0 - - # MSDeformAttn encoder configs - cfg.MODEL.SEM_SEG_HEAD.DEFORMABLE_TRANSFORMER_ENCODER_IN_FEATURES = ["res3", "res4", "res5"] - cfg.MODEL.SEM_SEG_HEAD.DEFORMABLE_TRANSFORMER_ENCODER_N_POINTS = 4 - cfg.MODEL.SEM_SEG_HEAD.DEFORMABLE_TRANSFORMER_ENCODER_N_HEADS = 8 - -def add_oneformer_config(cfg): - """ - Add config for ONE_FORMER. - """ - - # oneformer model config - cfg.MODEL.ONE_FORMER = CN() - - # loss - cfg.MODEL.ONE_FORMER.DEEP_SUPERVISION = True - cfg.MODEL.ONE_FORMER.NO_OBJECT_WEIGHT = 0.1 - cfg.MODEL.ONE_FORMER.CLASS_WEIGHT = 1.0 - cfg.MODEL.ONE_FORMER.DICE_WEIGHT = 1.0 - cfg.MODEL.ONE_FORMER.MASK_WEIGHT = 20.0 - cfg.MODEL.ONE_FORMER.CONTRASTIVE_WEIGHT = 0.5 - cfg.MODEL.ONE_FORMER.CONTRASTIVE_TEMPERATURE = 0.07 - - # transformer config - cfg.MODEL.ONE_FORMER.NHEADS = 8 - cfg.MODEL.ONE_FORMER.DROPOUT = 0.1 - cfg.MODEL.ONE_FORMER.DIM_FEEDFORWARD = 2048 - cfg.MODEL.ONE_FORMER.ENC_LAYERS = 0 - cfg.MODEL.ONE_FORMER.CLASS_DEC_LAYERS = 2 - cfg.MODEL.ONE_FORMER.DEC_LAYERS = 6 - cfg.MODEL.ONE_FORMER.PRE_NORM = False - - cfg.MODEL.ONE_FORMER.HIDDEN_DIM = 256 - cfg.MODEL.ONE_FORMER.NUM_OBJECT_QUERIES = 120 - cfg.MODEL.ONE_FORMER.NUM_OBJECT_CTX = 16 - cfg.MODEL.ONE_FORMER.USE_TASK_NORM = True - - cfg.MODEL.ONE_FORMER.TRANSFORMER_IN_FEATURE = "res5" - cfg.MODEL.ONE_FORMER.ENFORCE_INPUT_PROJ = False - - # Sometimes `backbone.size_divisibility` is set to 0 for some backbone (e.g. 
ResNet) - # you can use this config to override - cfg.MODEL.ONE_FORMER.SIZE_DIVISIBILITY = 32 - - # transformer module - cfg.MODEL.ONE_FORMER.TRANSFORMER_DECODER_NAME = "ContrastiveMultiScaleMaskedTransformerDecoder" - - # point loss configs - # Number of points sampled during training for a mask point head. - cfg.MODEL.ONE_FORMER.TRAIN_NUM_POINTS = 112 * 112 - # Oversampling parameter for PointRend point sampling during training. Parameter `k` in the - # original paper. - cfg.MODEL.ONE_FORMER.OVERSAMPLE_RATIO = 3.0 - # Importance sampling parameter for PointRend point sampling during training. Parameter `beta` in - # the original paper. - cfg.MODEL.ONE_FORMER.IMPORTANCE_SAMPLE_RATIO = 0.75 - -def add_swin_config(cfg): - """ - Add config for SWIN Backbone. - """ - - # swin transformer backbone - cfg.MODEL.SWIN = CN() - cfg.MODEL.SWIN.PRETRAIN_IMG_SIZE = 224 - cfg.MODEL.SWIN.PATCH_SIZE = 4 - cfg.MODEL.SWIN.EMBED_DIM = 96 - cfg.MODEL.SWIN.DEPTHS = [2, 2, 6, 2] - cfg.MODEL.SWIN.NUM_HEADS = [3, 6, 12, 24] - cfg.MODEL.SWIN.WINDOW_SIZE = 7 - cfg.MODEL.SWIN.MLP_RATIO = 4.0 - cfg.MODEL.SWIN.QKV_BIAS = True - cfg.MODEL.SWIN.QK_SCALE = None - cfg.MODEL.SWIN.DROP_RATE = 0.0 - cfg.MODEL.SWIN.ATTN_DROP_RATE = 0.0 - cfg.MODEL.SWIN.DROP_PATH_RATE = 0.3 - cfg.MODEL.SWIN.APE = False - cfg.MODEL.SWIN.PATCH_NORM = True - cfg.MODEL.SWIN.OUT_FEATURES = ["res2", "res3", "res4", "res5"] - cfg.MODEL.SWIN.USE_CHECKPOINT = False - -def add_dinat_config(cfg): - """ - Add config for DiNAT Backbone. - """ - - # DINAT transformer backbone - cfg.MODEL.DiNAT = CN() - cfg.MODEL.DiNAT.DEPTHS = [3, 4, 18, 5] - cfg.MODEL.DiNAT.OUT_FEATURES = ["res2", "res3", "res4", "res5"] - cfg.MODEL.DiNAT.EMBED_DIM = 64 - cfg.MODEL.DiNAT.MLP_RATIO = 3.0 - cfg.MODEL.DiNAT.NUM_HEADS = [2, 4, 8, 16] - cfg.MODEL.DiNAT.DROP_PATH_RATE = 0.2 - cfg.MODEL.DiNAT.KERNEL_SIZE = 7 - cfg.MODEL.DiNAT.DILATIONS = [[1, 16, 1], [1, 4, 1, 8], [1, 2, 1, 3, 1, 4], [1, 2, 1, 2, 1]] - cfg.MODEL.DiNAT.OUT_INDICES = (0, 1, 2, 3) - cfg.MODEL.DiNAT.QKV_BIAS = True - cfg.MODEL.DiNAT.QK_SCALE = None - cfg.MODEL.DiNAT.DROP_RATE = 0 - cfg.MODEL.DiNAT.ATTN_DROP_RATE = 0. - cfg.MODEL.DiNAT.IN_PATCH_SIZE = 4 - -def add_convnext_config(cfg): - """ - Add config for ConvNeXt Backbone.
- """ - - # swin transformer backbone - cfg.MODEL.CONVNEXT = CN() - cfg.MODEL.CONVNEXT.IN_CHANNELS = 3 - cfg.MODEL.CONVNEXT.DEPTHS = [3, 3, 27, 3] - cfg.MODEL.CONVNEXT.DIMS = [192, 384, 768, 1536] - cfg.MODEL.CONVNEXT.DROP_PATH_RATE = 0.4 - cfg.MODEL.CONVNEXT.LSIT = 1.0 - cfg.MODEL.CONVNEXT.OUT_INDICES = [0, 1, 2, 3] - cfg.MODEL.CONVNEXT.OUT_FEATURES = ["res2", "res3", "res4", "res5"] \ No newline at end of file diff --git a/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/oneformer/modeling/transformer_decoder/oneformer_transformer_decoder.py b/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/oneformer/modeling/transformer_decoder/oneformer_transformer_decoder.py deleted file mode 100644 index a06d586f70131c86604ee0113993b99effaba340..0000000000000000000000000000000000000000 --- a/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/oneformer/modeling/transformer_decoder/oneformer_transformer_decoder.py +++ /dev/null @@ -1,528 +0,0 @@ -# ------------------------------------------------------------------------------ -# Reference: https://github.com/facebookresearch/Mask2Former/blob/main/mask2former/modeling/transformer_decoder/mask2former_transformer_decoder.py -# Modified by Jitesh Jain (https://github.com/praeclarumjj3) -# ------------------------------------------------------------------------------ - -import logging -import fvcore.nn.weight_init as weight_init -from typing import Optional -import torch -from torch import nn, Tensor -from torch.nn import functional as F - -from detectron2.config import configurable -from detectron2.layers import Conv2d - -from .position_encoding import PositionEmbeddingSine -from .transformer import Transformer - -from detectron2.utils.registry import Registry - - -TRANSFORMER_DECODER_REGISTRY = Registry("TRANSFORMER_MODULE") -TRANSFORMER_DECODER_REGISTRY.__doc__ = """ -Registry for transformer module in OneFormer. -""" - - -def build_transformer_decoder(cfg, in_channels, mask_classification=True): - """ - Build a instance embedding branch from `cfg.MODEL.INS_EMBED_HEAD.NAME`. 
- """ - name = cfg.MODEL.ONE_FORMER.TRANSFORMER_DECODER_NAME - return TRANSFORMER_DECODER_REGISTRY.get(name)(cfg, in_channels, mask_classification) - - -class SelfAttentionLayer(nn.Module): - - def __init__(self, d_model, nhead, dropout=0.0, - activation="relu", normalize_before=False): - super().__init__() - self.self_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout) - - self.norm = nn.LayerNorm(d_model) - self.dropout = nn.Dropout(dropout) - - self.activation = _get_activation_fn(activation) - self.normalize_before = normalize_before - - self._reset_parameters() - - def _reset_parameters(self): - for p in self.parameters(): - if p.dim() > 1: - nn.init.xavier_uniform_(p) - - def with_pos_embed(self, tensor, pos: Optional[Tensor]): - return tensor if pos is None else tensor + pos - - def forward_post(self, tgt, - tgt_mask: Optional[Tensor] = None, - tgt_key_padding_mask: Optional[Tensor] = None, - query_pos: Optional[Tensor] = None): - q = k = self.with_pos_embed(tgt, query_pos) - tgt2 = self.self_attn(q, k, value=tgt, attn_mask=tgt_mask, - key_padding_mask=tgt_key_padding_mask)[0] - tgt = tgt + self.dropout(tgt2) - tgt = self.norm(tgt) - - return tgt - - def forward_pre(self, tgt, - tgt_mask: Optional[Tensor] = None, - tgt_key_padding_mask: Optional[Tensor] = None, - query_pos: Optional[Tensor] = None): - tgt2 = self.norm(tgt) - q = k = self.with_pos_embed(tgt2, query_pos) - tgt2 = self.self_attn(q, k, value=tgt2, attn_mask=tgt_mask, - key_padding_mask=tgt_key_padding_mask)[0] - tgt = tgt + self.dropout(tgt2) - - return tgt - - def forward(self, tgt, - tgt_mask: Optional[Tensor] = None, - tgt_key_padding_mask: Optional[Tensor] = None, - query_pos: Optional[Tensor] = None): - if self.normalize_before: - return self.forward_pre(tgt, tgt_mask, - tgt_key_padding_mask, query_pos) - return self.forward_post(tgt, tgt_mask, - tgt_key_padding_mask, query_pos) - - -class CrossAttentionLayer(nn.Module): - - def __init__(self, d_model, nhead, dropout=0.0, - activation="relu", normalize_before=False): - super().__init__() - self.multihead_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout) - - self.norm = nn.LayerNorm(d_model) - self.dropout = nn.Dropout(dropout) - - self.activation = _get_activation_fn(activation) - self.normalize_before = normalize_before - - self._reset_parameters() - - def _reset_parameters(self): - for p in self.parameters(): - if p.dim() > 1: - nn.init.xavier_uniform_(p) - - def with_pos_embed(self, tensor, pos: Optional[Tensor]): - return tensor if pos is None else tensor + pos - - def forward_post(self, tgt, memory, - memory_mask: Optional[Tensor] = None, - memory_key_padding_mask: Optional[Tensor] = None, - pos: Optional[Tensor] = None, - query_pos: Optional[Tensor] = None): - tgt2 = self.multihead_attn(query=self.with_pos_embed(tgt, query_pos), - key=self.with_pos_embed(memory, pos), - value=memory, attn_mask=memory_mask, - key_padding_mask=memory_key_padding_mask)[0] - tgt = tgt + self.dropout(tgt2) - tgt = self.norm(tgt) - - return tgt - - def forward_pre(self, tgt, memory, - memory_mask: Optional[Tensor] = None, - memory_key_padding_mask: Optional[Tensor] = None, - pos: Optional[Tensor] = None, - query_pos: Optional[Tensor] = None): - tgt2 = self.norm(tgt) - tgt2 = self.multihead_attn(query=self.with_pos_embed(tgt2, query_pos), - key=self.with_pos_embed(memory, pos), - value=memory, attn_mask=memory_mask, - key_padding_mask=memory_key_padding_mask)[0] - tgt = tgt + self.dropout(tgt2) - - return tgt - - def forward(self, tgt, memory, - memory_mask: 
Optional[Tensor] = None, - memory_key_padding_mask: Optional[Tensor] = None, - pos: Optional[Tensor] = None, - query_pos: Optional[Tensor] = None): - if self.normalize_before: - return self.forward_pre(tgt, memory, memory_mask, - memory_key_padding_mask, pos, query_pos) - return self.forward_post(tgt, memory, memory_mask, - memory_key_padding_mask, pos, query_pos) - - -class FFNLayer(nn.Module): - - def __init__(self, d_model, dim_feedforward=2048, dropout=0.0, - activation="relu", normalize_before=False): - super().__init__() - # Implementation of Feedforward model - self.linear1 = nn.Linear(d_model, dim_feedforward) - self.dropout = nn.Dropout(dropout) - self.linear2 = nn.Linear(dim_feedforward, d_model) - - self.norm = nn.LayerNorm(d_model) - - self.activation = _get_activation_fn(activation) - self.normalize_before = normalize_before - - self._reset_parameters() - - def _reset_parameters(self): - for p in self.parameters(): - if p.dim() > 1: - nn.init.xavier_uniform_(p) - - def with_pos_embed(self, tensor, pos: Optional[Tensor]): - return tensor if pos is None else tensor + pos - - def forward_post(self, tgt): - tgt2 = self.linear2(self.dropout(self.activation(self.linear1(tgt)))) - tgt = tgt + self.dropout(tgt2) - tgt = self.norm(tgt) - return tgt - - def forward_pre(self, tgt): - tgt2 = self.norm(tgt) - tgt2 = self.linear2(self.dropout(self.activation(self.linear1(tgt2)))) - tgt = tgt + self.dropout(tgt2) - return tgt - - def forward(self, tgt): - if self.normalize_before: - return self.forward_pre(tgt) - return self.forward_post(tgt) - - -def _get_activation_fn(activation): - """Return an activation function given a string""" - if activation == "relu": - return F.relu - if activation == "gelu": - return F.gelu - if activation == "glu": - return F.glu - raise RuntimeError(f"activation should be relu/gelu/glu, not {activation}.") - - -class MLP(nn.Module): - """ Very simple multi-layer perceptron (also called FFN)""" - - def __init__(self, input_dim, hidden_dim, output_dim, num_layers): - super().__init__() - self.num_layers = num_layers - h = [hidden_dim] * (num_layers - 1) - self.layers = nn.ModuleList(nn.Linear(n, k) for n, k in zip([input_dim] + h, h + [output_dim])) - - def forward(self, x): - for i, layer in enumerate(self.layers): - x = F.relu(layer(x)) if i < self.num_layers - 1 else layer(x) - return x - - -@TRANSFORMER_DECODER_REGISTRY.register() -class ContrastiveMultiScaleMaskedTransformerDecoder(nn.Module): - - _version = 2 - - def _load_from_state_dict( - self, state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs - ): - version = local_metadata.get("version", None) - if version is None or version < 2: - # Do not warn if train from scratch - scratch = True - logger = logging.getLogger(__name__) - for k in list(state_dict.keys()): - newk = k - if "static_query" in k: - newk = k.replace("static_query", "query_feat") - if newk != k: - state_dict[newk] = state_dict[k] - del state_dict[k] - scratch = False - - if not scratch: - logger.warning( - f"Weight format of {self.__class__.__name__} has changed! " - "Please upgrade your models. Applying automatic conversion now ..."
- ) - - @configurable - def __init__( - self, - in_channels, - mask_classification=True, - *, - num_classes: int, - hidden_dim: int, - num_queries: int, - nheads: int, - dropout: float, - dim_feedforward: int, - enc_layers: int, - is_train: bool, - dec_layers: int, - class_dec_layers: int, - pre_norm: bool, - mask_dim: int, - enforce_input_project: bool, - use_task_norm: bool, - ): - """ - NOTE: this interface is experimental. - Args: - in_channels: channels of the input features - mask_classification: whether to add mask classifier or not - num_classes: number of classes - hidden_dim: Transformer feature dimension - num_queries: number of queries - nheads: number of heads - dim_feedforward: feature dimension in feedforward network - enc_layers: number of Transformer encoder layers - dec_layers: number of Transformer decoder layers - pre_norm: whether to use pre-LayerNorm or not - mask_dim: mask feature dimension - enforce_input_project: add a 1x1 input projection conv even if the input - channels and hidden dim are identical - """ - super().__init__() - - assert mask_classification, "Only support mask classification model" - self.mask_classification = mask_classification - self.is_train = is_train - self.use_task_norm = use_task_norm - - # positional encoding - N_steps = hidden_dim // 2 - self.pe_layer = PositionEmbeddingSine(N_steps, normalize=True) - - self.class_transformer = Transformer( - d_model=hidden_dim, - dropout=dropout, - nhead=nheads, - dim_feedforward=dim_feedforward, - num_encoder_layers=enc_layers, - num_decoder_layers=class_dec_layers, - normalize_before=pre_norm, - return_intermediate_dec=False, - ) - - # define Transformer decoder here - self.num_heads = nheads - self.num_layers = dec_layers - self.transformer_self_attention_layers = nn.ModuleList() - self.transformer_cross_attention_layers = nn.ModuleList() - self.transformer_ffn_layers = nn.ModuleList() - - for _ in range(self.num_layers): - self.transformer_self_attention_layers.append( - SelfAttentionLayer( - d_model=hidden_dim, - nhead=nheads, - dropout=0.0, - normalize_before=pre_norm, - ) - ) - - self.transformer_cross_attention_layers.append( - CrossAttentionLayer( - d_model=hidden_dim, - nhead=nheads, - dropout=0.0, - normalize_before=pre_norm, - ) - ) - - self.transformer_ffn_layers.append( - FFNLayer( - d_model=hidden_dim, - dim_feedforward=dim_feedforward, - dropout=0.0, - normalize_before=pre_norm, - ) - ) - - self.decoder_norm = nn.LayerNorm(hidden_dim) - - self.num_queries = num_queries - # learnable query p.e.
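- # The final slot of this table belongs to the task token: forward() feeds query_embed.weight[:-1] to the class transformer and then concatenates the task query, so object queries and the task query share one learnable positional table.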
- self.query_embed = nn.Embedding(num_queries, hidden_dim) - - # level embedding (we always use 3 scales) - self.num_feature_levels = 3 - self.level_embed = nn.Embedding(self.num_feature_levels, hidden_dim) - self.input_proj = nn.ModuleList() - for _ in range(self.num_feature_levels): - if in_channels != hidden_dim or enforce_input_project: - self.input_proj.append(Conv2d(in_channels, hidden_dim, kernel_size=1)) - weight_init.c2_xavier_fill(self.input_proj[-1]) - else: - self.input_proj.append(nn.Sequential()) - - self.class_input_proj = Conv2d(in_channels, hidden_dim, kernel_size=1) - weight_init.c2_xavier_fill(self.class_input_proj) - - # output FFNs - if self.mask_classification: - self.class_embed = nn.Linear(hidden_dim, num_classes + 1) - self.mask_embed = MLP(hidden_dim, hidden_dim, mask_dim, 3) - - @classmethod - def from_config(cls, cfg, in_channels, mask_classification): - ret = {} - ret["in_channels"] = in_channels - ret["mask_classification"] = mask_classification - - ret["num_classes"] = cfg.MODEL.SEM_SEG_HEAD.NUM_CLASSES - ret["hidden_dim"] = cfg.MODEL.ONE_FORMER.HIDDEN_DIM - ret["num_queries"] = cfg.MODEL.ONE_FORMER.NUM_OBJECT_QUERIES - # Transformer parameters: - ret["nheads"] = cfg.MODEL.ONE_FORMER.NHEADS - ret["dim_feedforward"] = cfg.MODEL.ONE_FORMER.DIM_FEEDFORWARD - - # NOTE: because we add learnable query features which requires supervision, - # we add minus 1 to decoder layers to be consistent with our loss - # implementation: that is, number of auxiliary losses is always - # equal to number of decoder layers. With learnable query features, the number of - # auxiliary losses equals number of decoders plus 1. - assert cfg.MODEL.ONE_FORMER.DEC_LAYERS >= 1 - ret["dec_layers"] = cfg.MODEL.ONE_FORMER.DEC_LAYERS - 1 - ret["class_dec_layers"] = cfg.MODEL.ONE_FORMER.CLASS_DEC_LAYERS - ret["enc_layers"] = cfg.MODEL.ONE_FORMER.ENC_LAYERS - ret["dropout"] = cfg.MODEL.ONE_FORMER.DROPOUT - ret["pre_norm"] = cfg.MODEL.ONE_FORMER.PRE_NORM - ret["enforce_input_project"] = cfg.MODEL.ONE_FORMER.ENFORCE_INPUT_PROJ - ret["is_train"] = cfg.MODEL.IS_TRAIN - ret["mask_dim"] = cfg.MODEL.SEM_SEG_HEAD.MASK_DIM - ret["use_task_norm"] = cfg.MODEL.ONE_FORMER.USE_TASK_NORM - - return ret - - def forward(self, x, mask_features, tasks, mask = None): - # x is a list of multi-scale feature - assert len(x) == self.num_feature_levels - src = [] - pos = [] - size_list = [] - - # disable mask, it does not affect performance - del mask - - for i in range(self.num_feature_levels): - size_list.append(x[i].shape[-2:]) - pos.append(self.pe_layer(x[i], None).flatten(2)) - src.append(self.input_proj[i](x[i]).flatten(2) + self.level_embed.weight[i][None, :, None]) - - # flatten NxCxHxW to HWxNxC - pos[-1] = pos[-1].permute(2, 0, 1) - src[-1] = src[-1].permute(2, 0, 1) - - _, bs, _ = src[0].shape - - # QxNxC - query_embed = self.query_embed.weight.unsqueeze(1).repeat(1, bs, 1) - tasks = tasks.unsqueeze(0) - if self.use_task_norm: - tasks = self.decoder_norm(tasks) - - feats = self.pe_layer(mask_features, None) - - out_t, _ = self.class_transformer(feats, None, - self.query_embed.weight[:-1], - self.class_input_proj(mask_features), - tasks if self.use_task_norm else None) - out_t = out_t[0].permute(1, 0, 2) - - out = torch.cat([out_t, tasks], dim=0) - - output = out.clone() - - predictions_class = [] - predictions_mask = [] - - # prediction heads on learnable query features - outputs_class, outputs_mask, attn_mask = self.forward_prediction_heads(output, mask_features, attn_mask_target_size=size_list[0], i=0) - 
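- # Deep supervision: the prediction set from these initial queries is kept alongside the per-layer predictions below, giving num_layers + 1 sets for the loss (cf. the NOTE in from_config).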
predictions_class.append(outputs_class) - predictions_mask.append(outputs_mask) - - for i in range(self.num_layers): - level_index = i % self.num_feature_levels - attn_mask[torch.where(attn_mask.sum(-1) == attn_mask.shape[-1])] = False - # attention: cross-attention first - output = self.transformer_cross_attention_layers[i]( - output, src[level_index], - memory_mask=attn_mask, - memory_key_padding_mask=None, # here we do not apply masking on padded region - pos=pos[level_index], query_pos=query_embed - ) - - output = self.transformer_self_attention_layers[i]( - output, tgt_mask=None, - tgt_key_padding_mask=None, - query_pos=query_embed - ) - - # FFN - output = self.transformer_ffn_layers[i]( - output - ) - - outputs_class, outputs_mask, attn_mask = self.forward_prediction_heads(output, mask_features, attn_mask_target_size=size_list[(i + 1) % self.num_feature_levels], i=i+1) - predictions_class.append(outputs_class) - predictions_mask.append(outputs_mask) - - assert len(predictions_class) == self.num_layers + 1 - if self.is_train: - query_class = out.permute(1, 0, 2) - else: - query_class = None - out = { - 'contrastive_logits': query_class, - 'pred_logits': predictions_class[-1], - 'pred_masks': predictions_mask[-1], - 'aux_outputs': self._set_aux_loss( - predictions_class if self.mask_classification else None, - predictions_mask, - ) - } - - return out - - def forward_prediction_heads(self, output, mask_features, attn_mask_target_size, i): - decoder_output = self.decoder_norm(output) - decoder_output = decoder_output.transpose(0, 1) - outputs_class = self.class_embed(decoder_output) - mask_embed = self.mask_embed(decoder_output) - outputs_mask = torch.einsum("bqc,bchw->bqhw", mask_embed, mask_features) - - # NOTE: prediction is of higher-resolution - # [B, Q, H, W] -> [B, Q, H*W] -> [B, h, Q, H*W] -> [B*h, Q, HW] - attn_mask = F.interpolate(outputs_mask, size=attn_mask_target_size, mode="bilinear", align_corners=False) - - # save_attn_masks(attn_mask.sigmoid() < 0.5, fname=f'demo/maps/{i}_pre_bool') - - # must use bool type - # If a BoolTensor is provided, positions with ``True`` are not allowed to attend while ``False`` values will be unchanged. - attn_mask = (attn_mask.sigmoid().flatten(2).unsqueeze(1).repeat(1, self.num_heads, 1, 1).flatten(0, 1) < 0.5).bool() - attn_mask = attn_mask.detach() - - return outputs_class, outputs_mask, attn_mask - - @torch.jit.unused - def _set_aux_loss(self, outputs_class, outputs_seg_masks): - # this is a workaround to make torchscript happy, as torchscript - # doesn't support dictionary with non-homogeneous values, such - # as a dict having both a Tensor and a list. - if self.mask_classification: - aux_list = [ - {"pred_logits": a, "pred_masks": b} - for a, b in zip(outputs_class[:-1], outputs_seg_masks[:-1]) - ] - else: - aux_list = [{"pred_masks": b} for b, in outputs_seg_masks[:-1]] - - return aux_list \ No newline at end of file diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/cnn/bricks/conv.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/cnn/bricks/conv.py deleted file mode 100644 index cf54491997a48ac3e7fadc4183ab7bf3e831024c..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/cnn/bricks/conv.py +++ /dev/null @@ -1,44 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
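- # Usage sketch: the cfg dict picks the registered layer type and supplies any extra constructor kwargs, e.g. build_conv_layer(dict(type='Conv2d'), 3, 64, kernel_size=3); with cfg=None the builder falls back to a plain nn.Conv2d.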
-from torch import nn - -from .registry import CONV_LAYERS - -CONV_LAYERS.register_module('Conv1d', module=nn.Conv1d) -CONV_LAYERS.register_module('Conv2d', module=nn.Conv2d) -CONV_LAYERS.register_module('Conv3d', module=nn.Conv3d) -CONV_LAYERS.register_module('Conv', module=nn.Conv2d) - - -def build_conv_layer(cfg, *args, **kwargs): - """Build convolution layer. - - Args: - cfg (None or dict): The conv layer config, which should contain: - - type (str): Layer type. - - layer args: Args needed to instantiate a conv layer. - args (argument list): Arguments passed to the `__init__` - method of the corresponding conv layer. - kwargs (keyword arguments): Keyword arguments passed to the `__init__` - method of the corresponding conv layer. - - Returns: - nn.Module: Created conv layer. - """ - if cfg is None: - cfg_ = dict(type='Conv2d') - else: - if not isinstance(cfg, dict): - raise TypeError('cfg must be a dict') - if 'type' not in cfg: - raise KeyError('the cfg dict must contain the key "type"') - cfg_ = cfg.copy() - - layer_type = cfg_.pop('type') - if layer_type not in CONV_LAYERS: - raise KeyError(f'Unrecognized conv type {layer_type}') - else: - conv_layer = CONV_LAYERS.get(layer_type) - - layer = conv_layer(*args, **kwargs, **cfg_) - - return layer diff --git a/spaces/PaulHilders/CLIPGroundingExplainability/app.py b/spaces/PaulHilders/CLIPGroundingExplainability/app.py deleted file mode 100644 index 0732b5ced06d8e39c4340484869ddebe49998461..0000000000000000000000000000000000000000 --- a/spaces/PaulHilders/CLIPGroundingExplainability/app.py +++ /dev/null @@ -1,67 +0,0 @@ -import sys -import gradio as gr - -# sys.path.append("../") -sys.path.append("CLIP_explainability/Transformer-MM-Explainability/") - -import torch -import CLIP.clip as clip - - -from clip_grounding.utils.image import pad_to_square -from clip_grounding.datasets.png import ( - overlay_relevance_map_on_image, -) -from CLIP_explainability.utils import interpret, show_img_heatmap, show_heatmap_on_text - -clip.clip._MODELS = { - "ViT-B/32": "https://openaipublic.azureedge.net/clip/models/40d365715913c9da98579312b702a82c18be219cc2a73407c4526f58eba950af/ViT-B-32.pt", - "ViT-B/16": "https://openaipublic.azureedge.net/clip/models/5806e77cd80f8b59890b7e101eabd078d9fb84e6937f9e85e4ecb61988df416f/ViT-B-16.pt", -} - -device = "cuda" if torch.cuda.is_available() else "cpu" -model, preprocess = clip.load("ViT-B/32", device=device, jit=False) - -# Gradio Section: -def run_demo(image, text): - orig_image = pad_to_square(image) - img = preprocess(orig_image).unsqueeze(0).to(device) - text_input = clip.tokenize([text]).to(device) - - R_text, R_image = interpret(model=model, image=img, texts=text_input, device=device) - - image_relevance = show_img_heatmap(R_image[0], img, orig_image=orig_image, device=device, show=False) - overlapped = overlay_relevance_map_on_image(image, image_relevance) - - text_scores, text_tokens_decoded = show_heatmap_on_text(text, text_input, R_text[0], show=False) - - highlighted_text = [] - for i, token in enumerate(text_tokens_decoded): - highlighted_text.append((str(token), float(text_scores[i]))) - - return overlapped, highlighted_text - -input_img = gr.inputs.Image(type='pil', label="Original Image") -input_txt = "text" -inputs = [input_img, input_txt] - -outputs = [gr.inputs.Image(type='pil', label="Output Image"), "highlight"] - - -iface = gr.Interface(fn=run_demo, - inputs=inputs, - outputs=outputs, - title="CLIP Grounding Explainability", - description="A demonstration based on the Generic Attention-model
Explainability method for Interpreting Bi-Modal Transformers by Chefer et al. (2021): https://github.com/hila-chefer/Transformer-MM-Explainability.", - examples=[["example_images/London.png", "London Eye"], - ["example_images/London.png", "Big Ben"], - ["example_images/harrypotter.png", "Harry"], - ["example_images/harrypotter.png", "Hermione"], - ["example_images/harrypotter.png", "Ron"], - ["example_images/Amsterdam.png", "Amsterdam canal"], - ["example_images/Amsterdam.png", "Old buildings"], - ["example_images/Amsterdam.png", "Pink flowers"], - ["example_images/dogs_on_bed.png", "Two dogs"], - ["example_images/dogs_on_bed.png", "Book"], - ["example_images/dogs_on_bed.png", "Cat"]]) -iface.launch(debug=True) \ No newline at end of file diff --git a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/data/transforms/build.py b/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/data/transforms/build.py deleted file mode 100644 index 3ed88faea6d328c3ce7e4a9a6361eea6b2646099..0000000000000000000000000000000000000000 --- a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/data/transforms/build.py +++ /dev/null @@ -1,45 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. -from . import transforms as T - - -def build_transforms(cfg, is_train=True): - if is_train: - if len(cfg.AUGMENT.MULT_MIN_SIZE_TRAIN)>0: - min_size = cfg.AUGMENT.MULT_MIN_SIZE_TRAIN - else: - min_size = cfg.INPUT.MIN_SIZE_TRAIN - max_size = cfg.INPUT.MAX_SIZE_TRAIN - flip_horizontal_prob = cfg.AUGMENT.FLIP_PROB_TRAIN - flip_vertical_prob = cfg.AUGMENT.VERTICAL_FLIP_PROB_TRAIN - brightness = cfg.AUGMENT.BRIGHTNESS - contrast = cfg.AUGMENT.CONTRAST - saturation = cfg.AUGMENT.SATURATION - hue = cfg.AUGMENT.HUE - - crop_prob = cfg.AUGMENT.CROP_PROB - min_ious = cfg.AUGMENT.CROP_MIN_IOUS - min_crop_size = cfg.AUGMENT.CROP_MIN_SIZE - - else: - min_size = cfg.INPUT.MIN_SIZE_TEST - max_size = cfg.INPUT.MAX_SIZE_TEST - flip_horizontal_prob = 0.0 - - fix_res = cfg.INPUT.FIX_RES - if cfg.INPUT.FORMAT != '': - input_format = cfg.INPUT.FORMAT - elif cfg.INPUT.TO_BGR255: - input_format = 'bgr255' - normalize_transform = T.Normalize( - mean=cfg.INPUT.PIXEL_MEAN, std=cfg.INPUT.PIXEL_STD, format=input_format - ) - - transform = T.Compose( - [ - T.Resize(min_size, max_size, restrict=fix_res), - T.RandomHorizontalFlip(flip_horizontal_prob), - T.ToTensor(), - normalize_transform, - ] - ) - return transform diff --git a/spaces/Pinwheel/SuperGlue-Image-Matching/models/superpoint.py b/spaces/Pinwheel/SuperGlue-Image-Matching/models/superpoint.py deleted file mode 100644 index b837d938f755850180ddc168e957742e874adacd..0000000000000000000000000000000000000000 --- a/spaces/Pinwheel/SuperGlue-Image-Matching/models/superpoint.py +++ /dev/null @@ -1,202 +0,0 @@ -# %BANNER_BEGIN% -# --------------------------------------------------------------------- -# %COPYRIGHT_BEGIN% -# -# Magic Leap, Inc. ("COMPANY") CONFIDENTIAL -# -# Unpublished Copyright (c) 2020 -# Magic Leap, Inc., All Rights Reserved. -# -# NOTICE: All information contained herein is, and remains the property -# of COMPANY. The intellectual and technical concepts contained herein -# are proprietary to COMPANY and may be covered by U.S. and Foreign -# Patents, patents in process, and are protected by trade secret or -# copyright law. Dissemination of this information or reproduction of -# this material is strictly forbidden unless prior written permission is -# obtained from COMPANY.
Access to the source code contained herein is -# hereby forbidden to anyone except current COMPANY employees, managers -# or contractors who have executed Confidentiality and Non-disclosure -# agreements explicitly covering such access. -# -# The copyright notice above does not evidence any actual or intended -# publication or disclosure of this source code, which includes -# information that is confidential and/or proprietary, and is a trade -# secret, of COMPANY. ANY REPRODUCTION, MODIFICATION, DISTRIBUTION, -# PUBLIC PERFORMANCE, OR PUBLIC DISPLAY OF OR THROUGH USE OF THIS -# SOURCE CODE WITHOUT THE EXPRESS WRITTEN CONSENT OF COMPANY IS -# STRICTLY PROHIBITED, AND IN VIOLATION OF APPLICABLE LAWS AND -# INTERNATIONAL TREATIES. THE RECEIPT OR POSSESSION OF THIS SOURCE -# CODE AND/OR RELATED INFORMATION DOES NOT CONVEY OR IMPLY ANY RIGHTS -# TO REPRODUCE, DISCLOSE OR DISTRIBUTE ITS CONTENTS, OR TO MANUFACTURE, -# USE, OR SELL ANYTHING THAT IT MAY DESCRIBE, IN WHOLE OR IN PART. -# -# %COPYRIGHT_END% -# ---------------------------------------------------------------------- -# %AUTHORS_BEGIN% -# -# Originating Authors: Paul-Edouard Sarlin -# -# %AUTHORS_END% -# --------------------------------------------------------------------*/ -# %BANNER_END% - -from pathlib import Path -import torch -from torch import nn - -def simple_nms(scores, nms_radius: int): - """ Fast Non-maximum suppression to remove nearby points """ - assert(nms_radius >= 0) - - def max_pool(x): - return torch.nn.functional.max_pool2d( - x, kernel_size=nms_radius*2+1, stride=1, padding=nms_radius) - - zeros = torch.zeros_like(scores) - max_mask = scores == max_pool(scores) - for _ in range(2): - supp_mask = max_pool(max_mask.float()) > 0 - supp_scores = torch.where(supp_mask, zeros, scores) - new_max_mask = supp_scores == max_pool(supp_scores) - max_mask = max_mask | (new_max_mask & (~supp_mask)) - return torch.where(max_mask, scores, zeros) - - -def remove_borders(keypoints, scores, border: int, height: int, width: int): - """ Removes keypoints too close to the border """ - mask_h = (keypoints[:, 0] >= border) & (keypoints[:, 0] < (height - border)) - mask_w = (keypoints[:, 1] >= border) & (keypoints[:, 1] < (width - border)) - mask = mask_h & mask_w - return keypoints[mask], scores[mask] - - -def top_k_keypoints(keypoints, scores, k: int): - if k >= len(keypoints): - return keypoints, scores - scores, indices = torch.topk(scores, k, dim=0) - return keypoints[indices], scores - - -def sample_descriptors(keypoints, descriptors, s: int = 8): - """ Interpolate descriptors at keypoint locations """ - b, c, h, w = descriptors.shape - keypoints = keypoints - s / 2 + 0.5 - keypoints /= torch.tensor([(w*s - s/2 - 0.5), (h*s - s/2 - 0.5)], - ).to(keypoints)[None] - keypoints = keypoints*2 - 1 # normalize to (-1, 1) - args = {'align_corners': True} if torch.__version__ >= '1.3' else {} - descriptors = torch.nn.functional.grid_sample( - descriptors, keypoints.view(b, 1, -1, 2), mode='bilinear', **args) - descriptors = torch.nn.functional.normalize( - descriptors.reshape(b, c, -1), p=2, dim=1) - return descriptors - - -class SuperPoint(nn.Module): - """SuperPoint Convolutional Detector and Descriptor - - SuperPoint: Self-Supervised Interest Point Detection and - Description. Daniel DeTone, Tomasz Malisiewicz, and Andrew - Rabinovich. In CVPRW, 2019. 
https://arxiv.org/abs/1712.07629 - - """ - default_config = { - 'descriptor_dim': 256, - 'nms_radius': 4, - 'keypoint_threshold': 0.005, - 'max_keypoints': -1, - 'remove_borders': 4, - } - - def __init__(self, config): - super().__init__() - self.config = {**self.default_config, **config} - - self.relu = nn.ReLU(inplace=True) - self.pool = nn.MaxPool2d(kernel_size=2, stride=2) - c1, c2, c3, c4, c5 = 64, 64, 128, 128, 256 - - self.conv1a = nn.Conv2d(1, c1, kernel_size=3, stride=1, padding=1) - self.conv1b = nn.Conv2d(c1, c1, kernel_size=3, stride=1, padding=1) - self.conv2a = nn.Conv2d(c1, c2, kernel_size=3, stride=1, padding=1) - self.conv2b = nn.Conv2d(c2, c2, kernel_size=3, stride=1, padding=1) - self.conv3a = nn.Conv2d(c2, c3, kernel_size=3, stride=1, padding=1) - self.conv3b = nn.Conv2d(c3, c3, kernel_size=3, stride=1, padding=1) - self.conv4a = nn.Conv2d(c3, c4, kernel_size=3, stride=1, padding=1) - self.conv4b = nn.Conv2d(c4, c4, kernel_size=3, stride=1, padding=1) - - self.convPa = nn.Conv2d(c4, c5, kernel_size=3, stride=1, padding=1) - self.convPb = nn.Conv2d(c5, 65, kernel_size=1, stride=1, padding=0) - - self.convDa = nn.Conv2d(c4, c5, kernel_size=3, stride=1, padding=1) - self.convDb = nn.Conv2d( - c5, self.config['descriptor_dim'], - kernel_size=1, stride=1, padding=0) - - path = Path(__file__).parent / 'weights/superpoint_v1.pth' - self.load_state_dict(torch.load(str(path))) - - mk = self.config['max_keypoints'] - if mk == 0 or mk < -1: - raise ValueError('\"max_keypoints\" must be positive or \"-1\"') - - print('Loaded SuperPoint model') - - def forward(self, data): - """ Compute keypoints, scores, descriptors for image """ - # Shared Encoder - x = self.relu(self.conv1a(data['image'])) - x = self.relu(self.conv1b(x)) - x = self.pool(x) - x = self.relu(self.conv2a(x)) - x = self.relu(self.conv2b(x)) - x = self.pool(x) - x = self.relu(self.conv3a(x)) - x = self.relu(self.conv3b(x)) - x = self.pool(x) - x = self.relu(self.conv4a(x)) - x = self.relu(self.conv4b(x)) - - # Compute the dense keypoint scores - cPa = self.relu(self.convPa(x)) - scores = self.convPb(cPa) - scores = torch.nn.functional.softmax(scores, 1)[:, :-1] - b, _, h, w = scores.shape - scores = scores.permute(0, 2, 3, 1).reshape(b, h, w, 8, 8) - scores = scores.permute(0, 1, 3, 2, 4).reshape(b, h*8, w*8) - scores = simple_nms(scores, self.config['nms_radius']) - - # Extract keypoints - keypoints = [ - torch.nonzero(s > self.config['keypoint_threshold']) - for s in scores] - scores = [s[tuple(k.t())] for s, k in zip(scores, keypoints)] - - # Discard keypoints near the image borders - keypoints, scores = list(zip(*[ - remove_borders(k, s, self.config['remove_borders'], h*8, w*8) - for k, s in zip(keypoints, scores)])) - - # Keep the k keypoints with highest score - if self.config['max_keypoints'] >= 0: - keypoints, scores = list(zip(*[ - top_k_keypoints(k, s, self.config['max_keypoints']) - for k, s in zip(keypoints, scores)])) - - # Convert (h, w) to (x, y) - keypoints = [torch.flip(k, [1]).float() for k in keypoints] - - # Compute the dense descriptors - cDa = self.relu(self.convDa(x)) - descriptors = self.convDb(cDa) - descriptors = torch.nn.functional.normalize(descriptors, p=2, dim=1) - - # Extract descriptors - descriptors = [sample_descriptors(k[None], d[None], 8)[0] - for k, d in zip(keypoints, descriptors)] - - return { - 'keypoints': keypoints, - 'scores': scores, - 'descriptors': descriptors, - } diff --git a/spaces/Plachta/VALL-E-X/models/macros.py b/spaces/Plachta/VALL-E-X/models/macros.py deleted file 
mode 100644 index cbc54966f43b2ef27d87c3b4bc69cb866d2b8fd0..0000000000000000000000000000000000000000 --- a/spaces/Plachta/VALL-E-X/models/macros.py +++ /dev/null @@ -1,11 +0,0 @@ -# Text -NUM_TEXT_TOKENS = 2048 - -# Audio -NUM_AUDIO_TOKENS = 1024 # EnCodec RVQ bins -NUM_MEL_BINS = 100 # BigVGAN bigvgan_24khz_100band - - -# Speaker -NUM_SPEAKER_CLASSES = 4096 -SPEAKER_EMBEDDING_DIM = 64 diff --git a/spaces/RMXK/RVC_HFF/lib/uvr5_pack/lib_v5/nets_537238KB.py b/spaces/RMXK/RVC_HFF/lib/uvr5_pack/lib_v5/nets_537238KB.py deleted file mode 100644 index a1bb530e006482704f234c2e739a695174142941..0000000000000000000000000000000000000000 --- a/spaces/RMXK/RVC_HFF/lib/uvr5_pack/lib_v5/nets_537238KB.py +++ /dev/null @@ -1,123 +0,0 @@ -import torch -import numpy as np -from torch import nn -import torch.nn.functional as F - -from . import layers_537238KB as layers - - -class BaseASPPNet(nn.Module): - def __init__(self, nin, ch, dilations=(4, 8, 16)): - super(BaseASPPNet, self).__init__() - self.enc1 = layers.Encoder(nin, ch, 3, 2, 1) - self.enc2 = layers.Encoder(ch, ch * 2, 3, 2, 1) - self.enc3 = layers.Encoder(ch * 2, ch * 4, 3, 2, 1) - self.enc4 = layers.Encoder(ch * 4, ch * 8, 3, 2, 1) - - self.aspp = layers.ASPPModule(ch * 8, ch * 16, dilations) - - self.dec4 = layers.Decoder(ch * (8 + 16), ch * 8, 3, 1, 1) - self.dec3 = layers.Decoder(ch * (4 + 8), ch * 4, 3, 1, 1) - self.dec2 = layers.Decoder(ch * (2 + 4), ch * 2, 3, 1, 1) - self.dec1 = layers.Decoder(ch * (1 + 2), ch, 3, 1, 1) - - def __call__(self, x): - h, e1 = self.enc1(x) - h, e2 = self.enc2(h) - h, e3 = self.enc3(h) - h, e4 = self.enc4(h) - - h = self.aspp(h) - - h = self.dec4(h, e4) - h = self.dec3(h, e3) - h = self.dec2(h, e2) - h = self.dec1(h, e1) - - return h - - -class CascadedASPPNet(nn.Module): - def __init__(self, n_fft): - super(CascadedASPPNet, self).__init__() - self.stg1_low_band_net = BaseASPPNet(2, 64) - self.stg1_high_band_net = BaseASPPNet(2, 64) - - self.stg2_bridge = layers.Conv2DBNActiv(66, 32, 1, 1, 0) - self.stg2_full_band_net = BaseASPPNet(32, 64) - - self.stg3_bridge = layers.Conv2DBNActiv(130, 64, 1, 1, 0) - self.stg3_full_band_net = BaseASPPNet(64, 128) - - self.out = nn.Conv2d(128, 2, 1, bias=False) - self.aux1_out = nn.Conv2d(64, 2, 1, bias=False) - self.aux2_out = nn.Conv2d(64, 2, 1, bias=False) - - self.max_bin = n_fft // 2 - self.output_bin = n_fft // 2 + 1 - - self.offset = 128 - - def forward(self, x, aggressiveness=None): - mix = x.detach() - x = x.clone() - - x = x[:, :, : self.max_bin] - - bandw = x.size()[2] // 2 - aux1 = torch.cat( - [ - self.stg1_low_band_net(x[:, :, :bandw]), - self.stg1_high_band_net(x[:, :, bandw:]), - ], - dim=2, - ) - - h = torch.cat([x, aux1], dim=1) - aux2 = self.stg2_full_band_net(self.stg2_bridge(h)) - - h = torch.cat([x, aux1, aux2], dim=1) - h = self.stg3_full_band_net(self.stg3_bridge(h)) - - mask = torch.sigmoid(self.out(h)) - mask = F.pad( - input=mask, - pad=(0, 0, 0, self.output_bin - mask.size()[2]), - mode="replicate", - ) - - if self.training: - aux1 = torch.sigmoid(self.aux1_out(aux1)) - aux1 = F.pad( - input=aux1, - pad=(0, 0, 0, self.output_bin - aux1.size()[2]), - mode="replicate", - ) - aux2 = torch.sigmoid(self.aux2_out(aux2)) - aux2 = F.pad( - input=aux2, - pad=(0, 0, 0, self.output_bin - aux2.size()[2]), - mode="replicate", - ) - return mask * mix, aux1 * mix, aux2 * mix - else: - if aggressiveness: - mask[:, :, : aggressiveness["split_bin"]] = torch.pow( - mask[:, :, : aggressiveness["split_bin"]], - 1 + aggressiveness["value"] / 3, - ) - mask[:, :, 
aggressiveness["split_bin"] :] = torch.pow( - mask[:, :, aggressiveness["split_bin"] :], - 1 + aggressiveness["value"], - ) - - return mask * mix - - def predict(self, x_mag, aggressiveness=None): - h = self.forward(x_mag, aggressiveness) - - if self.offset > 0: - h = h[:, :, :, self.offset : -self.offset] - assert h.size()[3] > 0 - - return h diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/cli/command_context.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/cli/command_context.py deleted file mode 100644 index 139995ac3f109a82664e4913f7ebc32ecf7617e1..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/cli/command_context.py +++ /dev/null @@ -1,27 +0,0 @@ -from contextlib import ExitStack, contextmanager -from typing import ContextManager, Generator, TypeVar - -_T = TypeVar("_T", covariant=True) - - -class CommandContextMixIn: - def __init__(self) -> None: - super().__init__() - self._in_main_context = False - self._main_context = ExitStack() - - @contextmanager - def main_context(self) -> Generator[None, None, None]: - assert not self._in_main_context - - self._in_main_context = True - try: - with self._main_context: - yield - finally: - self._in_main_context = False - - def enter_context(self, context_provider: ContextManager[_T]) -> _T: - assert self._in_main_context - - return self._main_context.enter_context(context_provider) diff --git a/spaces/Realcat/image-matching-webui/third_party/TopicFM/viz/configs/__init__.py b/spaces/Realcat/image-matching-webui/third_party/TopicFM/viz/configs/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Reeve/Ohayou_Face/torch_utils/misc.py b/spaces/Reeve/Ohayou_Face/torch_utils/misc.py deleted file mode 100644 index 0f158cd871e1df433b018a7658ca24dbddc4ea7c..0000000000000000000000000000000000000000 --- a/spaces/Reeve/Ohayou_Face/torch_utils/misc.py +++ /dev/null @@ -1,262 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -import re -import contextlib -import numpy as np -import torch -import warnings -import dnnlib - -#---------------------------------------------------------------------------- -# Cached construction of constant tensors. Avoids CPU=>GPU copy when the -# same constant is used multiple times. 
- -_constant_cache = dict() - -def constant(value, shape=None, dtype=None, device=None, memory_format=None): - value = np.asarray(value) - if shape is not None: - shape = tuple(shape) - if dtype is None: - dtype = torch.get_default_dtype() - if device is None: - device = torch.device('cpu') - if memory_format is None: - memory_format = torch.contiguous_format - - key = (value.shape, value.dtype, value.tobytes(), shape, dtype, device, memory_format) - tensor = _constant_cache.get(key, None) - if tensor is None: - tensor = torch.as_tensor(value.copy(), dtype=dtype, device=device) - if shape is not None: - tensor, _ = torch.broadcast_tensors(tensor, torch.empty(shape)) - tensor = tensor.contiguous(memory_format=memory_format) - _constant_cache[key] = tensor - return tensor - -#---------------------------------------------------------------------------- -# Replace NaN/Inf with specified numerical values. - -try: - nan_to_num = torch.nan_to_num # 1.8.0a0 -except AttributeError: - def nan_to_num(input, nan=0.0, posinf=None, neginf=None, *, out=None): # pylint: disable=redefined-builtin - assert isinstance(input, torch.Tensor) - if posinf is None: - posinf = torch.finfo(input.dtype).max - if neginf is None: - neginf = torch.finfo(input.dtype).min - assert nan == 0 - return torch.clamp(input.unsqueeze(0).nansum(0), min=neginf, max=posinf, out=out) - -#---------------------------------------------------------------------------- -# Symbolic assert. - -try: - symbolic_assert = torch._assert # 1.8.0a0 # pylint: disable=protected-access -except AttributeError: - symbolic_assert = torch.Assert # 1.7.0 - -#---------------------------------------------------------------------------- -# Context manager to suppress known warnings in torch.jit.trace(). - -class suppress_tracer_warnings(warnings.catch_warnings): - def __enter__(self): - super().__enter__() - warnings.simplefilter('ignore', category=torch.jit.TracerWarning) - return self - -#---------------------------------------------------------------------------- -# Assert that the shape of a tensor matches the given list of integers. -# None indicates that the size of a dimension is allowed to vary. -# Performs symbolic assertion when used in torch.jit.trace(). - -def assert_shape(tensor, ref_shape): - if tensor.ndim != len(ref_shape): - raise AssertionError(f'Wrong number of dimensions: got {tensor.ndim}, expected {len(ref_shape)}') - for idx, (size, ref_size) in enumerate(zip(tensor.shape, ref_shape)): - if ref_size is None: - pass - elif isinstance(ref_size, torch.Tensor): - with suppress_tracer_warnings(): # as_tensor results are registered as constants - symbolic_assert(torch.equal(torch.as_tensor(size), ref_size), f'Wrong size for dimension {idx}') - elif isinstance(size, torch.Tensor): - with suppress_tracer_warnings(): # as_tensor results are registered as constants - symbolic_assert(torch.equal(size, torch.as_tensor(ref_size)), f'Wrong size for dimension {idx}: expected {ref_size}') - elif size != ref_size: - raise AssertionError(f'Wrong size for dimension {idx}: got {size}, expected {ref_size}') - -#---------------------------------------------------------------------------- -# Function decorator that calls torch.autograd.profiler.record_function(). 
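- # Usage sketch (names hypothetical): placing @profiled_function above e.g. def synthesis_step(z): return G(z) makes each call appear as a named range in torch profiler traces.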
- -def profiled_function(fn): - def decorator(*args, **kwargs): - with torch.autograd.profiler.record_function(fn.__name__): - return fn(*args, **kwargs) - decorator.__name__ = fn.__name__ - return decorator - -#---------------------------------------------------------------------------- -# Sampler for torch.utils.data.DataLoader that loops over the dataset -# indefinitely, shuffling items as it goes. - -class InfiniteSampler(torch.utils.data.Sampler): - def __init__(self, dataset, rank=0, num_replicas=1, shuffle=True, seed=0, window_size=0.5): - assert len(dataset) > 0 - assert num_replicas > 0 - assert 0 <= rank < num_replicas - assert 0 <= window_size <= 1 - super().__init__(dataset) - self.dataset = dataset - self.rank = rank - self.num_replicas = num_replicas - self.shuffle = shuffle - self.seed = seed - self.window_size = window_size - - def __iter__(self): - order = np.arange(len(self.dataset)) - rnd = None - window = 0 - if self.shuffle: - rnd = np.random.RandomState(self.seed) - rnd.shuffle(order) - window = int(np.rint(order.size * self.window_size)) - - idx = 0 - while True: - i = idx % order.size - if idx % self.num_replicas == self.rank: - yield order[i] - if window >= 2: - j = (i - rnd.randint(window)) % order.size - order[i], order[j] = order[j], order[i] - idx += 1 - -#---------------------------------------------------------------------------- -# Utilities for operating with torch.nn.Module parameters and buffers. - -def params_and_buffers(module): - assert isinstance(module, torch.nn.Module) - return list(module.parameters()) + list(module.buffers()) - -def named_params_and_buffers(module): - assert isinstance(module, torch.nn.Module) - return list(module.named_parameters()) + list(module.named_buffers()) - -def copy_params_and_buffers(src_module, dst_module, require_all=False): - assert isinstance(src_module, torch.nn.Module) - assert isinstance(dst_module, torch.nn.Module) - src_tensors = {name: tensor for name, tensor in named_params_and_buffers(src_module)} - for name, tensor in named_params_and_buffers(dst_module): - assert (name in src_tensors) or (not require_all) - if name in src_tensors: - tensor.copy_(src_tensors[name].detach()).requires_grad_(tensor.requires_grad) - -#---------------------------------------------------------------------------- -# Context manager for easily enabling/disabling DistributedDataParallel -# synchronization. - -@contextlib.contextmanager -def ddp_sync(module, sync): - assert isinstance(module, torch.nn.Module) - if sync or not isinstance(module, torch.nn.parallel.DistributedDataParallel): - yield - else: - with module.no_sync(): - yield - -#---------------------------------------------------------------------------- -# Check DistributedDataParallel consistency across processes. - -def check_ddp_consistency(module, ignore_regex=None): - assert isinstance(module, torch.nn.Module) - for name, tensor in named_params_and_buffers(module): - fullname = type(module).__name__ + '.' + name - if ignore_regex is not None and re.fullmatch(ignore_regex, fullname): - continue - tensor = tensor.detach() - other = tensor.clone() - torch.distributed.broadcast(tensor=other, src=0) - assert (nan_to_num(tensor) == nan_to_num(other)).all(), fullname - -#---------------------------------------------------------------------------- -# Print summary table of module hierarchy. 
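- # Usage sketch (G, z, c hypothetical): imgs = print_module_summary(G, [z, c]) runs one forward pass, prints per-module parameter/buffer counts and output shapes, and returns the module outputs.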
-
-def print_module_summary(module, inputs, max_nesting=3, skip_redundant=True):
-    assert isinstance(module, torch.nn.Module)
-    assert not isinstance(module, torch.jit.ScriptModule)
-    assert isinstance(inputs, (tuple, list))
-
-    # Register hooks.
-    entries = []
-    nesting = [0]
-    def pre_hook(_mod, _inputs):
-        nesting[0] += 1
-    def post_hook(mod, _inputs, outputs):
-        nesting[0] -= 1
-        if nesting[0] <= max_nesting:
-            outputs = list(outputs) if isinstance(outputs, (tuple, list)) else [outputs]
-            outputs = [t for t in outputs if isinstance(t, torch.Tensor)]
-            entries.append(dnnlib.EasyDict(mod=mod, outputs=outputs))
-    hooks = [mod.register_forward_pre_hook(pre_hook) for mod in module.modules()]
-    hooks += [mod.register_forward_hook(post_hook) for mod in module.modules()]
-
-    # Run module.
-    outputs = module(*inputs)
-    for hook in hooks:
-        hook.remove()
-
-    # Identify unique outputs, parameters, and buffers.
-    tensors_seen = set()
-    for e in entries:
-        e.unique_params = [t for t in e.mod.parameters() if id(t) not in tensors_seen]
-        e.unique_buffers = [t for t in e.mod.buffers() if id(t) not in tensors_seen]
-        e.unique_outputs = [t for t in e.outputs if id(t) not in tensors_seen]
-        tensors_seen |= {id(t) for t in e.unique_params + e.unique_buffers + e.unique_outputs}
-
-    # Filter out redundant entries.
-    if skip_redundant:
-        entries = [e for e in entries if len(e.unique_params) or len(e.unique_buffers) or len(e.unique_outputs)]
-
-    # Construct table.
-    rows = [[type(module).__name__, 'Parameters', 'Buffers', 'Output shape', 'Datatype']]
-    rows += [['---'] * len(rows[0])]
-    param_total = 0
-    buffer_total = 0
-    submodule_names = {mod: name for name, mod in module.named_modules()}
-    for e in entries:
-        name = '<top-level>' if e.mod is module else submodule_names[e.mod]
-        param_size = sum(t.numel() for t in e.unique_params)
-        buffer_size = sum(t.numel() for t in e.unique_buffers)
-        output_shapes = [str(list(t.shape)) for t in e.outputs]
-        output_dtypes = [str(t.dtype).split('.')[-1] for t in e.outputs]
-        rows += [[
-            name + (':0' if len(e.outputs) >= 2 else ''),
-            str(param_size) if param_size else '-',
-            str(buffer_size) if buffer_size else '-',
-            (output_shapes + ['-'])[0],
-            (output_dtypes + ['-'])[0],
-        ]]
-        for idx in range(1, len(e.outputs)):
-            rows += [[name + f':{idx}', '-', '-', output_shapes[idx], output_dtypes[idx]]]
-        param_total += param_size
-        buffer_total += buffer_size
-    rows += [['---'] * len(rows[0])]
-    rows += [['Total', str(param_total), str(buffer_total), '-', '-']]
-
-    # Print table.
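-    # (each cell is padded with trailing spaces to its column's maximum
-    #  width, so the rows print as an aligned plain-text table)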
- widths = [max(len(cell) for cell in column) for column in zip(*rows)] - print() - for row in rows: - print(' '.join(cell + ' ' * (width - len(cell)) for cell, width in zip(row, widths))) - print() - return outputs - -#---------------------------------------------------------------------------- diff --git a/spaces/Ritori/TTS_Yui/text/numbers.py b/spaces/Ritori/TTS_Yui/text/numbers.py deleted file mode 100644 index 0d5f7fa818a45ecf132627d240afac653e148070..0000000000000000000000000000000000000000 --- a/spaces/Ritori/TTS_Yui/text/numbers.py +++ /dev/null @@ -1,71 +0,0 @@ -""" from https://github.com/keithito/tacotron """ - -import inflect -import re - - -_inflect = inflect.engine() -_comma_number_re = re.compile(r'([0-9][0-9\,]+[0-9])') -_decimal_number_re = re.compile(r'([0-9]+\.[0-9]+)') -_pounds_re = re.compile(r'£([0-9\,]*[0-9]+)') -_dollars_re = re.compile(r'\$([0-9\.\,]*[0-9]+)') -_ordinal_re = re.compile(r'[0-9]+(st|nd|rd|th)') -_number_re = re.compile(r'[0-9]+') - - -def _remove_commas(m): - return m.group(1).replace(',', '') - - -def _expand_decimal_point(m): - return m.group(1).replace('.', ' point ') - - -def _expand_dollars(m): - match = m.group(1) - parts = match.split('.') - if len(parts) > 2: - return match + ' dollars' # Unexpected format - dollars = int(parts[0]) if parts[0] else 0 - cents = int(parts[1]) if len(parts) > 1 and parts[1] else 0 - if dollars and cents: - dollar_unit = 'dollar' if dollars == 1 else 'dollars' - cent_unit = 'cent' if cents == 1 else 'cents' - return '%s %s, %s %s' % (dollars, dollar_unit, cents, cent_unit) - elif dollars: - dollar_unit = 'dollar' if dollars == 1 else 'dollars' - return '%s %s' % (dollars, dollar_unit) - elif cents: - cent_unit = 'cent' if cents == 1 else 'cents' - return '%s %s' % (cents, cent_unit) - else: - return 'zero dollars' - - -def _expand_ordinal(m): - return _inflect.number_to_words(m.group(0)) - - -def _expand_number(m): - num = int(m.group(0)) - if num > 1000 and num < 3000: - if num == 2000: - return 'two thousand' - elif num > 2000 and num < 2010: - return 'two thousand ' + _inflect.number_to_words(num % 100) - elif num % 100 == 0: - return _inflect.number_to_words(num // 100) + ' hundred' - else: - return _inflect.number_to_words(num, andword='', zero='oh', group=2).replace(', ', ' ') - else: - return _inflect.number_to_words(num, andword='') - - -def normalize_numbers(text): - text = re.sub(_comma_number_re, _remove_commas, text) - text = re.sub(_pounds_re, r'\1 pounds', text) - text = re.sub(_dollars_re, _expand_dollars, text) - text = re.sub(_decimal_number_re, _expand_decimal_point, text) - text = re.sub(_ordinal_re, _expand_ordinal, text) - text = re.sub(_number_re, _expand_number, text) - return text diff --git a/spaces/RobPruzan/automaticlitassesment/README.md b/spaces/RobPruzan/automaticlitassesment/README.md deleted file mode 100644 index 8ea880bbb57834038f35cf6efd152e41fff88736..0000000000000000000000000000000000000000 --- a/spaces/RobPruzan/automaticlitassesment/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Automaticlitassesment -emoji: ⚡ -colorFrom: indigo -colorTo: green -sdk: gradio -sdk_version: 3.1.4 -app_file: app.py -pinned: false -license: afl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/runner/hooks/logger/neptune.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/runner/hooks/logger/neptune.py deleted file mode 100644 index 
7a38772b0c93a8608f32c6357b8616e77c139dc9..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/runner/hooks/logger/neptune.py +++ /dev/null @@ -1,82 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ...dist_utils import master_only -from ..hook import HOOKS -from .base import LoggerHook - - -@HOOKS.register_module() -class NeptuneLoggerHook(LoggerHook): - """Class to log metrics to NeptuneAI. - - It requires `neptune-client` to be installed. - - Args: - init_kwargs (dict): a dict contains the initialization keys as below: - - project (str): Name of a project in a form of - namespace/project_name. If None, the value of - NEPTUNE_PROJECT environment variable will be taken. - - api_token (str): User’s API token. - If None, the value of NEPTUNE_API_TOKEN environment - variable will be taken. Note: It is strongly recommended - to use NEPTUNE_API_TOKEN environment variable rather than - placing your API token in plain text in your source code. - - name (str, optional, default is 'Untitled'): Editable name of - the run. Name is displayed in the run's Details and in - Runs table as a column. - Check https://docs.neptune.ai/api-reference/neptune#init for - more init arguments. - interval (int): Logging interval (every k iterations). - ignore_last (bool): Ignore the log of last iterations in each epoch - if less than `interval`. - reset_flag (bool): Whether to clear the output buffer after logging - by_epoch (bool): Whether EpochBasedRunner is used. - - .. _NeptuneAI: - https://docs.neptune.ai/you-should-know/logging-metadata - """ - - def __init__(self, - init_kwargs=None, - interval=10, - ignore_last=True, - reset_flag=True, - with_step=True, - by_epoch=True): - - super(NeptuneLoggerHook, self).__init__(interval, ignore_last, - reset_flag, by_epoch) - self.import_neptune() - self.init_kwargs = init_kwargs - self.with_step = with_step - - def import_neptune(self): - try: - import neptune.new as neptune - except ImportError: - raise ImportError( - 'Please run "pip install neptune-client" to install neptune') - self.neptune = neptune - self.run = None - - @master_only - def before_run(self, runner): - if self.init_kwargs: - self.run = self.neptune.init(**self.init_kwargs) - else: - self.run = self.neptune.init() - - @master_only - def log(self, runner): - tags = self.get_loggable_tags(runner) - if tags: - for tag_name, tag_value in tags.items(): - if self.with_step: - self.run[tag_name].log( - tag_value, step=self.get_iter(runner)) - else: - tags['global_step'] = self.get_iter(runner) - self.run[tag_name].log(tags) - - @master_only - def after_run(self, runner): - self.run.stop() diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/backbones/resnet.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/backbones/resnet.py deleted file mode 100644 index 3826815a6d94fdc4c54001d4c186d10ca3380e80..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/backbones/resnet.py +++ /dev/null @@ -1,663 +0,0 @@ -import torch.nn as nn -import torch.utils.checkpoint as cp -from mmcv.cnn import (build_conv_layer, build_norm_layer, build_plugin_layer, - constant_init, kaiming_init) -from mmcv.runner import load_checkpoint -from torch.nn.modules.batchnorm import _BatchNorm - -from mmdet.utils import get_root_logger -from ..builder import BACKBONES -from ..utils import ResLayer - - -class BasicBlock(nn.Module): - expansion = 1 - - def __init__(self, - 
inplanes, - planes, - stride=1, - dilation=1, - downsample=None, - style='pytorch', - with_cp=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - dcn=None, - plugins=None): - super(BasicBlock, self).__init__() - assert dcn is None, 'Not implemented yet.' - assert plugins is None, 'Not implemented yet.' - - self.norm1_name, norm1 = build_norm_layer(norm_cfg, planes, postfix=1) - self.norm2_name, norm2 = build_norm_layer(norm_cfg, planes, postfix=2) - - self.conv1 = build_conv_layer( - conv_cfg, - inplanes, - planes, - 3, - stride=stride, - padding=dilation, - dilation=dilation, - bias=False) - self.add_module(self.norm1_name, norm1) - self.conv2 = build_conv_layer( - conv_cfg, planes, planes, 3, padding=1, bias=False) - self.add_module(self.norm2_name, norm2) - - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - self.stride = stride - self.dilation = dilation - self.with_cp = with_cp - - @property - def norm1(self): - """nn.Module: normalization layer after the first convolution layer""" - return getattr(self, self.norm1_name) - - @property - def norm2(self): - """nn.Module: normalization layer after the second convolution layer""" - return getattr(self, self.norm2_name) - - def forward(self, x): - """Forward function.""" - - def _inner_forward(x): - identity = x - - out = self.conv1(x) - out = self.norm1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.norm2(out) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - out = self.relu(out) - - return out - - -class Bottleneck(nn.Module): - expansion = 4 - - def __init__(self, - inplanes, - planes, - stride=1, - dilation=1, - downsample=None, - style='pytorch', - with_cp=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - dcn=None, - plugins=None): - """Bottleneck block for ResNet. - - If style is "pytorch", the stride-two layer is the 3x3 conv layer, if - it is "caffe", the stride-two layer is the first 1x1 conv layer. 
- """ - super(Bottleneck, self).__init__() - assert style in ['pytorch', 'caffe'] - assert dcn is None or isinstance(dcn, dict) - assert plugins is None or isinstance(plugins, list) - if plugins is not None: - allowed_position = ['after_conv1', 'after_conv2', 'after_conv3'] - assert all(p['position'] in allowed_position for p in plugins) - - self.inplanes = inplanes - self.planes = planes - self.stride = stride - self.dilation = dilation - self.style = style - self.with_cp = with_cp - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.dcn = dcn - self.with_dcn = dcn is not None - self.plugins = plugins - self.with_plugins = plugins is not None - - if self.with_plugins: - # collect plugins for conv1/conv2/conv3 - self.after_conv1_plugins = [ - plugin['cfg'] for plugin in plugins - if plugin['position'] == 'after_conv1' - ] - self.after_conv2_plugins = [ - plugin['cfg'] for plugin in plugins - if plugin['position'] == 'after_conv2' - ] - self.after_conv3_plugins = [ - plugin['cfg'] for plugin in plugins - if plugin['position'] == 'after_conv3' - ] - - if self.style == 'pytorch': - self.conv1_stride = 1 - self.conv2_stride = stride - else: - self.conv1_stride = stride - self.conv2_stride = 1 - - self.norm1_name, norm1 = build_norm_layer(norm_cfg, planes, postfix=1) - self.norm2_name, norm2 = build_norm_layer(norm_cfg, planes, postfix=2) - self.norm3_name, norm3 = build_norm_layer( - norm_cfg, planes * self.expansion, postfix=3) - - self.conv1 = build_conv_layer( - conv_cfg, - inplanes, - planes, - kernel_size=1, - stride=self.conv1_stride, - bias=False) - self.add_module(self.norm1_name, norm1) - fallback_on_stride = False - if self.with_dcn: - fallback_on_stride = dcn.pop('fallback_on_stride', False) - if not self.with_dcn or fallback_on_stride: - self.conv2 = build_conv_layer( - conv_cfg, - planes, - planes, - kernel_size=3, - stride=self.conv2_stride, - padding=dilation, - dilation=dilation, - bias=False) - else: - assert self.conv_cfg is None, 'conv_cfg must be None for DCN' - self.conv2 = build_conv_layer( - dcn, - planes, - planes, - kernel_size=3, - stride=self.conv2_stride, - padding=dilation, - dilation=dilation, - bias=False) - - self.add_module(self.norm2_name, norm2) - self.conv3 = build_conv_layer( - conv_cfg, - planes, - planes * self.expansion, - kernel_size=1, - bias=False) - self.add_module(self.norm3_name, norm3) - - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - - if self.with_plugins: - self.after_conv1_plugin_names = self.make_block_plugins( - planes, self.after_conv1_plugins) - self.after_conv2_plugin_names = self.make_block_plugins( - planes, self.after_conv2_plugins) - self.after_conv3_plugin_names = self.make_block_plugins( - planes * self.expansion, self.after_conv3_plugins) - - def make_block_plugins(self, in_channels, plugins): - """make plugins for block. - - Args: - in_channels (int): Input channels of plugin. - plugins (list[dict]): List of plugins cfg to build. - - Returns: - list[str]: List of the names of plugin. 
- """ - assert isinstance(plugins, list) - plugin_names = [] - for plugin in plugins: - plugin = plugin.copy() - name, layer = build_plugin_layer( - plugin, - in_channels=in_channels, - postfix=plugin.pop('postfix', '')) - assert not hasattr(self, name), f'duplicate plugin {name}' - self.add_module(name, layer) - plugin_names.append(name) - return plugin_names - - def forward_plugin(self, x, plugin_names): - out = x - for name in plugin_names: - out = getattr(self, name)(x) - return out - - @property - def norm1(self): - """nn.Module: normalization layer after the first convolution layer""" - return getattr(self, self.norm1_name) - - @property - def norm2(self): - """nn.Module: normalization layer after the second convolution layer""" - return getattr(self, self.norm2_name) - - @property - def norm3(self): - """nn.Module: normalization layer after the third convolution layer""" - return getattr(self, self.norm3_name) - - def forward(self, x): - """Forward function.""" - - def _inner_forward(x): - identity = x - out = self.conv1(x) - out = self.norm1(out) - out = self.relu(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv1_plugin_names) - - out = self.conv2(out) - out = self.norm2(out) - out = self.relu(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv2_plugin_names) - - out = self.conv3(out) - out = self.norm3(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv3_plugin_names) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - out = self.relu(out) - - return out - - -@BACKBONES.register_module() -class ResNet(nn.Module): - """ResNet backbone. - - Args: - depth (int): Depth of resnet, from {18, 34, 50, 101, 152}. - stem_channels (int | None): Number of stem channels. If not specified, - it will be the same as `base_channels`. Default: None. - base_channels (int): Number of base channels of res layer. Default: 64. - in_channels (int): Number of input image channels. Default: 3. - num_stages (int): Resnet stages. Default: 4. - strides (Sequence[int]): Strides of the first block of each stage. - dilations (Sequence[int]): Dilation of each stage. - out_indices (Sequence[int]): Output from which stages. - style (str): `pytorch` or `caffe`. If set to "pytorch", the stride-two - layer is the 3x3 conv layer, otherwise the stride-two layer is - the first 1x1 conv layer. - deep_stem (bool): Replace 7x7 conv in input stem with 3 3x3 conv - avg_down (bool): Use AvgPool instead of stride conv when - downsampling in the bottleneck. - frozen_stages (int): Stages to be frozen (stop grad and set eval mode). - -1 means not freezing any parameters. - norm_cfg (dict): Dictionary to construct and config norm layer. - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. - plugins (list[dict]): List of plugins for stages, each dict contains: - - - cfg (dict, required): Cfg dict to build plugin. - - position (str, required): Position inside block to insert - plugin, options are 'after_conv1', 'after_conv2', 'after_conv3'. - - stages (tuple[bool], optional): Stages to apply plugin, length - should be same as 'num_stages'. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. 
- zero_init_residual (bool): Whether to use zero init for last norm layer - in resblocks to let them behave as identity. - - Example: - >>> from mmdet.models import ResNet - >>> import torch - >>> self = ResNet(depth=18) - >>> self.eval() - >>> inputs = torch.rand(1, 3, 32, 32) - >>> level_outputs = self.forward(inputs) - >>> for level_out in level_outputs: - ... print(tuple(level_out.shape)) - (1, 64, 8, 8) - (1, 128, 4, 4) - (1, 256, 2, 2) - (1, 512, 1, 1) - """ - - arch_settings = { - 18: (BasicBlock, (2, 2, 2, 2)), - 34: (BasicBlock, (3, 4, 6, 3)), - 50: (Bottleneck, (3, 4, 6, 3)), - 101: (Bottleneck, (3, 4, 23, 3)), - 152: (Bottleneck, (3, 8, 36, 3)) - } - - def __init__(self, - depth, - in_channels=3, - stem_channels=None, - base_channels=64, - num_stages=4, - strides=(1, 2, 2, 2), - dilations=(1, 1, 1, 1), - out_indices=(0, 1, 2, 3), - style='pytorch', - deep_stem=False, - avg_down=False, - frozen_stages=-1, - conv_cfg=None, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - dcn=None, - stage_with_dcn=(False, False, False, False), - plugins=None, - with_cp=False, - zero_init_residual=True): - super(ResNet, self).__init__() - if depth not in self.arch_settings: - raise KeyError(f'invalid depth {depth} for resnet') - self.depth = depth - if stem_channels is None: - stem_channels = base_channels - self.stem_channels = stem_channels - self.base_channels = base_channels - self.num_stages = num_stages - assert num_stages >= 1 and num_stages <= 4 - self.strides = strides - self.dilations = dilations - assert len(strides) == len(dilations) == num_stages - self.out_indices = out_indices - assert max(out_indices) < num_stages - self.style = style - self.deep_stem = deep_stem - self.avg_down = avg_down - self.frozen_stages = frozen_stages - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.with_cp = with_cp - self.norm_eval = norm_eval - self.dcn = dcn - self.stage_with_dcn = stage_with_dcn - if dcn is not None: - assert len(stage_with_dcn) == num_stages - self.plugins = plugins - self.zero_init_residual = zero_init_residual - self.block, stage_blocks = self.arch_settings[depth] - self.stage_blocks = stage_blocks[:num_stages] - self.inplanes = stem_channels - - self._make_stem_layer(in_channels, stem_channels) - - self.res_layers = [] - for i, num_blocks in enumerate(self.stage_blocks): - stride = strides[i] - dilation = dilations[i] - dcn = self.dcn if self.stage_with_dcn[i] else None - if plugins is not None: - stage_plugins = self.make_stage_plugins(plugins, i) - else: - stage_plugins = None - planes = base_channels * 2**i - res_layer = self.make_res_layer( - block=self.block, - inplanes=self.inplanes, - planes=planes, - num_blocks=num_blocks, - stride=stride, - dilation=dilation, - style=self.style, - avg_down=self.avg_down, - with_cp=with_cp, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - dcn=dcn, - plugins=stage_plugins) - self.inplanes = planes * self.block.expansion - layer_name = f'layer{i + 1}' - self.add_module(layer_name, res_layer) - self.res_layers.append(layer_name) - - self._freeze_stages() - - self.feat_dim = self.block.expansion * base_channels * 2**( - len(self.stage_blocks) - 1) - - def make_stage_plugins(self, plugins, stage_idx): - """Make plugins for ResNet ``stage_idx`` th stage. - - Currently we support to insert ``context_block``, - ``empirical_attention_block``, ``nonlocal_block`` into the backbone - like ResNet/ResNeXt. They could be inserted after conv1/conv2/conv3 of - Bottleneck. 
- - An example of plugins format could be: - - Examples: - >>> plugins=[ - ... dict(cfg=dict(type='xxx', arg1='xxx'), - ... stages=(False, True, True, True), - ... position='after_conv2'), - ... dict(cfg=dict(type='yyy'), - ... stages=(True, True, True, True), - ... position='after_conv3'), - ... dict(cfg=dict(type='zzz', postfix='1'), - ... stages=(True, True, True, True), - ... position='after_conv3'), - ... dict(cfg=dict(type='zzz', postfix='2'), - ... stages=(True, True, True, True), - ... position='after_conv3') - ... ] - >>> self = ResNet(depth=18) - >>> stage_plugins = self.make_stage_plugins(plugins, 0) - >>> assert len(stage_plugins) == 3 - - Suppose ``stage_idx=0``, the structure of blocks in the stage would be: - - .. code-block:: none - - conv1-> conv2->conv3->yyy->zzz1->zzz2 - - Suppose 'stage_idx=1', the structure of blocks in the stage would be: - - .. code-block:: none - - conv1-> conv2->xxx->conv3->yyy->zzz1->zzz2 - - If stages is missing, the plugin would be applied to all stages. - - Args: - plugins (list[dict]): List of plugins cfg to build. The postfix is - required if multiple same type plugins are inserted. - stage_idx (int): Index of stage to build - - Returns: - list[dict]: Plugins for current stage - """ - stage_plugins = [] - for plugin in plugins: - plugin = plugin.copy() - stages = plugin.pop('stages', None) - assert stages is None or len(stages) == self.num_stages - # whether to insert plugin into current stage - if stages is None or stages[stage_idx]: - stage_plugins.append(plugin) - - return stage_plugins - - def make_res_layer(self, **kwargs): - """Pack all blocks in a stage into a ``ResLayer``.""" - return ResLayer(**kwargs) - - @property - def norm1(self): - """nn.Module: the normalization layer named "norm1" """ - return getattr(self, self.norm1_name) - - def _make_stem_layer(self, in_channels, stem_channels): - if self.deep_stem: - self.stem = nn.Sequential( - build_conv_layer( - self.conv_cfg, - in_channels, - stem_channels // 2, - kernel_size=3, - stride=2, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, stem_channels // 2)[1], - nn.ReLU(inplace=True), - build_conv_layer( - self.conv_cfg, - stem_channels // 2, - stem_channels // 2, - kernel_size=3, - stride=1, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, stem_channels // 2)[1], - nn.ReLU(inplace=True), - build_conv_layer( - self.conv_cfg, - stem_channels // 2, - stem_channels, - kernel_size=3, - stride=1, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, stem_channels)[1], - nn.ReLU(inplace=True)) - else: - self.conv1 = build_conv_layer( - self.conv_cfg, - in_channels, - stem_channels, - kernel_size=7, - stride=2, - padding=3, - bias=False) - self.norm1_name, norm1 = build_norm_layer( - self.norm_cfg, stem_channels, postfix=1) - self.add_module(self.norm1_name, norm1) - self.relu = nn.ReLU(inplace=True) - self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) - - def _freeze_stages(self): - if self.frozen_stages >= 0: - if self.deep_stem: - self.stem.eval() - for param in self.stem.parameters(): - param.requires_grad = False - else: - self.norm1.eval() - for m in [self.conv1, self.norm1]: - for param in m.parameters(): - param.requires_grad = False - - for i in range(1, self.frozen_stages + 1): - m = getattr(self, f'layer{i}') - m.eval() - for param in m.parameters(): - param.requires_grad = False - - def init_weights(self, pretrained=None): - """Initialize the weights in backbone. - - Args: - pretrained (str, optional): Path to pre-trained weights. 
-                Defaults to None.
-        """
-        if isinstance(pretrained, str):
-            logger = get_root_logger()
-            load_checkpoint(self, pretrained, strict=False, logger=logger)
-        elif pretrained is None:
-            for m in self.modules():
-                if isinstance(m, nn.Conv2d):
-                    kaiming_init(m)
-                elif isinstance(m, (_BatchNorm, nn.GroupNorm)):
-                    constant_init(m, 1)
-
-            if self.dcn is not None:
-                for m in self.modules():
-                    if isinstance(m, Bottleneck) and hasattr(
-                            m.conv2, 'conv_offset'):
-                        constant_init(m.conv2.conv_offset, 0)
-
-            if self.zero_init_residual:
-                for m in self.modules():
-                    if isinstance(m, Bottleneck):
-                        constant_init(m.norm3, 0)
-                    elif isinstance(m, BasicBlock):
-                        constant_init(m.norm2, 0)
-        else:
-            raise TypeError('pretrained must be a str or None')
-
-    def forward(self, x):
-        """Forward function."""
-        if self.deep_stem:
-            x = self.stem(x)
-        else:
-            x = self.conv1(x)
-            x = self.norm1(x)
-            x = self.relu(x)
-        x = self.maxpool(x)
-        outs = []
-        for i, layer_name in enumerate(self.res_layers):
-            res_layer = getattr(self, layer_name)
-            x = res_layer(x)
-            if i in self.out_indices:
-                outs.append(x)
-        return tuple(outs)
-
-    def train(self, mode=True):
-        """Convert the model into training mode while keeping the
-        normalization layers frozen."""
-        super(ResNet, self).train(mode)
-        self._freeze_stages()
-        if mode and self.norm_eval:
-            for m in self.modules():
-                # trick: eval() has an effect on BatchNorm only
-                if isinstance(m, _BatchNorm):
-                    m.eval()
-
-
-@BACKBONES.register_module()
-class ResNetV1d(ResNet):
-    r"""ResNetV1d variant described in `Bag of Tricks
-    <https://arxiv.org/abs/1812.01187>`_.
-
-    Compared with the default ResNet (ResNetV1b), ResNetV1d replaces the 7x7
-    conv in the input stem with three 3x3 convs. And in the downsampling
-    block, a 2x2 avg_pool with stride 2 is added before conv, whose stride
-    is changed to 1.
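-
-    Example (a minimal sketch mirroring the ``ResNet`` example above; the
-    variant keeps the same output shapes):
-        >>> from mmdet.models import ResNetV1d
-        >>> import torch
-        >>> self = ResNetV1d(depth=18)
-        >>> self.eval()
-        >>> inputs = torch.rand(1, 3, 32, 32)
-        >>> level_outputs = self.forward(inputs)
-        >>> for level_out in level_outputs:
-        ...     print(tuple(level_out.shape))
-        (1, 64, 8, 8)
-        (1, 128, 4, 4)
-        (1, 256, 2, 2)
-        (1, 512, 1, 1)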
- """ - - def __init__(self, **kwargs): - super(ResNetV1d, self).__init__( - deep_stem=True, avg_down=True, **kwargs) diff --git a/spaces/Rongjiehuang/ProDiff/data_gen/tts/base_preprocess.py b/spaces/Rongjiehuang/ProDiff/data_gen/tts/base_preprocess.py deleted file mode 100644 index 6c0b2cda06076d32b4eda800b134415e20d0f730..0000000000000000000000000000000000000000 --- a/spaces/Rongjiehuang/ProDiff/data_gen/tts/base_preprocess.py +++ /dev/null @@ -1,245 +0,0 @@ -import json -import os -import random -import re -import traceback -from collections import Counter -from functools import partial - -import librosa -from tqdm import tqdm -from data_gen.tts.txt_processors.base_text_processor import get_txt_processor_cls -from data_gen.tts.wav_processors.base_processor import get_wav_processor_cls -from utils.hparams import hparams -from utils.multiprocess_utils import multiprocess_run_tqdm -from utils.os_utils import link_file, move_file, remove_file -from data_gen.tts.data_gen_utils import is_sil_phoneme, build_token_encoder - - -class BasePreprocessor: - def __init__(self): - self.preprocess_args = hparams['preprocess_args'] - txt_processor = self.preprocess_args['txt_processor'] - self.txt_processor = get_txt_processor_cls(txt_processor) - self.raw_data_dir = hparams['raw_data_dir'] - self.processed_dir = hparams['processed_data_dir'] - self.spk_map_fn = f"{self.processed_dir}/spk_map.json" - - def meta_data(self): - """ - :return: {'item_name': Str, 'wav_fn': Str, 'txt': Str, 'spk_name': Str, 'txt_loader': None or Func} - """ - raise NotImplementedError - - def process(self): - processed_dir = self.processed_dir - wav_processed_tmp_dir = f'{processed_dir}/processed_tmp' - remove_file(wav_processed_tmp_dir) - os.makedirs(wav_processed_tmp_dir, exist_ok=True) - wav_processed_dir = f'{processed_dir}/{self.wav_processed_dirname}' - remove_file(wav_processed_dir) - os.makedirs(wav_processed_dir, exist_ok=True) - - meta_data = list(tqdm(self.meta_data(), desc='Load meta data')) - item_names = [d['item_name'] for d in meta_data] - assert len(item_names) == len(set(item_names)), 'Key `item_name` should be Unique.' 
- - # preprocess data - phone_list = [] - word_list = [] - spk_names = set() - process_item = partial(self.preprocess_first_pass, - txt_processor=self.txt_processor, - wav_processed_dir=wav_processed_dir, - wav_processed_tmp=wav_processed_tmp_dir, - preprocess_args=self.preprocess_args) - items = [] - args = [{ - 'item_name': item_raw['item_name'], - 'txt_raw': item_raw['txt'], - 'wav_fn': item_raw['wav_fn'], - 'txt_loader': item_raw.get('txt_loader'), - 'others': item_raw.get('others', None) - } for item_raw in meta_data] - for item_, (item_id, item) in zip(meta_data, multiprocess_run_tqdm(process_item, args, desc='Preprocess')): - if item is not None: - item_.update(item) - item = item_ - if 'txt_loader' in item: - del item['txt_loader'] - item['id'] = item_id - item['spk_name'] = item.get('spk_name', '') - item['others'] = item.get('others', None) - phone_list += item['ph'].split(" ") - word_list += item['word'].split(" ") - spk_names.add(item['spk_name']) - items.append(item) - - # add encoded tokens - ph_encoder, word_encoder = self._phone_encoder(phone_list), self._word_encoder(word_list) - spk_map = self.build_spk_map(spk_names) - args = [{ - 'ph': item['ph'], 'word': item['word'], 'spk_name': item['spk_name'], - 'word_encoder': word_encoder, 'ph_encoder': ph_encoder, 'spk_map': spk_map - } for item in items] - for idx, item_new_kv in multiprocess_run_tqdm(self.preprocess_second_pass, args, desc='Add encoded tokens'): - items[idx].update(item_new_kv) - - # build mfa data - if self.preprocess_args['use_mfa']: - mfa_dict = set() - mfa_input_dir = f'{processed_dir}/mfa_inputs' - remove_file(mfa_input_dir) - # group MFA inputs for better parallelism - mfa_groups = [i // self.preprocess_args['nsample_per_mfa_group'] for i in range(len(items))] - if self.preprocess_args['mfa_group_shuffle']: - random.seed(hparams['seed']) - random.shuffle(mfa_groups) - args = [{ - 'item': item, 'mfa_input_dir': mfa_input_dir, - 'mfa_group': mfa_group, 'wav_processed_tmp': wav_processed_tmp_dir, - 'preprocess_args': self.preprocess_args - } for item, mfa_group in zip(items, mfa_groups)] - for i, (ph_gb_word_nosil, new_wav_align_fn) in multiprocess_run_tqdm( - self.build_mfa_inputs, args, desc='Build MFA data'): - items[i]['wav_align_fn'] = new_wav_align_fn - for w in ph_gb_word_nosil.split(" "): - mfa_dict.add(f"{w} {w.replace('_', ' ')}") - mfa_dict = sorted(mfa_dict) - with open(f'{processed_dir}/mfa_dict.txt', 'w') as f: - f.writelines([f'{l}\n' for l in mfa_dict]) - with open(f"{processed_dir}/{self.meta_csv_filename}.json", 'w') as f: - f.write(re.sub(r'\n\s+([\d+\]])', r'\1', json.dumps(items, ensure_ascii=False, sort_keys=False, indent=1))) - remove_file(wav_processed_tmp_dir) - - @classmethod - def preprocess_first_pass(cls, item_name, txt_raw, txt_processor, - wav_fn, wav_processed_dir, wav_processed_tmp, - preprocess_args, txt_loader=None, others=None): - try: - if txt_loader is not None: - txt_raw = txt_loader(txt_raw) - ph, txt, word, ph2word, ph_gb_word = cls.txt_to_ph(txt_processor, txt_raw, preprocess_args) - wav_fn, wav_align_fn = cls.process_wav( - item_name, wav_fn, - hparams['processed_data_dir'], - wav_processed_tmp, preprocess_args) - - # wav for binarization - ext = os.path.splitext(wav_fn)[1] - os.makedirs(wav_processed_dir, exist_ok=True) - new_wav_fn = f"{wav_processed_dir}/{item_name}{ext}" - move_link_func = move_file if os.path.dirname(wav_fn) == wav_processed_tmp else link_file - move_link_func(wav_fn, new_wav_fn) - return { - 'txt': txt, 'txt_raw': txt_raw, 'ph': ph, - 'word': 
word, 'ph2word': ph2word, 'ph_gb_word': ph_gb_word,
-                'wav_fn': new_wav_fn, 'wav_align_fn': wav_align_fn,
-                'others': others
-            }
-        except:
-            traceback.print_exc()
-            print(f"| Error is caught. item_name: {item_name}.")
-            return None
-
-    @staticmethod
-    def txt_to_ph(txt_processor, txt_raw, preprocess_args):
-        txt_struct, txt = txt_processor.process(txt_raw, preprocess_args)
-        ph = [p for w in txt_struct for p in w[1]]
-        ph_gb_word = ["_".join(w[1]) for w in txt_struct]
-        words = [w[0] for w in txt_struct]
-        # word_id=0 is reserved for padding
-        ph2word = [w_id + 1 for w_id, w in enumerate(txt_struct) for _ in range(len(w[1]))]
-        return " ".join(ph), txt, " ".join(words), ph2word, " ".join(ph_gb_word)
-
-    @staticmethod
-    def process_wav(item_name, wav_fn, processed_dir, wav_processed_tmp, preprocess_args):
-        processors = [get_wav_processor_cls(v) for v in preprocess_args['wav_processors']]
-        processors = [k() for k in processors if k is not None]
-        if len(processors) >= 1:
-            sr_file = librosa.core.get_samplerate(wav_fn)
-            output_fn_for_align = None
-            ext = os.path.splitext(wav_fn)[1]
-            input_fn = f"{wav_processed_tmp}/{item_name}{ext}"
-            link_file(wav_fn, input_fn)
-            for p in processors:
-                outputs = p.process(input_fn, sr_file, wav_processed_tmp, processed_dir, item_name, preprocess_args)
-                if len(outputs) == 3:
-                    input_fn, sr, output_fn_for_align = outputs
-                else:
-                    input_fn, sr = outputs
-            return input_fn, output_fn_for_align
-        else:
-            return wav_fn, wav_fn
-
-    def _phone_encoder(self, ph_set):
-        ph_set_fn = f"{self.processed_dir}/phone_set.json"
-        if self.preprocess_args['reset_phone_dict'] or not os.path.exists(ph_set_fn):
-            ph_set = sorted(set(ph_set))
-            json.dump(ph_set, open(ph_set_fn, 'w'), ensure_ascii=False)
-            print("| Build phone set: ", ph_set)
-        else:
-            ph_set = json.load(open(ph_set_fn, 'r'))
-            print("| Load phone set: ", ph_set)
-        return build_token_encoder(ph_set_fn)
-
-    def _word_encoder(self, word_set):
-        word_set_fn = f"{self.processed_dir}/word_set.json"
-        if self.preprocess_args['reset_word_dict']:
-            word_set = Counter(word_set)
-            total_words = sum(word_set.values())
-            word_set = word_set.most_common(hparams['word_dict_size'])
-            num_unk_words = total_words - sum([x[1] for x in word_set])
-            word_set = ['<BOS>', '<EOS>'] + [x[0] for x in word_set]
-            word_set = sorted(set(word_set))
-            json.dump(word_set, open(word_set_fn, 'w'), ensure_ascii=False)
-            print(f"| Build word set. Size: {len(word_set)}, #total words: {total_words},"
-                  f" #unk_words: {num_unk_words}, word_set[:10]: {word_set[:10]}.")
-        else:
-            word_set = json.load(open(word_set_fn, 'r'))
-            print("| Load word set. 
Size: ", len(word_set), word_set[:10]) - return build_token_encoder(word_set_fn) - - @classmethod - def preprocess_second_pass(cls, word, ph, spk_name, word_encoder, ph_encoder, spk_map): - word_token = word_encoder.encode(word) - ph_token = ph_encoder.encode(ph) - spk_id = spk_map[spk_name] - return {'word_token': word_token, 'ph_token': ph_token, 'spk_id': spk_id} - - def build_spk_map(self, spk_names): - spk_map = {x: i for i, x in enumerate(sorted(list(spk_names)))} - assert len(spk_map) == 0 or len(spk_map) <= hparams['num_spk'], len(spk_map) - print(f"| Number of spks: {len(spk_map)}, spk_map: {spk_map}") - json.dump(spk_map, open(self.spk_map_fn, 'w'), ensure_ascii=False) - return spk_map - - @classmethod - def build_mfa_inputs(cls, item, mfa_input_dir, mfa_group, wav_processed_tmp, preprocess_args): - item_name = item['item_name'] - wav_align_fn = item['wav_align_fn'] - ph_gb_word = item['ph_gb_word'] - ext = os.path.splitext(wav_align_fn)[1] - mfa_input_group_dir = f'{mfa_input_dir}/{mfa_group}' - os.makedirs(mfa_input_group_dir, exist_ok=True) - new_wav_align_fn = f"{mfa_input_group_dir}/{item_name}{ext}" - move_link_func = move_file if os.path.dirname(wav_align_fn) == wav_processed_tmp else link_file - move_link_func(wav_align_fn, new_wav_align_fn) - ph_gb_word_nosil = " ".join(["_".join([p for p in w.split("_") if not is_sil_phoneme(p)]) - for w in ph_gb_word.split(" ") if not is_sil_phoneme(w)]) - with open(f'{mfa_input_group_dir}/{item_name}.lab', 'w') as f_txt: - f_txt.write(ph_gb_word_nosil) - return ph_gb_word_nosil, new_wav_align_fn - - def load_spk_map(self, base_dir): - spk_map_fn = f"{base_dir}/spk_map.json" - spk_map = json.load(open(spk_map_fn, 'r')) - return spk_map - - def load_dict(self, base_dir): - ph_encoder = build_token_encoder(f'{base_dir}/phone_set.json') - return ph_encoder - - @property - def meta_csv_filename(self): - return 'metadata' - - @property - def wav_processed_dirname(self): - return 'wav_processed' \ No newline at end of file diff --git a/spaces/Rongjiehuang/ProDiff/usr/diffspeech_task.py b/spaces/Rongjiehuang/ProDiff/usr/diffspeech_task.py deleted file mode 100644 index e4fca7e9e46fc378468188d58fc42bc989df824c..0000000000000000000000000000000000000000 --- a/spaces/Rongjiehuang/ProDiff/usr/diffspeech_task.py +++ /dev/null @@ -1,122 +0,0 @@ -import torch - -import utils -from utils.hparams import hparams -from .diff.net import DiffNet -from .diff.shallow_diffusion_tts import GaussianDiffusion -from .task import DiffFsTask -from vocoders.base_vocoder import get_vocoder_cls, BaseVocoder -from utils.pitch_utils import denorm_f0 -from tasks.tts.fs2_utils import FastSpeechDataset - -DIFF_DECODERS = { - 'wavenet': lambda hp: DiffNet(hp['audio_num_mel_bins']), -} - - -class DiffSpeechTask(DiffFsTask): - def __init__(self): - super(DiffSpeechTask, self).__init__() - self.dataset_cls = FastSpeechDataset - self.vocoder: BaseVocoder = get_vocoder_cls(hparams)() - - def build_tts_model(self): - mel_bins = hparams['audio_num_mel_bins'] - self.model = GaussianDiffusion( - phone_encoder=self.phone_encoder, - out_dims=mel_bins, denoise_fn=DIFF_DECODERS[hparams['diff_decoder_type']](hparams), - timesteps=hparams['timesteps'], - K_step=hparams['K_step'], - loss_type=hparams['diff_loss_type'], - spec_min=hparams['spec_min'], spec_max=hparams['spec_max'], - ) - if hparams['fs2_ckpt'] != '': - utils.load_ckpt(self.model.fs2, hparams['fs2_ckpt'], 'model', strict=True) - # self.model.fs2.decoder = None - for k, v in self.model.fs2.named_parameters(): - if not 
'predictor' in k: - v.requires_grad = False - - def build_optimizer(self, model): - self.optimizer = optimizer = torch.optim.AdamW( - filter(lambda p: p.requires_grad, model.parameters()), - lr=hparams['lr'], - betas=(hparams['optimizer_adam_beta1'], hparams['optimizer_adam_beta2']), - weight_decay=hparams['weight_decay']) - return optimizer - - def run_model(self, model, sample, return_output=False, infer=False): - txt_tokens = sample['txt_tokens'] # [B, T_t] - target = sample['mels'] # [B, T_s, 80] - # mel2ph = sample['mel2ph'] if hparams['use_gt_dur'] else None # [B, T_s] - mel2ph = sample['mel2ph'] - f0 = sample['f0'] - uv = sample['uv'] - energy = sample['energy'] - # fs2_mel = sample['fs2_mels'] - spk_embed = sample.get('spk_embed') if not hparams['use_spk_id'] else sample.get('spk_ids') - if hparams['pitch_type'] == 'cwt': - cwt_spec = sample[f'cwt_spec'] - f0_mean = sample['f0_mean'] - f0_std = sample['f0_std'] - sample['f0_cwt'] = f0 = model.cwt2f0_norm(cwt_spec, f0_mean, f0_std, mel2ph) - - output = model(txt_tokens, mel2ph=mel2ph, spk_embed=spk_embed, - ref_mels=target, f0=f0, uv=uv, energy=energy, infer=infer) - - losses = {} - if 'diff_loss' in output: - losses['mel'] = output['diff_loss'] - self.add_dur_loss(output['dur'], mel2ph, txt_tokens, losses=losses) - if hparams['use_pitch_embed']: - self.add_pitch_loss(output, sample, losses) - if hparams['use_energy_embed']: - self.add_energy_loss(output['energy_pred'], energy, losses) - if not return_output: - return losses - else: - return losses, output - - def validation_step(self, sample, batch_idx): - outputs = {} - txt_tokens = sample['txt_tokens'] # [B, T_t] - - energy = sample['energy'] - spk_embed = sample.get('spk_embed') if not hparams['use_spk_id'] else sample.get('spk_ids') - mel2ph = sample['mel2ph'] - f0 = sample['f0'] - uv = sample['uv'] - - outputs['losses'] = {} - - outputs['losses'], model_out = self.run_model(self.model, sample, return_output=True, infer=False) - - - outputs['total_loss'] = sum(outputs['losses'].values()) - outputs['nsamples'] = sample['nsamples'] - outputs = utils.tensors_to_scalars(outputs) - if batch_idx < hparams['num_valid_plots']: - # model_out = self.model( - # txt_tokens, spk_embed=spk_embed, mel2ph=None, f0=None, uv=None, energy=None, ref_mels=None, inference=True) - # self.plot_mel(batch_idx, model_out['mel_out'], model_out['fs2_mel'], name=f'diffspeech_vs_fs2_{batch_idx}') - model_out = self.model( - txt_tokens, spk_embed=spk_embed, mel2ph=mel2ph, f0=f0, uv=uv, energy=energy, ref_mels=None, infer=True) - gt_f0 = denorm_f0(sample['f0'], sample['uv'], hparams) - self.plot_wav(batch_idx, sample['mels'], model_out['mel_out'], is_mel=True, gt_f0=gt_f0, f0=model_out.get('f0_denorm')) - self.plot_mel(batch_idx, sample['mels'], model_out['mel_out']) - return outputs - - ############ - # validation plots - ############ - def plot_wav(self, batch_idx, gt_wav, wav_out, is_mel=False, gt_f0=None, f0=None, name=None): - gt_wav = gt_wav[0].cpu().numpy() - wav_out = wav_out[0].cpu().numpy() - gt_f0 = gt_f0[0].cpu().numpy() - f0 = f0[0].cpu().numpy() - if is_mel: - gt_wav = self.vocoder.spec2wav(gt_wav, f0=gt_f0) - wav_out = self.vocoder.spec2wav(wav_out, f0=f0) - self.logger.experiment.add_audio(f'gt_{batch_idx}', gt_wav, sample_rate=hparams['audio_sample_rate'], global_step=self.global_step) - self.logger.experiment.add_audio(f'wav_{batch_idx}', wav_out, sample_rate=hparams['audio_sample_rate'], global_step=self.global_step) - diff --git 
a/spaces/Rothfeld/stable-diffusion-mat-outpainting-primer/datasets/dataset_256_val.py b/spaces/Rothfeld/stable-diffusion-mat-outpainting-primer/datasets/dataset_256_val.py deleted file mode 100644 index 26b619986ce380a88da88ff5792cb11166cf7e6d..0000000000000000000000000000000000000000 --- a/spaces/Rothfeld/stable-diffusion-mat-outpainting-primer/datasets/dataset_256_val.py +++ /dev/null @@ -1,282 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -import os -import numpy as np -import zipfile -import PIL.Image -import cv2 -import json -import torch -import dnnlib -import glob - -try: - import pyspng -except ImportError: - pyspng = None - -from datasets.mask_generator_256 import RandomMask - -#---------------------------------------------------------------------------- - -class Dataset(torch.utils.data.Dataset): - def __init__(self, - name, # Name of the dataset. - raw_shape, # Shape of the raw image data (NCHW). - max_size = None, # Artificially limit the size of the dataset. None = no limit. Applied before xflip. - use_labels = False, # Enable conditioning labels? False = label dimension is zero. - xflip = False, # Artificially double the size of the dataset via x-flips. Applied after max_size. - random_seed = 0, # Random seed to use when applying max_size. - ): - self._name = name - self._raw_shape = list(raw_shape) - self._use_labels = use_labels - self._raw_labels = None - self._label_shape = None - - # Apply max_size. - self._raw_idx = np.arange(self._raw_shape[0], dtype=np.int64) - if (max_size is not None) and (self._raw_idx.size > max_size): - np.random.RandomState(random_seed).shuffle(self._raw_idx) - self._raw_idx = np.sort(self._raw_idx[:max_size]) - - # Apply xflip. 
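-        # (when enabled, every raw index is duplicated below and the second
-        #  copy is flagged so __getitem__ mirrors the image horizontally)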
- self._xflip = np.zeros(self._raw_idx.size, dtype=np.uint8) - if xflip: - self._raw_idx = np.tile(self._raw_idx, 2) - self._xflip = np.concatenate([self._xflip, np.ones_like(self._xflip)]) - - def _get_raw_labels(self): - if self._raw_labels is None: - self._raw_labels = self._load_raw_labels() if self._use_labels else None - if self._raw_labels is None: - self._raw_labels = np.zeros([self._raw_shape[0], 0], dtype=np.float32) - assert isinstance(self._raw_labels, np.ndarray) - assert self._raw_labels.shape[0] == self._raw_shape[0] - assert self._raw_labels.dtype in [np.float32, np.int64] - if self._raw_labels.dtype == np.int64: - assert self._raw_labels.ndim == 1 - assert np.all(self._raw_labels >= 0) - return self._raw_labels - - def close(self): # to be overridden by subclass - pass - - def _load_raw_image(self, raw_idx): # to be overridden by subclass - raise NotImplementedError - - def _load_raw_labels(self): # to be overridden by subclass - raise NotImplementedError - - def __getstate__(self): - return dict(self.__dict__, _raw_labels=None) - - def __del__(self): - try: - self.close() - except: - pass - - def __len__(self): - return self._raw_idx.size - - def __getitem__(self, idx): - image = self._load_raw_image(self._raw_idx[idx]) - assert isinstance(image, np.ndarray) - assert list(image.shape) == self.image_shape - assert image.dtype == np.uint8 - if self._xflip[idx]: - assert image.ndim == 3 # CHW - image = image[:, :, ::-1] - return image.copy(), self.get_label(idx) - - def get_label(self, idx): - label = self._get_raw_labels()[self._raw_idx[idx]] - if label.dtype == np.int64: - onehot = np.zeros(self.label_shape, dtype=np.float32) - onehot[label] = 1 - label = onehot - return label.copy() - - def get_details(self, idx): - d = dnnlib.EasyDict() - d.raw_idx = int(self._raw_idx[idx]) - d.xflip = (int(self._xflip[idx]) != 0) - d.raw_label = self._get_raw_labels()[d.raw_idx].copy() - return d - - @property - def name(self): - return self._name - - @property - def image_shape(self): - return list(self._raw_shape[1:]) - - @property - def num_channels(self): - assert len(self.image_shape) == 3 # CHW - return self.image_shape[0] - - @property - def resolution(self): - assert len(self.image_shape) == 3 # CHW - assert self.image_shape[1] == self.image_shape[2] - return self.image_shape[1] - - @property - def label_shape(self): - if self._label_shape is None: - raw_labels = self._get_raw_labels() - if raw_labels.dtype == np.int64: - self._label_shape = [int(np.max(raw_labels)) + 1] - else: - self._label_shape = raw_labels.shape[1:] - return list(self._label_shape) - - @property - def label_dim(self): - assert len(self.label_shape) == 1 - return self.label_shape[0] - - @property - def has_labels(self): - return any(x != 0 for x in self.label_shape) - - @property - def has_onehot_labels(self): - return self._get_raw_labels().dtype == np.int64 - - -#---------------------------------------------------------------------------- - - -class ImageFolderMaskDataset(Dataset): - def __init__(self, - path, # Path to directory or zip. - resolution = None, # Ensure specific resolution, None = highest available. - hole_range=[0,1], - **super_kwargs, # Additional arguments for the Dataset base class. 
- ): - self._path = path - self._zipfile = None - self._hole_range = hole_range - - if os.path.isdir(self._path): - self._type = 'dir' - self._all_fnames = {os.path.relpath(os.path.join(root, fname), start=self._path) for root, _dirs, files in os.walk(self._path) for fname in files} - elif self._file_ext(self._path) == '.zip': - self._type = 'zip' - self._all_fnames = set(self._get_zipfile().namelist()) - else: - raise IOError('Path must point to a directory or zip') - - PIL.Image.init() - self._image_fnames = sorted(fname for fname in self._all_fnames if self._file_ext(fname) in PIL.Image.EXTENSION) - if len(self._image_fnames) == 0: - raise IOError('No image files found in the specified path') - - name = os.path.splitext(os.path.basename(self._path))[0] - raw_shape = [len(self._image_fnames)] + list(self._load_raw_image(0).shape) - if resolution is not None and (raw_shape[2] != resolution or raw_shape[3] != resolution): - raise IOError('Image files do not match the specified resolution') - self._load_mask() - super().__init__(name=name, raw_shape=raw_shape, **super_kwargs) - - def _load_mask(self, mpath='/data/liwenbo/datasets/Places365/standard/masks_val_256_eval'): - self.masks = sorted(glob.glob(mpath + '/*.png')) - - @staticmethod - def _file_ext(fname): - return os.path.splitext(fname)[1].lower() - - def _get_zipfile(self): - assert self._type == 'zip' - if self._zipfile is None: - self._zipfile = zipfile.ZipFile(self._path) - return self._zipfile - - def _open_file(self, fname): - if self._type == 'dir': - return open(os.path.join(self._path, fname), 'rb') - if self._type == 'zip': - return self._get_zipfile().open(fname, 'r') - return None - - def close(self): - try: - if self._zipfile is not None: - self._zipfile.close() - finally: - self._zipfile = None - - def __getstate__(self): - return dict(super().__getstate__(), _zipfile=None) - - def _load_raw_image(self, raw_idx): - fname = self._image_fnames[raw_idx] - with self._open_file(fname) as f: - if pyspng is not None and self._file_ext(fname) == '.png': - image = pyspng.load(f.read()) - else: - image = np.array(PIL.Image.open(f)) - if image.ndim == 2: - image = image[:, :, np.newaxis] # HW => HWC - - # for grayscale image - if image.shape[2] == 1: - image = np.repeat(image, 3, axis=2) - - # restricted to 256x256 - res = 256 - H, W, C = image.shape - if H < res or W < res: - top = 0 - bottom = max(0, res - H) - left = 0 - right = max(0, res - W) - image = cv2.copyMakeBorder(image, top, bottom, left, right, cv2.BORDER_REFLECT) - H, W, C = image.shape - h = (H - res) // 2 - w = (W - res) // 2 - image = image[h:h+res, w:w+res, :] - - image = np.ascontiguousarray(image.transpose(2, 0, 1)) # HWC => CHW - return image - - def _load_raw_labels(self): - fname = 'labels.json' - if fname not in self._all_fnames: - return None - with self._open_file(fname) as f: - labels = json.load(f)['labels'] - if labels is None: - return None - labels = dict(labels) - labels = [labels[fname.replace('\\', '/')] for fname in self._image_fnames] - labels = np.array(labels) - labels = labels.astype({1: np.int64, 2: np.float32}[labels.ndim]) - return labels - - def __getitem__(self, idx): - image = self._load_raw_image(self._raw_idx[idx]) - - # for grayscale image - if image.shape[0] == 1: - image = np.repeat(image, 3, axis=0) - - assert isinstance(image, np.ndarray) - assert list(image.shape) == self.image_shape - assert image.dtype == np.uint8 - if self._xflip[idx]: - assert image.ndim == 3 # CHW - image = image[:, :, ::-1] - # mask = 
RandomMask(image.shape[-1], hole_range=self._hole_range) # hole as 0, reserved as 1 - mask = cv2.imread(self.masks[idx], cv2.IMREAD_GRAYSCALE).astype(np.float32)[np.newaxis, :, :] / 255.0 - return image.copy(), mask, self.get_label(idx) diff --git a/spaces/SAAZIZI/SummarizeAV/query_service/__init__.py b/spaces/SAAZIZI/SummarizeAV/query_service/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/SeViLA/SeViLA/lavis/models/clip_models/__init__.py b/spaces/SeViLA/SeViLA/lavis/models/clip_models/__init__.py deleted file mode 100644 index 325e25255550a00fdd082deb82a8a0da567cadb0..0000000000000000000000000000000000000000 --- a/spaces/SeViLA/SeViLA/lavis/models/clip_models/__init__.py +++ /dev/null @@ -1,14 +0,0 @@ -""" - Copyright (c) 2022, salesforce.com, inc. - All rights reserved. - SPDX-License-Identifier: BSD-3-Clause - For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause - - Based on https://github.com/mlfoundations/open_clip -""" - -""" OpenAI pretrained model functions -Adapted from https://github.com/mlfoundations/open_clip and https://github.com/openai/CLIP. - -Originally MIT License, Copyright (c) 2021 OpenAI. -""" diff --git a/spaces/Shredder/CONBERT-3/fin_readability_sustainability.py b/spaces/Shredder/CONBERT-3/fin_readability_sustainability.py deleted file mode 100644 index 53ea0c60eab0dd27868f9bdc6d4652ea0ddc71b9..0000000000000000000000000000000000000000 --- a/spaces/Shredder/CONBERT-3/fin_readability_sustainability.py +++ /dev/null @@ -1,110 +0,0 @@ -import torch -import transformers -from torch.utils.data import Dataset, DataLoader -from transformers import RobertaModel, RobertaTokenizer, BertModel, BertTokenizer -import pandas as pd - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - -MAX_LEN = 128 -BATCH_SIZE = 20 -text_col_name = 'sentence' - -def scoring_data_prep(dataset): - out = [] - target = [] - mask = [] - - for i in range(len(dataset)): - rec = dataset[i] - out.append(rec['ids'].reshape(-1,MAX_LEN)) - mask.append(rec['mask'].reshape(-1,MAX_LEN)) - - out_stack = torch.cat(out, dim = 0) - mask_stack = torch.cat(mask, dim =0 ) - out_stack = out_stack.to(device, dtype = torch.long) - mask_stack = mask_stack.to(device, dtype = torch.long) - - return out_stack, mask_stack - -class Triage(Dataset): - """ - This is a subclass of torch packages Dataset class. It processes input to create ids, masks and targets required for model training. 
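-    Expects `dataframe` to hold the input text in the column named by
-    `text_col_name` (here: 'sentence').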
- """ - - def __init__(self, dataframe, tokenizer, max_len, text_col_name): - self.len = len(dataframe) - self.data = dataframe - self.tokenizer = tokenizer - self.max_len = max_len - self.text_col_name = text_col_name - - - def __getitem__(self, index): - title = str(self.data[self.text_col_name][index]) - title = " ".join(title.split()) - inputs = self.tokenizer.encode_plus( - title, - None, - add_special_tokens=True, - max_length=self.max_len, - pad_to_max_length=True, #padding='max_length' #For future version use `padding='max_length'` - return_token_type_ids=True, - truncation=True, - ) - ids = inputs["input_ids"] - mask = inputs["attention_mask"] - - return { - "ids": torch.tensor(ids, dtype=torch.long), - "mask": torch.tensor(mask, dtype=torch.long), - - } - - def __len__(self): - return self.len - -class BERTClass(torch.nn.Module): - def __init__(self, num_class, task): - super(BERTClass, self).__init__() - self.num_class = num_class - if task =="sustanability": - self.l1 = RobertaModel.from_pretrained("roberta-base") - else: - self.l1 = BertModel.from_pretrained("ProsusAI/finbert") - self.pre_classifier = torch.nn.Linear(768, 768) - self.dropout = torch.nn.Dropout(0.3) - self.classifier = torch.nn.Linear(768, self.num_class) - self.history = dict() - - def forward(self, input_ids, attention_mask): - output_1 = self.l1(input_ids=input_ids, attention_mask=attention_mask) - hidden_state = output_1[0] - pooler = hidden_state[:, 0] - pooler = self.pre_classifier(pooler) - pooler = torch.nn.ReLU()(pooler) - pooler = self.dropout(pooler) - output = self.classifier(pooler) - return output - -def do_predict(model, tokenizer, test_df): - test_set = Triage(test_df, tokenizer, MAX_LEN, text_col_name) - test_params = {'batch_size' : BATCH_SIZE, 'shuffle': False, 'num_workers':0} - test_loader = DataLoader(test_set, **test_params) - out_stack, mask_stack = scoring_data_prep(dataset = test_set) - n = 0 - combined_output = [] - model.eval() - with torch.no_grad(): - while n < test_df.shape[0]: - output = model(out_stack[n:n+BATCH_SIZE,:],mask_stack[n:n+BATCH_SIZE,:]) - n = n + BATCH_SIZE - combined_output.append(output) - combined_output = torch.cat(combined_output, dim = 0) - preds = torch.argsort(combined_output, axis = 1, descending = True) - preds = preds.to('cpu') - actual_predictions = [i[0] for i in preds.tolist()] - combined_output = combined_output.to('cpu') - prob_predictions= [i[1] for i in combined_output.tolist()] - return (actual_predictions, prob_predictions) - \ No newline at end of file diff --git a/spaces/Singularity666/RadiXGPT_/app.py b/spaces/Singularity666/RadiXGPT_/app.py deleted file mode 100644 index f67a828cddf75e6d1cca19e109a16fbef2a8f855..0000000000000000000000000000000000000000 --- a/spaces/Singularity666/RadiXGPT_/app.py +++ /dev/null @@ -1,149 +0,0 @@ -import torch -from PIL import Image -import streamlit as st -import numpy as np -import pandas as pd -from main import predict_caption, CLIPModel, get_text_embeddings -import openai -import base64 -from docx import Document -from docx.enum.text import WD_PARAGRAPH_ALIGNMENT -from io import BytesIO -import re - -openai.api_key = "sk-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" - -st.markdown( - """ - - """, - unsafe_allow_html=True, -) - - - -device = torch.device("cpu") - -testing_df = pd.read_csv("testing_df.csv") -model = CLIPModel() # Create an instance of CLIPModel -# Load the model -state_dict = torch.load("weights.pt", map_location=torch.device('cpu')) -print("Loaded State Dict Keys:", state_dict.keys()) 
-
-# Create the model on the target device
-model = CLIPModel().to(device)
-print("Model Keys:", model.state_dict().keys())
-
-# Load the state_dict into the model; strict=False ignores unexpected keys
-model.load_state_dict(state_dict, strict=False)
-
-text_embeddings = torch.load('saved_text_embeddings.pt', map_location=device)
-
-def download_link(content, filename, link_text):
-    b64 = base64.b64encode(content).decode()
-    # Embed the payload as a base64 data URI so Streamlit can serve it inline
-    href = f'<a href="data:application/octet-stream;base64,{b64}" download="{filename}">{link_text}</a>'
-    return href
-
-def show_predicted_caption(image, top_k=8):
-    matches = predict_caption(
-        image, model, text_embeddings, testing_df["caption"]
-    )[:top_k]
-    # Strip ROCO reference IDs from the retrieved captions
-    cleaned_matches = [re.sub(r'\s\(ROCO_\d+\)', '', match) for match in matches]
-    return cleaned_matches
-
-def generate_radiology_report(prompt):
-    response = openai.Completion.create(
-        engine="text-davinci-003",
-        prompt=prompt,
-        max_tokens=800,
-        n=1,
-        stop=None,
-        temperature=1,
-    )
-    report = response.choices[0].text.strip()
-    # Remove ROCO reference strings from the report
-    report = re.sub(r'\(ROCO_\d+\)', '', report).strip()
-    return report
-
-def save_as_docx(text, filename):
-    document = Document()
-    document.add_paragraph(text)
-    with BytesIO() as output:
-        document.save(output)
-        output.seek(0)
-        return output.getvalue()
-
-st.title("RadiXGPT: An Evolution of Machine Doctors Towards Radiology")
-
-# Collect the user's personal information
-st.subheader("Personal Information")
-first_name = st.text_input("First Name")
-last_name = st.text_input("Last Name")
-age = st.number_input("Age", min_value=0, max_value=120, value=25, step=1)
-gender = st.selectbox("Gender", ["Male", "Female", "Other"])
-
-st.write("Upload Scan to get Radiological Report:")
-uploaded_file = st.file_uploader("Choose an image...", type=["jpg", "png", "jpeg"])
-if uploaded_file is not None:
-    image = Image.open(uploaded_file)
-    if st.button("Generate Caption"):
-        with st.spinner("Generating caption..."):
-            image_np = np.array(image)
-            caption = show_predicted_caption(image_np)[0]
-
-        st.success(f"Caption: {caption}")
-
-        # Generate the radiology report
-        radiology_report = generate_radiology_report(f"Write Complete Radiology Report for this with clinical info, subjective, Assessment, Finding, Impressions, Conclusion and more in proper order : {caption}")
-
-        # Prepend the personal information to the radiology report
-        radiology_report_with_personal_info = f"Patient Name: {first_name} {last_name}\nAge: {age}\nGender: {gender}\n\n{radiology_report}"
-
-        st.header("Radiology Report")
-        st.write(radiology_report_with_personal_info)
-        st.markdown(download_link(save_as_docx(radiology_report_with_personal_info, "radiology_report.docx"), "radiology_report.docx", "Download Report as DOCX"), unsafe_allow_html=True)
-
-        feedback_options = ["Satisfied", "Not Satisfied"]
-        selected_feedback = st.radio("Please provide feedback on the generated report:", feedback_options)
-
-        if selected_feedback == "Not Satisfied":
-            if st.button("Regenerate Report"):
-                with st.spinner("Regenerating report..."):
-                    # Use the second-best retrieved match as the alternative caption
-                    alternative_caption = show_predicted_caption(image_np)[1]
-                    regenerated_radiology_report = generate_radiology_report(f"Write Complete Radiology Report for this with clinical info, subjective, Assessment, Finding, Impressions, Conclusion and more in proper order : {alternative_caption}")
-
-                    regenerated_radiology_report_with_personal_info = f"Patient Name: {first_name} {last_name}\nAge: {age}\nGender: 
{gender}\n\n{regenerated_radiology_report}" - - st.header("Regenerated Radiology Report") - st.write(regenerated_radiology_report_with_personal_info) - st.markdown(download_link(save_as_docx(regenerated_radiology_report_with_personal_info, "regenerated_radiology_report.docx"), "regenerated_radiology_report.docx", "Download Regenerated Report as DOCX"), unsafe_allow_html=True) \ No newline at end of file diff --git a/spaces/Smotto/Vocal-Isolator/src/constants.py b/spaces/Smotto/Vocal-Isolator/src/constants.py deleted file mode 100644 index 6845f8a7ababcb635ae7fb4f1a6ab68d2d16810a..0000000000000000000000000000000000000000 --- a/spaces/Smotto/Vocal-Isolator/src/constants.py +++ /dev/null @@ -1,9 +0,0 @@ -# Third-party -import torch - -# Global Variables -COMPUTATION_DEVICE = "cuda" if torch.cuda.is_available() else "cpu" -EXECUTION_PROVIDER_LIST = ["CUDAExecutionProvider", "CPUExecutionProvider"] -ONNX_MODEL_PATH = "./pretrained_models/MDX_net/Kim_Vocal.onnx" -INPUT_FOLDER = "./datasets/input" -OUTPUT_FOLDER = "./datasets/output" diff --git a/spaces/Sultannn/YOLOX-Demo/README.md b/spaces/Sultannn/YOLOX-Demo/README.md deleted file mode 100644 index c2554d23cda278a5563ab897aa06a12e5e2e5573..0000000000000000000000000000000000000000 --- a/spaces/Sultannn/YOLOX-Demo/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: YOLOX Demo -emoji: 🖼 -colorFrom: blue -colorTo: yellow -sdk: gradio -sdk_version: 2.8.10 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/_distutils_hack/override.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/_distutils_hack/override.py deleted file mode 100644 index 2cc433a4a55e3b41fa31089918fb62096092f89f..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/_distutils_hack/override.py +++ /dev/null @@ -1 +0,0 @@ -__import__('_distutils_hack').do_override() diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/aiohttp/web_fileresponse.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/aiohttp/web_fileresponse.py deleted file mode 100644 index f41ed3fd0a9c1e0d5e45ce1e97b99bfef8361cac..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/aiohttp/web_fileresponse.py +++ /dev/null @@ -1,288 +0,0 @@ -import asyncio -import mimetypes -import os -import pathlib -import sys -from typing import ( # noqa - IO, - TYPE_CHECKING, - Any, - Awaitable, - Callable, - Iterator, - List, - Optional, - Tuple, - Union, - cast, -) - -from . 
import hdrs -from .abc import AbstractStreamWriter -from .helpers import ETAG_ANY, ETag -from .typedefs import Final, LooseHeaders -from .web_exceptions import ( - HTTPNotModified, - HTTPPartialContent, - HTTPPreconditionFailed, - HTTPRequestRangeNotSatisfiable, -) -from .web_response import StreamResponse - -__all__ = ("FileResponse",) - -if TYPE_CHECKING: # pragma: no cover - from .web_request import BaseRequest - - -_T_OnChunkSent = Optional[Callable[[bytes], Awaitable[None]]] - - -NOSENDFILE: Final[bool] = bool(os.environ.get("AIOHTTP_NOSENDFILE")) - - -class FileResponse(StreamResponse): - """A response object can be used to send files.""" - - def __init__( - self, - path: Union[str, pathlib.Path], - chunk_size: int = 256 * 1024, - status: int = 200, - reason: Optional[str] = None, - headers: Optional[LooseHeaders] = None, - ) -> None: - super().__init__(status=status, reason=reason, headers=headers) - - if isinstance(path, str): - path = pathlib.Path(path) - - self._path = path - self._chunk_size = chunk_size - - async def _sendfile_fallback( - self, writer: AbstractStreamWriter, fobj: IO[Any], offset: int, count: int - ) -> AbstractStreamWriter: - # To keep memory usage low,fobj is transferred in chunks - # controlled by the constructor's chunk_size argument. - - chunk_size = self._chunk_size - loop = asyncio.get_event_loop() - - await loop.run_in_executor(None, fobj.seek, offset) - - chunk = await loop.run_in_executor(None, fobj.read, chunk_size) - while chunk: - await writer.write(chunk) - count = count - chunk_size - if count <= 0: - break - chunk = await loop.run_in_executor(None, fobj.read, min(chunk_size, count)) - - await writer.drain() - return writer - - async def _sendfile( - self, request: "BaseRequest", fobj: IO[Any], offset: int, count: int - ) -> AbstractStreamWriter: - writer = await super().prepare(request) - assert writer is not None - - if NOSENDFILE or sys.version_info < (3, 7) or self.compression: - return await self._sendfile_fallback(writer, fobj, offset, count) - - loop = request._loop - transport = request.transport - assert transport is not None - - try: - await loop.sendfile(transport, fobj, offset, count) - except NotImplementedError: - return await self._sendfile_fallback(writer, fobj, offset, count) - - await super().write_eof() - return writer - - @staticmethod - def _strong_etag_match(etag_value: str, etags: Tuple[ETag, ...]) -> bool: - if len(etags) == 1 and etags[0].value == ETAG_ANY: - return True - return any(etag.value == etag_value for etag in etags if not etag.is_weak) - - async def _not_modified( - self, request: "BaseRequest", etag_value: str, last_modified: float - ) -> Optional[AbstractStreamWriter]: - self.set_status(HTTPNotModified.status_code) - self._length_check = False - self.etag = etag_value # type: ignore[assignment] - self.last_modified = last_modified # type: ignore[assignment] - # Delete any Content-Length headers provided by user. 
HTTP 304 - # should always have empty response body - return await super().prepare(request) - - async def _precondition_failed( - self, request: "BaseRequest" - ) -> Optional[AbstractStreamWriter]: - self.set_status(HTTPPreconditionFailed.status_code) - self.content_length = 0 - return await super().prepare(request) - - async def prepare(self, request: "BaseRequest") -> Optional[AbstractStreamWriter]: - filepath = self._path - - gzip = False - if "gzip" in request.headers.get(hdrs.ACCEPT_ENCODING, ""): - gzip_path = filepath.with_name(filepath.name + ".gz") - - if gzip_path.is_file(): - filepath = gzip_path - gzip = True - - loop = asyncio.get_event_loop() - st: os.stat_result = await loop.run_in_executor(None, filepath.stat) - - etag_value = f"{st.st_mtime_ns:x}-{st.st_size:x}" - last_modified = st.st_mtime - - # https://tools.ietf.org/html/rfc7232#section-6 - ifmatch = request.if_match - if ifmatch is not None and not self._strong_etag_match(etag_value, ifmatch): - return await self._precondition_failed(request) - - unmodsince = request.if_unmodified_since - if ( - unmodsince is not None - and ifmatch is None - and st.st_mtime > unmodsince.timestamp() - ): - return await self._precondition_failed(request) - - ifnonematch = request.if_none_match - if ifnonematch is not None and self._strong_etag_match(etag_value, ifnonematch): - return await self._not_modified(request, etag_value, last_modified) - - modsince = request.if_modified_since - if ( - modsince is not None - and ifnonematch is None - and st.st_mtime <= modsince.timestamp() - ): - return await self._not_modified(request, etag_value, last_modified) - - if hdrs.CONTENT_TYPE not in self.headers: - ct, encoding = mimetypes.guess_type(str(filepath)) - if not ct: - ct = "application/octet-stream" - should_set_ct = True - else: - encoding = "gzip" if gzip else None - should_set_ct = False - - status = self._status - file_size = st.st_size - count = file_size - - start = None - - ifrange = request.if_range - if ifrange is None or st.st_mtime <= ifrange.timestamp(): - # If-Range header check: - # condition = cached date >= last modification date - # return 206 if True else 200. - # if False: - # Range header would not be processed, return 200 - # if True but Range header missing - # return 200 - try: - rng = request.http_range - start = rng.start - end = rng.stop - except ValueError: - # https://tools.ietf.org/html/rfc7233: - # A server generating a 416 (Range Not Satisfiable) response to - # a byte-range request SHOULD send a Content-Range header field - # with an unsatisfied-range value. - # The complete-length in a 416 response indicates the current - # length of the selected representation. - # - # Will do the same below. 
Many servers ignore this and do not - # send a Content-Range header with HTTP 416 - self.headers[hdrs.CONTENT_RANGE] = f"bytes */{file_size}" - self.set_status(HTTPRequestRangeNotSatisfiable.status_code) - return await super().prepare(request) - - # If a range request has been made, convert start, end slice - # notation into file pointer offset and count - if start is not None or end is not None: - if start < 0 and end is None: # return tail of file - start += file_size - if start < 0: - # if Range:bytes=-1000 in request header but file size - # is only 200, there would be trouble without this - start = 0 - count = file_size - start - else: - # rfc7233:If the last-byte-pos value is - # absent, or if the value is greater than or equal to - # the current length of the representation data, - # the byte range is interpreted as the remainder - # of the representation (i.e., the server replaces the - # value of last-byte-pos with a value that is one less than - # the current length of the selected representation). - count = ( - min(end if end is not None else file_size, file_size) - start - ) - - if start >= file_size: - # HTTP 416 should be returned in this case. - # - # According to https://tools.ietf.org/html/rfc7233: - # If a valid byte-range-set includes at least one - # byte-range-spec with a first-byte-pos that is less than - # the current length of the representation, or at least one - # suffix-byte-range-spec with a non-zero suffix-length, - # then the byte-range-set is satisfiable. Otherwise, the - # byte-range-set is unsatisfiable. - self.headers[hdrs.CONTENT_RANGE] = f"bytes */{file_size}" - self.set_status(HTTPRequestRangeNotSatisfiable.status_code) - return await super().prepare(request) - - status = HTTPPartialContent.status_code - # Even though you are sending the whole file, you should still - # return a HTTP 206 for a Range request. - self.set_status(status) - - if should_set_ct: - self.content_type = ct # type: ignore[assignment] - if encoding: - self.headers[hdrs.CONTENT_ENCODING] = encoding - if gzip: - self.headers[hdrs.VARY] = hdrs.ACCEPT_ENCODING - - self.etag = etag_value # type: ignore[assignment] - self.last_modified = st.st_mtime # type: ignore[assignment] - self.content_length = count - - self.headers[hdrs.ACCEPT_RANGES] = "bytes" - - real_start = cast(int, start) - - if status == HTTPPartialContent.status_code: - self.headers[hdrs.CONTENT_RANGE] = "bytes {}-{}/{}".format( - real_start, real_start + count - 1, file_size - ) - - # If we are sending 0 bytes calling sendfile() will throw a ValueError - if count == 0 or request.method == hdrs.METH_HEAD or self.status in [204, 304]: - return await super().prepare(request) - - fobj = await loop.run_in_executor(None, filepath.open, "rb") - if start: # be aware that start could be None or int=0 here. 
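-            # `start` is None when the request carried no usable Range header
-            # and 0 when the requested range begins at the first byte; the
-            # truthiness check above maps both cases to offset 0, which is the
-            # correct file position either way.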
- offset = start - else: - offset = 0 - - try: - return await self._sendfile(request, fobj, offset, count) - finally: - await loop.run_in_executor(None, fobj.close) diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/anyio/abc/_sockets.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/anyio/abc/_sockets.py deleted file mode 100644 index 6aac5f7c22395759ebe3d5633d2adcf1f4ff1fe5..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/anyio/abc/_sockets.py +++ /dev/null @@ -1,160 +0,0 @@ -from __future__ import annotations - -import socket -from abc import abstractmethod -from contextlib import AsyncExitStack -from io import IOBase -from ipaddress import IPv4Address, IPv6Address -from socket import AddressFamily -from typing import ( - Any, - Callable, - Collection, - Mapping, - Tuple, - TypeVar, - Union, -) - -from .._core._tasks import create_task_group -from .._core._typedattr import ( - TypedAttributeProvider, - TypedAttributeSet, - typed_attribute, -) -from ._streams import ByteStream, Listener, UnreliableObjectStream -from ._tasks import TaskGroup - -IPAddressType = Union[str, IPv4Address, IPv6Address] -IPSockAddrType = Tuple[str, int] -SockAddrType = Union[IPSockAddrType, str] -UDPPacketType = Tuple[bytes, IPSockAddrType] -T_Retval = TypeVar("T_Retval") - - -class SocketAttribute(TypedAttributeSet): - #: the address family of the underlying socket - family: AddressFamily = typed_attribute() - #: the local socket address of the underlying socket - local_address: SockAddrType = typed_attribute() - #: for IP addresses, the local port the underlying socket is bound to - local_port: int = typed_attribute() - #: the underlying stdlib socket object - raw_socket: socket.socket = typed_attribute() - #: the remote address the underlying socket is connected to - remote_address: SockAddrType = typed_attribute() - #: for IP addresses, the remote port the underlying socket is connected to - remote_port: int = typed_attribute() - - -class _SocketProvider(TypedAttributeProvider): - @property - def extra_attributes(self) -> Mapping[Any, Callable[[], Any]]: - from .._core._sockets import convert_ipv6_sockaddr as convert - - attributes: dict[Any, Callable[[], Any]] = { - SocketAttribute.family: lambda: self._raw_socket.family, - SocketAttribute.local_address: lambda: convert( - self._raw_socket.getsockname() - ), - SocketAttribute.raw_socket: lambda: self._raw_socket, - } - try: - peername: tuple[str, int] | None = convert(self._raw_socket.getpeername()) - except OSError: - peername = None - - # Provide the remote address for connected sockets - if peername is not None: - attributes[SocketAttribute.remote_address] = lambda: peername - - # Provide local and remote ports for IP based sockets - if self._raw_socket.family in (AddressFamily.AF_INET, AddressFamily.AF_INET6): - attributes[ - SocketAttribute.local_port - ] = lambda: self._raw_socket.getsockname()[1] - if peername is not None: - remote_port = peername[1] - attributes[SocketAttribute.remote_port] = lambda: remote_port - - return attributes - - @property - @abstractmethod - def _raw_socket(self) -> socket.socket: - pass - - -class SocketStream(ByteStream, _SocketProvider): - """ - Transports bytes over a socket. - - Supports all relevant extra attributes from :class:`~SocketAttribute`. 
- """ - - -class UNIXSocketStream(SocketStream): - @abstractmethod - async def send_fds(self, message: bytes, fds: Collection[int | IOBase]) -> None: - """ - Send file descriptors along with a message to the peer. - - :param message: a non-empty bytestring - :param fds: a collection of files (either numeric file descriptors or open file or socket - objects) - """ - - @abstractmethod - async def receive_fds(self, msglen: int, maxfds: int) -> tuple[bytes, list[int]]: - """ - Receive file descriptors along with a message from the peer. - - :param msglen: length of the message to expect from the peer - :param maxfds: maximum number of file descriptors to expect from the peer - :return: a tuple of (message, file descriptors) - """ - - -class SocketListener(Listener[SocketStream], _SocketProvider): - """ - Listens to incoming socket connections. - - Supports all relevant extra attributes from :class:`~SocketAttribute`. - """ - - @abstractmethod - async def accept(self) -> SocketStream: - """Accept an incoming connection.""" - - async def serve( - self, - handler: Callable[[SocketStream], Any], - task_group: TaskGroup | None = None, - ) -> None: - async with AsyncExitStack() as exit_stack: - if task_group is None: - task_group = await exit_stack.enter_async_context(create_task_group()) - - while True: - stream = await self.accept() - task_group.start_soon(handler, stream) - - -class UDPSocket(UnreliableObjectStream[UDPPacketType], _SocketProvider): - """ - Represents an unconnected UDP socket. - - Supports all relevant extra attributes from :class:`~SocketAttribute`. - """ - - async def sendto(self, data: bytes, host: str, port: int) -> None: - """Alias for :meth:`~.UnreliableObjectSendStream.send` ((data, (host, port))).""" - return await self.send((data, (host, port))) - - -class ConnectedUDPSocket(UnreliableObjectStream[bytes], _SocketProvider): - """ - Represents an connected UDP socket. - - Supports all relevant extra attributes from :class:`~SocketAttribute`. - """ diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_plugins/extensions/types/__init__.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_plugins/extensions/types/__init__.py deleted file mode 100644 index 274d7bc4fcf5f780858b55a14c6c4ac85f2a7f0d..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_plugins/extensions/types/__init__.py +++ /dev/null @@ -1,7 +0,0 @@ -import warnings -with warnings.catch_warnings(): - try: - __import__('pkg_resources').declare_namespace(__name__) - except ImportError: - import pkgutil - __path__ = pkgutil.extend_path(__path__, __name__) diff --git a/spaces/Superlang/ImageProcessor/annotator/openpose/face.py b/spaces/Superlang/ImageProcessor/annotator/openpose/face.py deleted file mode 100644 index f3c46d77664aa9fa91c63785a1485a396f05cacc..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/openpose/face.py +++ /dev/null @@ -1,362 +0,0 @@ -import logging -import numpy as np -from torchvision.transforms import ToTensor, ToPILImage -import torch -import torch.nn.functional as F -import cv2 - -from . import util -from torch.nn import Conv2d, Module, ReLU, MaxPool2d, init - - -class FaceNet(Module): - """Model the cascading heatmaps. 
""" - def __init__(self): - super(FaceNet, self).__init__() - # cnn to make feature map - self.relu = ReLU() - self.max_pooling_2d = MaxPool2d(kernel_size=2, stride=2) - self.conv1_1 = Conv2d(in_channels=3, out_channels=64, - kernel_size=3, stride=1, padding=1) - self.conv1_2 = Conv2d( - in_channels=64, out_channels=64, kernel_size=3, stride=1, - padding=1) - self.conv2_1 = Conv2d( - in_channels=64, out_channels=128, kernel_size=3, stride=1, - padding=1) - self.conv2_2 = Conv2d( - in_channels=128, out_channels=128, kernel_size=3, stride=1, - padding=1) - self.conv3_1 = Conv2d( - in_channels=128, out_channels=256, kernel_size=3, stride=1, - padding=1) - self.conv3_2 = Conv2d( - in_channels=256, out_channels=256, kernel_size=3, stride=1, - padding=1) - self.conv3_3 = Conv2d( - in_channels=256, out_channels=256, kernel_size=3, stride=1, - padding=1) - self.conv3_4 = Conv2d( - in_channels=256, out_channels=256, kernel_size=3, stride=1, - padding=1) - self.conv4_1 = Conv2d( - in_channels=256, out_channels=512, kernel_size=3, stride=1, - padding=1) - self.conv4_2 = Conv2d( - in_channels=512, out_channels=512, kernel_size=3, stride=1, - padding=1) - self.conv4_3 = Conv2d( - in_channels=512, out_channels=512, kernel_size=3, stride=1, - padding=1) - self.conv4_4 = Conv2d( - in_channels=512, out_channels=512, kernel_size=3, stride=1, - padding=1) - self.conv5_1 = Conv2d( - in_channels=512, out_channels=512, kernel_size=3, stride=1, - padding=1) - self.conv5_2 = Conv2d( - in_channels=512, out_channels=512, kernel_size=3, stride=1, - padding=1) - self.conv5_3_CPM = Conv2d( - in_channels=512, out_channels=128, kernel_size=3, stride=1, - padding=1) - - # stage1 - self.conv6_1_CPM = Conv2d( - in_channels=128, out_channels=512, kernel_size=1, stride=1, - padding=0) - self.conv6_2_CPM = Conv2d( - in_channels=512, out_channels=71, kernel_size=1, stride=1, - padding=0) - - # stage2 - self.Mconv1_stage2 = Conv2d( - in_channels=199, out_channels=128, kernel_size=7, stride=1, - padding=3) - self.Mconv2_stage2 = Conv2d( - in_channels=128, out_channels=128, kernel_size=7, stride=1, - padding=3) - self.Mconv3_stage2 = Conv2d( - in_channels=128, out_channels=128, kernel_size=7, stride=1, - padding=3) - self.Mconv4_stage2 = Conv2d( - in_channels=128, out_channels=128, kernel_size=7, stride=1, - padding=3) - self.Mconv5_stage2 = Conv2d( - in_channels=128, out_channels=128, kernel_size=7, stride=1, - padding=3) - self.Mconv6_stage2 = Conv2d( - in_channels=128, out_channels=128, kernel_size=1, stride=1, - padding=0) - self.Mconv7_stage2 = Conv2d( - in_channels=128, out_channels=71, kernel_size=1, stride=1, - padding=0) - - # stage3 - self.Mconv1_stage3 = Conv2d( - in_channels=199, out_channels=128, kernel_size=7, stride=1, - padding=3) - self.Mconv2_stage3 = Conv2d( - in_channels=128, out_channels=128, kernel_size=7, stride=1, - padding=3) - self.Mconv3_stage3 = Conv2d( - in_channels=128, out_channels=128, kernel_size=7, stride=1, - padding=3) - self.Mconv4_stage3 = Conv2d( - in_channels=128, out_channels=128, kernel_size=7, stride=1, - padding=3) - self.Mconv5_stage3 = Conv2d( - in_channels=128, out_channels=128, kernel_size=7, stride=1, - padding=3) - self.Mconv6_stage3 = Conv2d( - in_channels=128, out_channels=128, kernel_size=1, stride=1, - padding=0) - self.Mconv7_stage3 = Conv2d( - in_channels=128, out_channels=71, kernel_size=1, stride=1, - padding=0) - - # stage4 - self.Mconv1_stage4 = Conv2d( - in_channels=199, out_channels=128, kernel_size=7, stride=1, - padding=3) - self.Mconv2_stage4 = Conv2d( - 
in_channels=128, out_channels=128, kernel_size=7, stride=1, - padding=3) - self.Mconv3_stage4 = Conv2d( - in_channels=128, out_channels=128, kernel_size=7, stride=1, - padding=3) - self.Mconv4_stage4 = Conv2d( - in_channels=128, out_channels=128, kernel_size=7, stride=1, - padding=3) - self.Mconv5_stage4 = Conv2d( - in_channels=128, out_channels=128, kernel_size=7, stride=1, - padding=3) - self.Mconv6_stage4 = Conv2d( - in_channels=128, out_channels=128, kernel_size=1, stride=1, - padding=0) - self.Mconv7_stage4 = Conv2d( - in_channels=128, out_channels=71, kernel_size=1, stride=1, - padding=0) - - # stage5 - self.Mconv1_stage5 = Conv2d( - in_channels=199, out_channels=128, kernel_size=7, stride=1, - padding=3) - self.Mconv2_stage5 = Conv2d( - in_channels=128, out_channels=128, kernel_size=7, stride=1, - padding=3) - self.Mconv3_stage5 = Conv2d( - in_channels=128, out_channels=128, kernel_size=7, stride=1, - padding=3) - self.Mconv4_stage5 = Conv2d( - in_channels=128, out_channels=128, kernel_size=7, stride=1, - padding=3) - self.Mconv5_stage5 = Conv2d( - in_channels=128, out_channels=128, kernel_size=7, stride=1, - padding=3) - self.Mconv6_stage5 = Conv2d( - in_channels=128, out_channels=128, kernel_size=1, stride=1, - padding=0) - self.Mconv7_stage5 = Conv2d( - in_channels=128, out_channels=71, kernel_size=1, stride=1, - padding=0) - - # stage6 - self.Mconv1_stage6 = Conv2d( - in_channels=199, out_channels=128, kernel_size=7, stride=1, - padding=3) - self.Mconv2_stage6 = Conv2d( - in_channels=128, out_channels=128, kernel_size=7, stride=1, - padding=3) - self.Mconv3_stage6 = Conv2d( - in_channels=128, out_channels=128, kernel_size=7, stride=1, - padding=3) - self.Mconv4_stage6 = Conv2d( - in_channels=128, out_channels=128, kernel_size=7, stride=1, - padding=3) - self.Mconv5_stage6 = Conv2d( - in_channels=128, out_channels=128, kernel_size=7, stride=1, - padding=3) - self.Mconv6_stage6 = Conv2d( - in_channels=128, out_channels=128, kernel_size=1, stride=1, - padding=0) - self.Mconv7_stage6 = Conv2d( - in_channels=128, out_channels=71, kernel_size=1, stride=1, - padding=0) - - for m in self.modules(): - if isinstance(m, Conv2d): - init.constant_(m.bias, 0) - - def forward(self, x): - """Return a list of heatmaps.""" - heatmaps = [] - - h = self.relu(self.conv1_1(x)) - h = self.relu(self.conv1_2(h)) - h = self.max_pooling_2d(h) - h = self.relu(self.conv2_1(h)) - h = self.relu(self.conv2_2(h)) - h = self.max_pooling_2d(h) - h = self.relu(self.conv3_1(h)) - h = self.relu(self.conv3_2(h)) - h = self.relu(self.conv3_3(h)) - h = self.relu(self.conv3_4(h)) - h = self.max_pooling_2d(h) - h = self.relu(self.conv4_1(h)) - h = self.relu(self.conv4_2(h)) - h = self.relu(self.conv4_3(h)) - h = self.relu(self.conv4_4(h)) - h = self.relu(self.conv5_1(h)) - h = self.relu(self.conv5_2(h)) - h = self.relu(self.conv5_3_CPM(h)) - feature_map = h - - # stage1 - h = self.relu(self.conv6_1_CPM(h)) - h = self.conv6_2_CPM(h) - heatmaps.append(h) - - # stage2 - h = torch.cat([h, feature_map], dim=1) # channel concat - h = self.relu(self.Mconv1_stage2(h)) - h = self.relu(self.Mconv2_stage2(h)) - h = self.relu(self.Mconv3_stage2(h)) - h = self.relu(self.Mconv4_stage2(h)) - h = self.relu(self.Mconv5_stage2(h)) - h = self.relu(self.Mconv6_stage2(h)) - h = self.Mconv7_stage2(h) - heatmaps.append(h) - - # stage3 - h = torch.cat([h, feature_map], dim=1) # channel concat - h = self.relu(self.Mconv1_stage3(h)) - h = self.relu(self.Mconv2_stage3(h)) - h = self.relu(self.Mconv3_stage3(h)) - h = 
self.relu(self.Mconv4_stage3(h)) - h = self.relu(self.Mconv5_stage3(h)) - h = self.relu(self.Mconv6_stage3(h)) - h = self.Mconv7_stage3(h) - heatmaps.append(h) - - # stage4 - h = torch.cat([h, feature_map], dim=1) # channel concat - h = self.relu(self.Mconv1_stage4(h)) - h = self.relu(self.Mconv2_stage4(h)) - h = self.relu(self.Mconv3_stage4(h)) - h = self.relu(self.Mconv4_stage4(h)) - h = self.relu(self.Mconv5_stage4(h)) - h = self.relu(self.Mconv6_stage4(h)) - h = self.Mconv7_stage4(h) - heatmaps.append(h) - - # stage5 - h = torch.cat([h, feature_map], dim=1) # channel concat - h = self.relu(self.Mconv1_stage5(h)) - h = self.relu(self.Mconv2_stage5(h)) - h = self.relu(self.Mconv3_stage5(h)) - h = self.relu(self.Mconv4_stage5(h)) - h = self.relu(self.Mconv5_stage5(h)) - h = self.relu(self.Mconv6_stage5(h)) - h = self.Mconv7_stage5(h) - heatmaps.append(h) - - # stage6 - h = torch.cat([h, feature_map], dim=1) # channel concat - h = self.relu(self.Mconv1_stage6(h)) - h = self.relu(self.Mconv2_stage6(h)) - h = self.relu(self.Mconv3_stage6(h)) - h = self.relu(self.Mconv4_stage6(h)) - h = self.relu(self.Mconv5_stage6(h)) - h = self.relu(self.Mconv6_stage6(h)) - h = self.Mconv7_stage6(h) - heatmaps.append(h) - - return heatmaps - - -LOG = logging.getLogger(__name__) -TOTEN = ToTensor() -TOPIL = ToPILImage() - - -params = { - 'gaussian_sigma': 2.5, - 'inference_img_size': 736, # 368, 736, 1312 - 'heatmap_peak_thresh': 0.1, - 'crop_scale': 1.5, - 'line_indices': [ - [0, 1], [1, 2], [2, 3], [3, 4], [4, 5], [5, 6], - [6, 7], [7, 8], [8, 9], [9, 10], [10, 11], [11, 12], [12, 13], - [13, 14], [14, 15], [15, 16], - [17, 18], [18, 19], [19, 20], [20, 21], - [22, 23], [23, 24], [24, 25], [25, 26], - [27, 28], [28, 29], [29, 30], - [31, 32], [32, 33], [33, 34], [34, 35], - [36, 37], [37, 38], [38, 39], [39, 40], [40, 41], [41, 36], - [42, 43], [43, 44], [44, 45], [45, 46], [46, 47], [47, 42], - [48, 49], [49, 50], [50, 51], [51, 52], [52, 53], [53, 54], - [54, 55], [55, 56], [56, 57], [57, 58], [58, 59], [59, 48], - [60, 61], [61, 62], [62, 63], [63, 64], [64, 65], [65, 66], - [66, 67], [67, 60] - ], -} - - -class Face(object): - """ - The OpenPose face landmark detector model. 
- - Args: - inference_size: set the size of the inference image size, suggested: - 368, 736, 1312, default 736 - gaussian_sigma: blur the heatmaps, default 2.5 - heatmap_peak_thresh: return landmark if over threshold, default 0.1 - - """ - def __init__(self, face_model_path, - inference_size=None, - gaussian_sigma=None, - heatmap_peak_thresh=None): - self.inference_size = inference_size or params["inference_img_size"] - self.sigma = gaussian_sigma or params['gaussian_sigma'] - self.threshold = heatmap_peak_thresh or params["heatmap_peak_thresh"] - self.model = FaceNet() - self.model.load_state_dict(torch.load(face_model_path)) - # if torch.cuda.is_available(): - # self.model = self.model.cuda() - # print('cuda') - self.model.eval() - - def __call__(self, face_img): - H, W, C = face_img.shape - - w_size = 384 - x_data = torch.from_numpy(util.smart_resize(face_img, (w_size, w_size))).permute([2, 0, 1]) / 256.0 - 0.5 - - x_data = x_data.to(self.cn_device) - - with torch.no_grad(): - hs = self.model(x_data[None, ...]) - heatmaps = F.interpolate( - hs[-1], - (H, W), - mode='bilinear', align_corners=True).cpu().numpy()[0] - return heatmaps - - def compute_peaks_from_heatmaps(self, heatmaps): - all_peaks = [] - for part in range(heatmaps.shape[0]): - map_ori = heatmaps[part].copy() - binary = np.ascontiguousarray(map_ori > 0.05, dtype=np.uint8) - - if np.sum(binary) == 0: - continue - - positions = np.where(binary > 0.5) - intensities = map_ori[positions] - mi = np.argmax(intensities) - y, x = positions[0][mi], positions[1][mi] - all_peaks.append([x, y]) - - return np.array(all_peaks) \ No newline at end of file diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/ops/roi_align_rotated.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/ops/roi_align_rotated.py deleted file mode 100644 index 0ce4961a3555d4da8bc3e32f1f7d5ad50036587d..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/ops/roi_align_rotated.py +++ /dev/null @@ -1,177 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
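-# A minimal usage sketch (tensor shapes and the .cuda() placement are
-# illustrative assumptions; the op needs the compiled mmcv _ext extension).
-# Each roi is (batch_index, center_x, center_y, width, height, angle), with
-# the angle in radians, as documented on RoIAlignRotated below.
-#
-#   feats = torch.rand(1, 16, 32, 32).cuda()
-#   rois = torch.tensor([[0., 16., 16., 8., 4., 0.5]]).cuda()
-#   layer = RoIAlignRotated(out_size=(7, 7), spatial_scale=1.0)
-#   pooled = layer(feats, rois)  # -> shape (1, 16, 7, 7)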
-import torch.nn as nn
-from torch.autograd import Function
-
-from ..utils import ext_loader
-
-ext_module = ext_loader.load_ext(
-    '_ext', ['roi_align_rotated_forward', 'roi_align_rotated_backward'])
-
-
-class RoIAlignRotatedFunction(Function):
-
-    @staticmethod
-    def symbolic(g, features, rois, out_size, spatial_scale, sample_num,
-                 aligned, clockwise):
-        if isinstance(out_size, int):
-            out_h = out_size
-            out_w = out_size
-        elif isinstance(out_size, tuple):
-            assert len(out_size) == 2
-            assert isinstance(out_size[0], int)
-            assert isinstance(out_size[1], int)
-            out_h, out_w = out_size
-        else:
-            raise TypeError(
-                '"out_size" must be an integer or tuple of integers')
-        return g.op(
-            'mmcv::MMCVRoIAlignRotated',
-            features,
-            rois,
-            output_height_i=out_h,
-            output_width_i=out_w,
-            spatial_scale_f=spatial_scale,
-            sampling_ratio_i=sample_num,
-            aligned_i=aligned,
-            clockwise_i=clockwise)
-
-    @staticmethod
-    def forward(ctx,
-                features,
-                rois,
-                out_size,
-                spatial_scale,
-                sample_num=0,
-                aligned=True,
-                clockwise=False):
-        if isinstance(out_size, int):
-            out_h = out_size
-            out_w = out_size
-        elif isinstance(out_size, tuple):
-            assert len(out_size) == 2
-            assert isinstance(out_size[0], int)
-            assert isinstance(out_size[1], int)
-            out_h, out_w = out_size
-        else:
-            raise TypeError(
-                '"out_size" must be an integer or tuple of integers')
-        ctx.spatial_scale = spatial_scale
-        ctx.sample_num = sample_num
-        ctx.aligned = aligned
-        ctx.clockwise = clockwise
-        ctx.save_for_backward(rois)
-        ctx.feature_size = features.size()
-
-        batch_size, num_channels, data_height, data_width = features.size()
-        num_rois = rois.size(0)
-
-        output = features.new_zeros(num_rois, num_channels, out_h, out_w)
-        ext_module.roi_align_rotated_forward(
-            features,
-            rois,
-            output,
-            pooled_height=out_h,
-            pooled_width=out_w,
-            spatial_scale=spatial_scale,
-            sample_num=sample_num,
-            aligned=aligned,
-            clockwise=clockwise)
-        return output
-
-    @staticmethod
-    def backward(ctx, grad_output):
-        feature_size = ctx.feature_size
-        spatial_scale = ctx.spatial_scale
-        aligned = ctx.aligned
-        clockwise = ctx.clockwise
-        sample_num = ctx.sample_num
-        rois = ctx.saved_tensors[0]
-        assert feature_size is not None
-        batch_size, num_channels, data_height, data_width = feature_size
-
-        out_w = grad_output.size(3)
-        out_h = grad_output.size(2)
-
-        grad_input = grad_rois = None
-
-        if ctx.needs_input_grad[0]:
-            grad_input = rois.new_zeros(batch_size, num_channels, data_height,
-                                        data_width)
-            ext_module.roi_align_rotated_backward(
-                grad_output.contiguous(),
-                rois,
-                grad_input,
-                pooled_height=out_h,
-                pooled_width=out_w,
-                spatial_scale=spatial_scale,
-                sample_num=sample_num,
-                aligned=aligned,
-                clockwise=clockwise)
-        return grad_input, grad_rois, None, None, None, None, None
-
-
-roi_align_rotated = RoIAlignRotatedFunction.apply
-
-
-class RoIAlignRotated(nn.Module):
-    """RoI align pooling layer for rotated proposals.
-
-    It accepts a feature map of shape (N, C, H, W) and rois with shape
-    (n, 6) with each roi decoded as (batch_index, center_x, center_y,
-    w, h, angle). The angle is in radian.
-
-    Args:
-        out_size (tuple): h, w
-        spatial_scale (float): scale the input boxes by this number
-        sample_num (int): number of inputs samples to take for each
-            output sample. 0 to take samples densely for current models.
-        aligned (bool): if False, use the legacy implementation in
-            MMDetection. If True, align the results more perfectly.
-            Default: True.
- clockwise (bool): If True, the angle in each proposal follows a - clockwise fashion in image space, otherwise, the angle is - counterclockwise. Default: False. - - Note: - The implementation of RoIAlign when aligned=True is modified from - https://github.com/facebookresearch/detectron2/ - - The meaning of aligned=True: - - Given a continuous coordinate c, its two neighboring pixel - indices (in our pixel model) are computed by floor(c - 0.5) and - ceil(c - 0.5). For example, c=1.3 has pixel neighbors with discrete - indices [0] and [1] (which are sampled from the underlying signal - at continuous coordinates 0.5 and 1.5). But the original roi_align - (aligned=False) does not subtract the 0.5 when computing - neighboring pixel indices and therefore it uses pixels with a - slightly incorrect alignment (relative to our pixel model) when - performing bilinear interpolation. - - With `aligned=True`, - we first appropriately scale the ROI and then shift it by -0.5 - prior to calling roi_align. This produces the correct neighbors; - - The difference does not make a difference to the model's - performance if ROIAlign is used together with conv layers. - """ - - def __init__(self, - out_size, - spatial_scale, - sample_num=0, - aligned=True, - clockwise=False): - super(RoIAlignRotated, self).__init__() - - self.out_size = out_size - self.spatial_scale = float(spatial_scale) - self.sample_num = int(sample_num) - self.aligned = aligned - self.clockwise = clockwise - - def forward(self, features, rois): - return RoIAlignRotatedFunction.apply(features, rois, self.out_size, - self.spatial_scale, - self.sample_num, self.aligned, - self.clockwise) diff --git a/spaces/TabbyML/tabby-template-space/README.md b/spaces/TabbyML/tabby-template-space/README.md deleted file mode 100644 index 9346d32f16fd9787bbb1a957171223be775bf3a5..0000000000000000000000000000000000000000 --- a/spaces/TabbyML/tabby-template-space/README.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: Tabby Template Space -emoji: 🏷️ -colorFrom: gray -colorTo: purple -sdk: docker -app_port: 8080 -fullWidth: true -suggested_storage: small -suggested_hardware: t4-medium -tags: - - tabby ---- - -This is the Tabby Space Template you can use to deploy and run your own instance of Tabby on the Hugging Face Hub. diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_vendor/packaging/_elffile.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_vendor/packaging/_elffile.py deleted file mode 100644 index 6fb19b30bb53c18f38a9ef02dd7c4478670fb962..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_vendor/packaging/_elffile.py +++ /dev/null @@ -1,108 +0,0 @@ -""" -ELF file parser. - -This provides a class ``ELFFile`` that parses an ELF executable in a similar -interface to ``ZipFile``. Only the read interface is implemented. - -Based on: https://gist.github.com/lyssdod/f51579ae8d93c8657a5564aefc2ffbca -ELF header: https://refspecs.linuxfoundation.org/elf/gabi4+/ch4.eheader.html -""" - -import enum -import os -import struct -from typing import IO, Optional, Tuple - - -class ELFInvalid(ValueError): - pass - - -class EIClass(enum.IntEnum): - C32 = 1 - C64 = 2 - - -class EIData(enum.IntEnum): - Lsb = 1 - Msb = 2 - - -class EMachine(enum.IntEnum): - I386 = 3 - S390 = 22 - Arm = 40 - X8664 = 62 - AArc64 = 183 - - -class ELFFile: - """ - Representation of an ELF executable. 
- """
-
-    def __init__(self, f: IO[bytes]) -> None:
-        self._f = f
-
-        try:
-            ident = self._read("16B")
-        except struct.error:
-            raise ELFInvalid("unable to parse identification")
-        magic = bytes(ident[:4])
-        if magic != b"\x7fELF":
-            raise ELFInvalid(f"invalid magic: {magic!r}")
-
-        self.capacity = ident[4]  # Format for program header (bitness).
-        self.encoding = ident[5]  # Data structure encoding (endianness).
-
-        try:
-            # e_fmt: Format for program header.
-            # p_fmt: Format for section header.
-            # p_idx: Indexes to find p_type, p_offset, and p_filesz.
-            e_fmt, self._p_fmt, self._p_idx = {
-                (1, 1): ("<HHIIIIIHHH", "<IIIIIIII", (0, 1, 4)),  # 32-bit LSB.
-                (1, 2): (">HHIIIIIHHH", ">IIIIIIII", (0, 1, 4)),  # 32-bit MSB.
-                (2, 1): ("<HHIQQQIHHH", "<IIQQQQQQ", (0, 2, 5)),  # 64-bit LSB.
-                (2, 2): (">HHIQQQIHHH", ">IIQQQQQQ", (0, 2, 5)),  # 64-bit MSB.
-            }[(self.capacity, self.encoding)]
-        except KeyError:
-            raise ELFInvalid(
-                f"unrecognized capacity ({self.capacity}) or "
-                f"encoding ({self.encoding})"
-            )
-
-        try:
-            (
-                _,
-                self.machine,  # Architecture type.
-                _,
-                _,
-                self._e_phoff,  # Offset of program header.
-                _,
-                self.flags,  # Processor-specific flags.
-                _,
-                self._e_phentsize,  # Size of section.
-                self._e_phnum,  # Number of sections.
-            ) = self._read(e_fmt)
-        except struct.error as e:
-            raise ELFInvalid("unable to parse machine and section information") from e
-
-    def _read(self, fmt: str) -> Tuple[int, ...]:
-        return struct.unpack(fmt, self._f.read(struct.calcsize(fmt)))
-
-    @property
-    def interpreter(self) -> Optional[str]:
-        """
-        The path recorded in the ``PT_INTERP`` section header.
-        """
-        for index in range(self._e_phnum):
-            self._f.seek(self._e_phoff + self._e_phentsize * index)
-            try:
-                data = self._read(self._p_fmt)
-            except struct.error:
-                continue
-            if data[self._p_idx[0]] != 3:  # Not PT_INTERP.
-                continue
-            self._f.seek(data[self._p_idx[1]])
-            return os.fsdecode(self._f.read(data[self._p_idx[2]])).strip("\0")
-        return None
diff --git a/spaces/Tape/yoga/openpose/model.py b/spaces/Tape/yoga/openpose/model.py
deleted file mode 100644
index 5dfc80de827a17beccb9b0f3f7588545be78c9de..0000000000000000000000000000000000000000
--- a/spaces/Tape/yoga/openpose/model.py
+++ /dev/null
@@ -1,219 +0,0 @@
-import torch
-from collections import OrderedDict
-
-import torch.nn as nn
-
-def make_layers(block, no_relu_layers):
-    layers = []
-    for layer_name, v in block.items():
-        if 'pool' in layer_name:
-            layer = nn.MaxPool2d(kernel_size=v[0], stride=v[1],
-                                 padding=v[2])
-            layers.append((layer_name, layer))
-        else:
-            conv2d = nn.Conv2d(in_channels=v[0], out_channels=v[1],
-                               kernel_size=v[2], stride=v[3],
-                               padding=v[4])
-            layers.append((layer_name, conv2d))
-            if layer_name not in no_relu_layers:
-                layers.append(('relu_'+layer_name, nn.ReLU(inplace=True)))
-
-    return nn.Sequential(OrderedDict(layers))
-
-class bodypose_model(nn.Module):
-    def __init__(self):
-        super(bodypose_model, self).__init__()
-
-        # these layers have no relu layer
-        no_relu_layers = ['conv5_5_CPM_L1', 'conv5_5_CPM_L2', 'Mconv7_stage2_L1',\
-                          'Mconv7_stage2_L2', 'Mconv7_stage3_L1', 'Mconv7_stage3_L2',\
-                          'Mconv7_stage4_L1', 'Mconv7_stage4_L2', 'Mconv7_stage5_L1',\
-                          'Mconv7_stage5_L2', 'Mconv7_stage6_L1', 'Mconv7_stage6_L2']
-        blocks = {}
-        block0 = OrderedDict([
-            ('conv1_1', [3, 64, 3, 1, 1]),
-            ('conv1_2', [64, 64, 3, 1, 1]),
-            ('pool1_stage1', [2, 2, 0]),
-            ('conv2_1', [64, 128, 3, 1, 1]),
-            ('conv2_2', [128, 128, 3, 1, 1]),
-            ('pool2_stage1', [2, 2, 0]),
-            ('conv3_1', [128, 256, 3, 1, 1]),
-            ('conv3_2', [256, 256, 3, 1, 1]),
-            ('conv3_3', [256, 256, 3, 1, 1]),
-            ('conv3_4', [256, 256, 3, 1, 1]),
-            ('pool3_stage1', [2, 2, 0]),
-            
('conv4_1', [256, 512, 3, 1, 1]), - ('conv4_2', [512, 512, 3, 1, 1]), - ('conv4_3_CPM', [512, 256, 3, 1, 1]), - ('conv4_4_CPM', [256, 128, 3, 1, 1]) - ]) - - - # Stage 1 - block1_1 = OrderedDict([ - ('conv5_1_CPM_L1', [128, 128, 3, 1, 1]), - ('conv5_2_CPM_L1', [128, 128, 3, 1, 1]), - ('conv5_3_CPM_L1', [128, 128, 3, 1, 1]), - ('conv5_4_CPM_L1', [128, 512, 1, 1, 0]), - ('conv5_5_CPM_L1', [512, 38, 1, 1, 0]) - ]) - - block1_2 = OrderedDict([ - ('conv5_1_CPM_L2', [128, 128, 3, 1, 1]), - ('conv5_2_CPM_L2', [128, 128, 3, 1, 1]), - ('conv5_3_CPM_L2', [128, 128, 3, 1, 1]), - ('conv5_4_CPM_L2', [128, 512, 1, 1, 0]), - ('conv5_5_CPM_L2', [512, 19, 1, 1, 0]) - ]) - blocks['block1_1'] = block1_1 - blocks['block1_2'] = block1_2 - - self.model0 = make_layers(block0, no_relu_layers) - - # Stages 2 - 6 - for i in range(2, 7): - blocks['block%d_1' % i] = OrderedDict([ - ('Mconv1_stage%d_L1' % i, [185, 128, 7, 1, 3]), - ('Mconv2_stage%d_L1' % i, [128, 128, 7, 1, 3]), - ('Mconv3_stage%d_L1' % i, [128, 128, 7, 1, 3]), - ('Mconv4_stage%d_L1' % i, [128, 128, 7, 1, 3]), - ('Mconv5_stage%d_L1' % i, [128, 128, 7, 1, 3]), - ('Mconv6_stage%d_L1' % i, [128, 128, 1, 1, 0]), - ('Mconv7_stage%d_L1' % i, [128, 38, 1, 1, 0]) - ]) - - blocks['block%d_2' % i] = OrderedDict([ - ('Mconv1_stage%d_L2' % i, [185, 128, 7, 1, 3]), - ('Mconv2_stage%d_L2' % i, [128, 128, 7, 1, 3]), - ('Mconv3_stage%d_L2' % i, [128, 128, 7, 1, 3]), - ('Mconv4_stage%d_L2' % i, [128, 128, 7, 1, 3]), - ('Mconv5_stage%d_L2' % i, [128, 128, 7, 1, 3]), - ('Mconv6_stage%d_L2' % i, [128, 128, 1, 1, 0]), - ('Mconv7_stage%d_L2' % i, [128, 19, 1, 1, 0]) - ]) - - for k in blocks.keys(): - blocks[k] = make_layers(blocks[k], no_relu_layers) - - self.model1_1 = blocks['block1_1'] - self.model2_1 = blocks['block2_1'] - self.model3_1 = blocks['block3_1'] - self.model4_1 = blocks['block4_1'] - self.model5_1 = blocks['block5_1'] - self.model6_1 = blocks['block6_1'] - - self.model1_2 = blocks['block1_2'] - self.model2_2 = blocks['block2_2'] - self.model3_2 = blocks['block3_2'] - self.model4_2 = blocks['block4_2'] - self.model5_2 = blocks['block5_2'] - self.model6_2 = blocks['block6_2'] - - - def forward(self, x): - - out1 = self.model0(x) - - out1_1 = self.model1_1(out1) - out1_2 = self.model1_2(out1) - out2 = torch.cat([out1_1, out1_2, out1], 1) - - out2_1 = self.model2_1(out2) - out2_2 = self.model2_2(out2) - out3 = torch.cat([out2_1, out2_2, out1], 1) - - out3_1 = self.model3_1(out3) - out3_2 = self.model3_2(out3) - out4 = torch.cat([out3_1, out3_2, out1], 1) - - out4_1 = self.model4_1(out4) - out4_2 = self.model4_2(out4) - out5 = torch.cat([out4_1, out4_2, out1], 1) - - out5_1 = self.model5_1(out5) - out5_2 = self.model5_2(out5) - out6 = torch.cat([out5_1, out5_2, out1], 1) - - out6_1 = self.model6_1(out6) - out6_2 = self.model6_2(out6) - - return out6_1, out6_2 - -class handpose_model(nn.Module): - def __init__(self): - super(handpose_model, self).__init__() - - # these layers have no relu layer - no_relu_layers = ['conv6_2_CPM', 'Mconv7_stage2', 'Mconv7_stage3',\ - 'Mconv7_stage4', 'Mconv7_stage5', 'Mconv7_stage6'] - # stage 1 - block1_0 = OrderedDict([ - ('conv1_1', [3, 64, 3, 1, 1]), - ('conv1_2', [64, 64, 3, 1, 1]), - ('pool1_stage1', [2, 2, 0]), - ('conv2_1', [64, 128, 3, 1, 1]), - ('conv2_2', [128, 128, 3, 1, 1]), - ('pool2_stage1', [2, 2, 0]), - ('conv3_1', [128, 256, 3, 1, 1]), - ('conv3_2', [256, 256, 3, 1, 1]), - ('conv3_3', [256, 256, 3, 1, 1]), - ('conv3_4', [256, 256, 3, 1, 1]), - ('pool3_stage1', [2, 2, 0]), - ('conv4_1', [256, 512, 3, 1, 1]), - 
('conv4_2', [512, 512, 3, 1, 1]), - ('conv4_3', [512, 512, 3, 1, 1]), - ('conv4_4', [512, 512, 3, 1, 1]), - ('conv5_1', [512, 512, 3, 1, 1]), - ('conv5_2', [512, 512, 3, 1, 1]), - ('conv5_3_CPM', [512, 128, 3, 1, 1]) - ]) - - block1_1 = OrderedDict([ - ('conv6_1_CPM', [128, 512, 1, 1, 0]), - ('conv6_2_CPM', [512, 22, 1, 1, 0]) - ]) - - blocks = {} - blocks['block1_0'] = block1_0 - blocks['block1_1'] = block1_1 - - # stage 2-6 - for i in range(2, 7): - blocks['block%d' % i] = OrderedDict([ - ('Mconv1_stage%d' % i, [150, 128, 7, 1, 3]), - ('Mconv2_stage%d' % i, [128, 128, 7, 1, 3]), - ('Mconv3_stage%d' % i, [128, 128, 7, 1, 3]), - ('Mconv4_stage%d' % i, [128, 128, 7, 1, 3]), - ('Mconv5_stage%d' % i, [128, 128, 7, 1, 3]), - ('Mconv6_stage%d' % i, [128, 128, 1, 1, 0]), - ('Mconv7_stage%d' % i, [128, 22, 1, 1, 0]) - ]) - - for k in blocks.keys(): - blocks[k] = make_layers(blocks[k], no_relu_layers) - - self.model1_0 = blocks['block1_0'] - self.model1_1 = blocks['block1_1'] - self.model2 = blocks['block2'] - self.model3 = blocks['block3'] - self.model4 = blocks['block4'] - self.model5 = blocks['block5'] - self.model6 = blocks['block6'] - - def forward(self, x): - out1_0 = self.model1_0(x) - out1_1 = self.model1_1(out1_0) - concat_stage2 = torch.cat([out1_1, out1_0], 1) - out_stage2 = self.model2(concat_stage2) - concat_stage3 = torch.cat([out_stage2, out1_0], 1) - out_stage3 = self.model3(concat_stage3) - concat_stage4 = torch.cat([out_stage3, out1_0], 1) - out_stage4 = self.model4(concat_stage4) - concat_stage5 = torch.cat([out_stage4, out1_0], 1) - out_stage5 = self.model5(concat_stage5) - concat_stage6 = torch.cat([out_stage5, out1_0], 1) - out_stage6 = self.model6(concat_stage6) - return out_stage6 - - diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/modeling/proposal_generator/rpn.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/modeling/proposal_generator/rpn.py deleted file mode 100644 index 99cd536d2f9880d2049390c45f73eb22335e1b82..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/modeling/proposal_generator/rpn.py +++ /dev/null @@ -1,533 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from typing import Dict, List, Optional, Tuple, Union -import torch -import torch.nn.functional as F -from torch import nn - -from detectron2.config import configurable -from detectron2.layers import Conv2d, ShapeSpec, cat -from detectron2.structures import Boxes, ImageList, Instances, pairwise_iou -from detectron2.utils.events import get_event_storage -from detectron2.utils.memory import retry_if_cuda_oom -from detectron2.utils.registry import Registry - -from ..anchor_generator import build_anchor_generator -from ..box_regression import Box2BoxTransform, _dense_box_regression_loss -from ..matcher import Matcher -from ..sampling import subsample_labels -from .build import PROPOSAL_GENERATOR_REGISTRY -from .proposal_utils import find_top_rpn_proposals - -RPN_HEAD_REGISTRY = Registry("RPN_HEAD") -RPN_HEAD_REGISTRY.__doc__ = """ -Registry for RPN heads, which take feature maps and perform -objectness classification and bounding box regression for anchors. - -The registered object will be called with `obj(cfg, input_shape)`. -The call should return a `nn.Module` object. 
-""" - - -""" -Shape shorthand in this module: - - N: number of images in the minibatch - L: number of feature maps per image on which RPN is run - A: number of cell anchors (must be the same for all feature maps) - Hi, Wi: height and width of the i-th feature map - B: size of the box parameterization - -Naming convention: - - objectness: refers to the binary classification of an anchor as object vs. not object. - - deltas: refers to the 4-d (dx, dy, dw, dh) deltas that parameterize the box2box - transform (see :class:`box_regression.Box2BoxTransform`), or 5d for rotated boxes. - - pred_objectness_logits: predicted objectness scores in [-inf, +inf]; use - sigmoid(pred_objectness_logits) to estimate P(object). - - gt_labels: ground-truth binary classification labels for objectness - - pred_anchor_deltas: predicted box2box transform deltas - - gt_anchor_deltas: ground-truth box2box transform deltas -""" - - -def build_rpn_head(cfg, input_shape): - """ - Build an RPN head defined by `cfg.MODEL.RPN.HEAD_NAME`. - """ - name = cfg.MODEL.RPN.HEAD_NAME - return RPN_HEAD_REGISTRY.get(name)(cfg, input_shape) - - -@RPN_HEAD_REGISTRY.register() -class StandardRPNHead(nn.Module): - """ - Standard RPN classification and regression heads described in :paper:`Faster R-CNN`. - Uses a 3x3 conv to produce a shared hidden state from which one 1x1 conv predicts - objectness logits for each anchor and a second 1x1 conv predicts bounding-box deltas - specifying how to deform each anchor into an object proposal. - """ - - @configurable - def __init__( - self, *, in_channels: int, num_anchors: int, box_dim: int = 4, conv_dims: List[int] = (-1,) - ): - """ - NOTE: this interface is experimental. - - Args: - in_channels (int): number of input feature channels. When using multiple - input features, they must have the same number of channels. - num_anchors (int): number of anchors to predict for *each spatial position* - on the feature map. The total number of anchors for each - feature map will be `num_anchors * H * W`. - box_dim (int): dimension of a box, which is also the number of box regression - predictions to make for each anchor. An axis aligned box has - box_dim=4, while a rotated box has box_dim=5. - conv_dims (list[int]): a list of integers representing the output channels - of N conv layers. Set it to -1 to use the same number of output channels - as input channels. - """ - super().__init__() - cur_channels = in_channels - # Keeping the old variable names and structure for backwards compatiblity. - # Otherwise the old checkpoints will fail to load. - if len(conv_dims) == 1: - out_channels = cur_channels if conv_dims[0] == -1 else conv_dims[0] - # 3x3 conv for the hidden representation - self.conv = self._get_rpn_conv(cur_channels, out_channels) - cur_channels = out_channels - else: - self.conv = nn.Sequential() - for k, conv_dim in enumerate(conv_dims): - out_channels = cur_channels if conv_dim == -1 else conv_dim - if out_channels <= 0: - raise ValueError( - f"Conv output channels should be greater than 0. 
Got {out_channels}" - ) - conv = self._get_rpn_conv(cur_channels, out_channels) - self.conv.add_module(f"conv{k}", conv) - cur_channels = out_channels - # 1x1 conv for predicting objectness logits - self.objectness_logits = nn.Conv2d(cur_channels, num_anchors, kernel_size=1, stride=1) - # 1x1 conv for predicting box2box transform deltas - self.anchor_deltas = nn.Conv2d(cur_channels, num_anchors * box_dim, kernel_size=1, stride=1) - - # Keeping the order of weights initialization same for backwards compatiblility. - for layer in self.modules(): - if isinstance(layer, nn.Conv2d): - nn.init.normal_(layer.weight, std=0.01) - nn.init.constant_(layer.bias, 0) - - def _get_rpn_conv(self, in_channels, out_channels): - return Conv2d( - in_channels, - out_channels, - kernel_size=3, - stride=1, - padding=1, - activation=nn.ReLU(), - ) - - @classmethod - def from_config(cls, cfg, input_shape): - # Standard RPN is shared across levels: - in_channels = [s.channels for s in input_shape] - assert len(set(in_channels)) == 1, "Each level must have the same channel!" - in_channels = in_channels[0] - - # RPNHead should take the same input as anchor generator - # NOTE: it assumes that creating an anchor generator does not have unwanted side effect. - anchor_generator = build_anchor_generator(cfg, input_shape) - num_anchors = anchor_generator.num_anchors - box_dim = anchor_generator.box_dim - assert ( - len(set(num_anchors)) == 1 - ), "Each level must have the same number of anchors per spatial position" - return { - "in_channels": in_channels, - "num_anchors": num_anchors[0], - "box_dim": box_dim, - "conv_dims": cfg.MODEL.RPN.CONV_DIMS, - } - - def forward(self, features: List[torch.Tensor]): - """ - Args: - features (list[Tensor]): list of feature maps - - Returns: - list[Tensor]: A list of L elements. - Element i is a tensor of shape (N, A, Hi, Wi) representing - the predicted objectness logits for all anchors. A is the number of cell anchors. - list[Tensor]: A list of L elements. Element i is a tensor of shape - (N, A*box_dim, Hi, Wi) representing the predicted "deltas" used to transform anchors - to proposals. - """ - pred_objectness_logits = [] - pred_anchor_deltas = [] - for x in features: - t = self.conv(x) - pred_objectness_logits.append(self.objectness_logits(t)) - pred_anchor_deltas.append(self.anchor_deltas(t)) - return pred_objectness_logits, pred_anchor_deltas - - -@PROPOSAL_GENERATOR_REGISTRY.register() -class RPN(nn.Module): - """ - Region Proposal Network, introduced by :paper:`Faster R-CNN`. - """ - - @configurable - def __init__( - self, - *, - in_features: List[str], - head: nn.Module, - anchor_generator: nn.Module, - anchor_matcher: Matcher, - box2box_transform: Box2BoxTransform, - batch_size_per_image: int, - positive_fraction: float, - pre_nms_topk: Tuple[float, float], - post_nms_topk: Tuple[float, float], - nms_thresh: float = 0.7, - min_box_size: float = 0.0, - anchor_boundary_thresh: float = -1.0, - loss_weight: Union[float, Dict[str, float]] = 1.0, - box_reg_loss_type: str = "smooth_l1", - smooth_l1_beta: float = 0.0, - ): - """ - NOTE: this interface is experimental. - - Args: - in_features (list[str]): list of names of input features to use - head (nn.Module): a module that predicts logits and regression deltas - for each level from a list of per-level features - anchor_generator (nn.Module): a module that creates anchors from a - list of features. Usually an instance of :class:`AnchorGenerator` - anchor_matcher (Matcher): label the anchors by matching them with ground truth. 
- box2box_transform (Box2BoxTransform): defines the transform from anchors boxes to - instance boxes - batch_size_per_image (int): number of anchors per image to sample for training - positive_fraction (float): fraction of foreground anchors to sample for training - pre_nms_topk (tuple[float]): (train, test) that represents the - number of top k proposals to select before NMS, in - training and testing. - post_nms_topk (tuple[float]): (train, test) that represents the - number of top k proposals to select after NMS, in - training and testing. - nms_thresh (float): NMS threshold used to de-duplicate the predicted proposals - min_box_size (float): remove proposal boxes with any side smaller than this threshold, - in the unit of input image pixels - anchor_boundary_thresh (float): legacy option - loss_weight (float|dict): weights to use for losses. Can be single float for weighting - all rpn losses together, or a dict of individual weightings. Valid dict keys are: - "loss_rpn_cls" - applied to classification loss - "loss_rpn_loc" - applied to box regression loss - box_reg_loss_type (str): Loss type to use. Supported losses: "smooth_l1", "giou". - smooth_l1_beta (float): beta parameter for the smooth L1 regression loss. Default to - use L1 loss. Only used when `box_reg_loss_type` is "smooth_l1" - """ - super().__init__() - self.in_features = in_features - self.rpn_head = head - self.anchor_generator = anchor_generator - self.anchor_matcher = anchor_matcher - self.box2box_transform = box2box_transform - self.batch_size_per_image = batch_size_per_image - self.positive_fraction = positive_fraction - # Map from self.training state to train/test settings - self.pre_nms_topk = {True: pre_nms_topk[0], False: pre_nms_topk[1]} - self.post_nms_topk = {True: post_nms_topk[0], False: post_nms_topk[1]} - self.nms_thresh = nms_thresh - self.min_box_size = float(min_box_size) - self.anchor_boundary_thresh = anchor_boundary_thresh - if isinstance(loss_weight, float): - loss_weight = {"loss_rpn_cls": loss_weight, "loss_rpn_loc": loss_weight} - self.loss_weight = loss_weight - self.box_reg_loss_type = box_reg_loss_type - self.smooth_l1_beta = smooth_l1_beta - - @classmethod - def from_config(cls, cfg, input_shape: Dict[str, ShapeSpec]): - in_features = cfg.MODEL.RPN.IN_FEATURES - ret = { - "in_features": in_features, - "min_box_size": cfg.MODEL.PROPOSAL_GENERATOR.MIN_SIZE, - "nms_thresh": cfg.MODEL.RPN.NMS_THRESH, - "batch_size_per_image": cfg.MODEL.RPN.BATCH_SIZE_PER_IMAGE, - "positive_fraction": cfg.MODEL.RPN.POSITIVE_FRACTION, - "loss_weight": { - "loss_rpn_cls": cfg.MODEL.RPN.LOSS_WEIGHT, - "loss_rpn_loc": cfg.MODEL.RPN.BBOX_REG_LOSS_WEIGHT * cfg.MODEL.RPN.LOSS_WEIGHT, - }, - "anchor_boundary_thresh": cfg.MODEL.RPN.BOUNDARY_THRESH, - "box2box_transform": Box2BoxTransform(weights=cfg.MODEL.RPN.BBOX_REG_WEIGHTS), - "box_reg_loss_type": cfg.MODEL.RPN.BBOX_REG_LOSS_TYPE, - "smooth_l1_beta": cfg.MODEL.RPN.SMOOTH_L1_BETA, - } - - ret["pre_nms_topk"] = (cfg.MODEL.RPN.PRE_NMS_TOPK_TRAIN, cfg.MODEL.RPN.PRE_NMS_TOPK_TEST) - ret["post_nms_topk"] = (cfg.MODEL.RPN.POST_NMS_TOPK_TRAIN, cfg.MODEL.RPN.POST_NMS_TOPK_TEST) - - ret["anchor_generator"] = build_anchor_generator(cfg, [input_shape[f] for f in in_features]) - ret["anchor_matcher"] = Matcher( - cfg.MODEL.RPN.IOU_THRESHOLDS, cfg.MODEL.RPN.IOU_LABELS, allow_low_quality_matches=True - ) - ret["head"] = build_rpn_head(cfg, [input_shape[f] for f in in_features]) - return ret - - def _subsample_labels(self, label): - """ - Randomly sample a subset of positive and 
negative examples, and overwrite - the label vector to the ignore value (-1) for all elements that are not - included in the sample. - - Args: - labels (Tensor): a vector of -1, 0, 1. Will be modified in-place and returned. - """ - pos_idx, neg_idx = subsample_labels( - label, self.batch_size_per_image, self.positive_fraction, 0 - ) - # Fill with the ignore label (-1), then set positive and negative labels - label.fill_(-1) - label.scatter_(0, pos_idx, 1) - label.scatter_(0, neg_idx, 0) - return label - - @torch.jit.unused - @torch.no_grad() - def label_and_sample_anchors( - self, anchors: List[Boxes], gt_instances: List[Instances] - ) -> Tuple[List[torch.Tensor], List[torch.Tensor]]: - """ - Args: - anchors (list[Boxes]): anchors for each feature map. - gt_instances: the ground-truth instances for each image. - - Returns: - list[Tensor]: - List of #img tensors. i-th element is a vector of labels whose length is - the total number of anchors across all feature maps R = sum(Hi * Wi * A). - Label values are in {-1, 0, 1}, with meanings: -1 = ignore; 0 = negative - class; 1 = positive class. - list[Tensor]: - i-th element is a Rx4 tensor. The values are the matched gt boxes for each - anchor. Values are undefined for those anchors not labeled as 1. - """ - anchors = Boxes.cat(anchors) - - gt_boxes = [x.gt_boxes for x in gt_instances] - image_sizes = [x.image_size for x in gt_instances] - del gt_instances - - gt_labels = [] - matched_gt_boxes = [] - for image_size_i, gt_boxes_i in zip(image_sizes, gt_boxes): - """ - image_size_i: (h, w) for the i-th image - gt_boxes_i: ground-truth boxes for i-th image - """ - - match_quality_matrix = retry_if_cuda_oom(pairwise_iou)(gt_boxes_i, anchors) - matched_idxs, gt_labels_i = retry_if_cuda_oom(self.anchor_matcher)(match_quality_matrix) - # Matching is memory-expensive and may result in CPU tensors. But the result is small - gt_labels_i = gt_labels_i.to(device=gt_boxes_i.device) - del match_quality_matrix - - if self.anchor_boundary_thresh >= 0: - # Discard anchors that go out of the boundaries of the image - # NOTE: This is legacy functionality that is turned off by default in Detectron2 - anchors_inside_image = anchors.inside_box(image_size_i, self.anchor_boundary_thresh) - gt_labels_i[~anchors_inside_image] = -1 - - # A vector of labels (-1, 0, 1) for each anchor - gt_labels_i = self._subsample_labels(gt_labels_i) - - if len(gt_boxes_i) == 0: - # These values won't be used anyway since the anchor is labeled as background - matched_gt_boxes_i = torch.zeros_like(anchors.tensor) - else: - # TODO wasted indexing computation for ignored boxes - matched_gt_boxes_i = gt_boxes_i[matched_idxs].tensor - - gt_labels.append(gt_labels_i) # N,AHW - matched_gt_boxes.append(matched_gt_boxes_i) - return gt_labels, matched_gt_boxes - - @torch.jit.unused - def losses( - self, - anchors: List[Boxes], - pred_objectness_logits: List[torch.Tensor], - gt_labels: List[torch.Tensor], - pred_anchor_deltas: List[torch.Tensor], - gt_boxes: List[torch.Tensor], - ) -> Dict[str, torch.Tensor]: - """ - Return the losses from a set of RPN predictions and their associated ground-truth. - - Args: - anchors (list[Boxes or RotatedBoxes]): anchors for each feature map, each - has shape (Hi*Wi*A, B), where B is box dimension (4 or 5). - pred_objectness_logits (list[Tensor]): A list of L elements. - Element i is a tensor of shape (N, Hi*Wi*A) representing - the predicted objectness logits for all anchors. - gt_labels (list[Tensor]): Output of :meth:`label_and_sample_anchors`. 
- pred_anchor_deltas (list[Tensor]): A list of L elements. Element i is a tensor of shape - (N, Hi*Wi*A, 4 or 5) representing the predicted "deltas" used to transform anchors - to proposals. - gt_boxes (list[Tensor]): Output of :meth:`label_and_sample_anchors`. - - Returns: - dict[loss name -> loss value]: A dict mapping from loss name to loss value. - Loss names are: `loss_rpn_cls` for objectness classification and - `loss_rpn_loc` for proposal localization. - """ - num_images = len(gt_labels) - gt_labels = torch.stack(gt_labels) # (N, sum(Hi*Wi*Ai)) - - # Log the number of positive/negative anchors per-image that's used in training - pos_mask = gt_labels == 1 - num_pos_anchors = pos_mask.sum().item() - num_neg_anchors = (gt_labels == 0).sum().item() - storage = get_event_storage() - storage.put_scalar("rpn/num_pos_anchors", num_pos_anchors / num_images) - storage.put_scalar("rpn/num_neg_anchors", num_neg_anchors / num_images) - - localization_loss = _dense_box_regression_loss( - anchors, - self.box2box_transform, - pred_anchor_deltas, - gt_boxes, - pos_mask, - box_reg_loss_type=self.box_reg_loss_type, - smooth_l1_beta=self.smooth_l1_beta, - ) - - valid_mask = gt_labels >= 0 - objectness_loss = F.binary_cross_entropy_with_logits( - cat(pred_objectness_logits, dim=1)[valid_mask], - gt_labels[valid_mask].to(torch.float32), - reduction="sum", - ) - normalizer = self.batch_size_per_image * num_images - losses = { - "loss_rpn_cls": objectness_loss / normalizer, - # The original Faster R-CNN paper uses a slightly different normalizer - # for loc loss. But it doesn't matter in practice - "loss_rpn_loc": localization_loss / normalizer, - } - losses = {k: v * self.loss_weight.get(k, 1.0) for k, v in losses.items()} - return losses - - def forward( - self, - images: ImageList, - features: Dict[str, torch.Tensor], - gt_instances: Optional[List[Instances]] = None, - ): - """ - Args: - images (ImageList): input images of length `N` - features (dict[str, Tensor]): input data as a mapping from feature - map name to tensor. Axis 0 represents the number of images `N` in - the input data; axes 1-3 are channels, height, and width, which may - vary between feature maps (e.g., if a feature pyramid is used). - gt_instances (list[Instances], optional): a length `N` list of `Instances`s. - Each `Instances` stores ground-truth instances for the corresponding image. - - Returns: - proposals: list[Instances]: contains fields "proposal_boxes", "objectness_logits" - loss: dict[Tensor] or None - """ - features = [features[f] for f in self.in_features] - anchors = self.anchor_generator(features) - - pred_objectness_logits, pred_anchor_deltas = self.rpn_head(features) - # Transpose the Hi*Wi*A dimension to the middle: - pred_objectness_logits = [ - # (N, A, Hi, Wi) -> (N, Hi, Wi, A) -> (N, Hi*Wi*A) - score.permute(0, 2, 3, 1).flatten(1) - for score in pred_objectness_logits - ] - pred_anchor_deltas = [ - # (N, A*B, Hi, Wi) -> (N, A, B, Hi, Wi) -> (N, Hi, Wi, A, B) -> (N, Hi*Wi*A, B) - x.view(x.shape[0], -1, self.anchor_generator.box_dim, x.shape[-2], x.shape[-1]) - .permute(0, 3, 4, 1, 2) - .flatten(1, -2) - for x in pred_anchor_deltas - ] - - if self.training: - assert gt_instances is not None, "RPN requires gt_instances in training!" 
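# label_and_sample_anchors (next line) returns one label vector per image over
# all R = sum(Hi*Wi*A) anchors, with values in {-1: ignore, 0: negative, 1: positive};
# after subsampling, at most batch_size_per_image entries are non-ignore, and
# gt_boxes[i] is an (R, 4) tensor that is only meaningful where the label is 1.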
- gt_labels, gt_boxes = self.label_and_sample_anchors(anchors, gt_instances) - losses = self.losses( - anchors, pred_objectness_logits, gt_labels, pred_anchor_deltas, gt_boxes - ) - else: - losses = {} - proposals = self.predict_proposals( - anchors, pred_objectness_logits, pred_anchor_deltas, images.image_sizes - ) - return proposals, losses - - def predict_proposals( - self, - anchors: List[Boxes], - pred_objectness_logits: List[torch.Tensor], - pred_anchor_deltas: List[torch.Tensor], - image_sizes: List[Tuple[int, int]], - ): - """ - Decode all the predicted box regression deltas to proposals. Find the top proposals - by applying NMS and removing boxes that are too small. - - Returns: - proposals (list[Instances]): list of N Instances. The i-th Instances - stores post_nms_topk object proposals for image i, sorted by their - objectness score in descending order. - """ - # The proposals are treated as fixed for joint training with roi heads. - # This approach ignores the derivative w.r.t. the proposal boxes’ coordinates that - # are also network responses. - with torch.no_grad(): - pred_proposals = self._decode_proposals(anchors, pred_anchor_deltas) - return find_top_rpn_proposals( - pred_proposals, - pred_objectness_logits, - image_sizes, - self.nms_thresh, - self.pre_nms_topk[self.training], - self.post_nms_topk[self.training], - self.min_box_size, - self.training, - ) - - def _decode_proposals(self, anchors: List[Boxes], pred_anchor_deltas: List[torch.Tensor]): - """ - Transform anchors into proposals by applying the predicted anchor deltas. - - Returns: - proposals (list[Tensor]): A list of L tensors. Tensor i has shape - (N, Hi*Wi*A, B) - """ - N = pred_anchor_deltas[0].shape[0] - proposals = [] - # For each feature map - for anchors_i, pred_anchor_deltas_i in zip(anchors, pred_anchor_deltas): - B = anchors_i.tensor.size(1) - pred_anchor_deltas_i = pred_anchor_deltas_i.reshape(-1, B) - # Expand anchors to shape (N*Hi*Wi*A, B) - anchors_i = anchors_i.tensor.unsqueeze(0).expand(N, -1, -1).reshape(-1, B) - proposals_i = self.box2box_transform.apply_deltas(pred_anchor_deltas_i, anchors_i) - # Append feature map proposals with shape (N, Hi*Wi*A, B) - proposals.append(proposals_i.view(N, -1, B)) - return proposals diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/tests/config/test_lazy_config.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/tests/config/test_lazy_config.py deleted file mode 100644 index 6ff5b6dc117744a9d978e0aff324bddeb496409b..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/tests/config/test_lazy_config.py +++ /dev/null @@ -1,79 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
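As a reference for the RPN code above, here is a minimal sketch of the (dx, dy, dw, dh) box decoding that `_decode_proposals` delegates to `Box2BoxTransform.apply_deltas`, assuming XYXY anchors and unit regression weights; Detectron2's actual implementation additionally applies per-coordinate weights and clamps dw/dh before the exp.

import torch

def apply_deltas_sketch(deltas: torch.Tensor, anchors: torch.Tensor) -> torch.Tensor:
    # anchors: (R, 4) XYXY boxes; deltas: (R, 4) as (dx, dy, dw, dh)
    widths = anchors[:, 2] - anchors[:, 0]
    heights = anchors[:, 3] - anchors[:, 1]
    ctr_x = anchors[:, 0] + 0.5 * widths
    ctr_y = anchors[:, 1] + 0.5 * heights
    # Shift centers by a fraction of the anchor size, scale sides exponentially
    pred_ctr_x = deltas[:, 0] * widths + ctr_x
    pred_ctr_y = deltas[:, 1] * heights + ctr_y
    pred_w = torch.exp(deltas[:, 2]) * widths
    pred_h = torch.exp(deltas[:, 3]) * heights
    return torch.stack(
        [pred_ctr_x - 0.5 * pred_w, pred_ctr_y - 0.5 * pred_h,
         pred_ctr_x + 0.5 * pred_w, pred_ctr_y + 0.5 * pred_h], dim=1)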
-import os -import unittest -import tempfile -from itertools import count - -from detectron2.config import LazyConfig, LazyCall as L -from omegaconf import DictConfig - - -class TestLazyPythonConfig(unittest.TestCase): - def setUp(self): - self.root_filename = os.path.join(os.path.dirname(__file__), "root_cfg.py") - - def test_load(self): - cfg = LazyConfig.load(self.root_filename) - - self.assertEqual(cfg.dir1a_dict.a, "modified") - self.assertEqual(cfg.dir1b_dict.a, 1) - self.assertEqual(cfg.lazyobj.x, "base_a_1") - - cfg.lazyobj.x = "new_x" - # reload - cfg = LazyConfig.load(self.root_filename) - self.assertEqual(cfg.lazyobj.x, "base_a_1") - - def test_save_load(self): - cfg = LazyConfig.load(self.root_filename) - with tempfile.TemporaryDirectory(prefix="detectron2") as d: - fname = os.path.join(d, "test_config.yaml") - LazyConfig.save(cfg, fname) - cfg2 = LazyConfig.load(fname) - - self.assertEqual(cfg2.lazyobj._target_, "itertools.count") - self.assertEqual(cfg.lazyobj._target_, count) - cfg2.lazyobj.pop("_target_") - cfg.lazyobj.pop("_target_") - # the rest are equal - self.assertEqual(cfg, cfg2) - - def test_failed_save(self): - cfg = DictConfig({"x": lambda: 3}, flags={"allow_objects": True}) - with tempfile.TemporaryDirectory(prefix="detectron2") as d: - fname = os.path.join(d, "test_config.yaml") - LazyConfig.save(cfg, fname) - self.assertTrue(os.path.exists(fname)) - self.assertTrue(os.path.exists(fname + ".pkl")) - - def test_overrides(self): - cfg = LazyConfig.load(self.root_filename) - LazyConfig.apply_overrides(cfg, ["lazyobj.x=123", 'dir1b_dict.a="123"']) - self.assertEqual(cfg.dir1b_dict.a, "123") - self.assertEqual(cfg.lazyobj.x, 123) - - def test_invalid_overrides(self): - cfg = LazyConfig.load(self.root_filename) - with self.assertRaises(KeyError): - LazyConfig.apply_overrides(cfg, ["lazyobj.x.xxx=123"]) - - def test_to_py(self): - cfg = LazyConfig.load(self.root_filename) - cfg.lazyobj.x = {"a": 1, "b": 2, "c": L(count)(x={"r": "a", "s": 2.4, "t": [1, 2, 3, "z"]})} - cfg.list = ["a", 1, "b", 3.2] - py_str = LazyConfig.to_py(cfg) - expected = """cfg.dir1a_dict.a = "modified" -cfg.dir1a_dict.b = 2 -cfg.dir1b_dict.a = 1 -cfg.dir1b_dict.b = 2 -cfg.lazyobj = itertools.count( - x={ - "a": 1, - "b": 2, - "c": itertools.count(x={"r": "a", "s": 2.4, "t": [1, 2, 3, "z"]}), - }, - y="base_a_1_from_b", -) -cfg.list = ["a", 1, "b", 3.2] -""" - self.assertEqual(py_str, expected) diff --git a/spaces/Tuana/find-the-animal/utils/config.py b/spaces/Tuana/find-the-animal/utils/config.py deleted file mode 100644 index 2017963f8c761a645cfcc452001690f5979b78d7..0000000000000000000000000000000000000000 --- a/spaces/Tuana/find-the-animal/utils/config.py +++ /dev/null @@ -1 +0,0 @@ -INDEX_DIR = "data/index" \ No newline at end of file diff --git a/spaces/Vern0n/pls_work/Dockerfile b/spaces/Vern0n/pls_work/Dockerfile deleted file mode 100644 index 6c01c09373883afcb4ea34ae2d316cd596e1737b..0000000000000000000000000000000000000000 --- a/spaces/Vern0n/pls_work/Dockerfile +++ /dev/null @@ -1,21 +0,0 @@ -FROM node:18-bullseye-slim - -RUN apt-get update && \ - -apt-get install -y git - -RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app - -WORKDIR /app - -RUN npm install - -COPY Dockerfile greeting.md* .env* ./ - -RUN npm run build - -EXPOSE 7860 - -ENV NODE_ENV=production - -CMD [ "npm", "start" ] \ No newline at end of file diff --git a/spaces/Vinnybustacap/Gryphe-MythoLogic-13b/app.py b/spaces/Vinnybustacap/Gryphe-MythoLogic-13b/app.py deleted file mode 100644 index 
366af980d86a1b9d026ed59fe4d987a4ee0e61c2..0000000000000000000000000000000000000000 --- a/spaces/Vinnybustacap/Gryphe-MythoLogic-13b/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/Gryphe/MythoLogic-13b").launch() \ No newline at end of file diff --git a/spaces/XzJosh/Nana7mi-Bert-VITS2/README.md b/spaces/XzJosh/Nana7mi-Bert-VITS2/README.md deleted file mode 100644 index af1d898ea1f7d6675042e50d5874d5fcec8744c8..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Nana7mi-Bert-VITS2/README.md +++ /dev/null @@ -1,5 +0,0 @@ ---- -license: mit -sdk: gradio -title: AI七海 ---- \ No newline at end of file diff --git a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/Waifu2x/utils/__init__.py b/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/Waifu2x/utils/__init__.py deleted file mode 100644 index fb1623a14865e1d1b1e79275a3d5595642f92d9b..0000000000000000000000000000000000000000 --- a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/Waifu2x/utils/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -# -*- coding: utf-8 -*- -# file: __init__.py -# time: 05/12/2022 -# author: yangheng -# github: https://github.com/yangheng95 -# huggingface: https://huggingface.co/yangheng -# google scholar: https://scholar.google.com/citations?user=NPq5a_0AAAAJ&hl=en -# Copyright (C) 2021. All Rights Reserved. diff --git a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/pipelines/ddpm/__init__.py b/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/pipelines/ddpm/__init__.py deleted file mode 100644 index 8889bdae1224e91916e0f8454bafba0ee566f3b9..0000000000000000000000000000000000000000 --- a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/pipelines/ddpm/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -# flake8: noqa -from .pipeline_ddpm import DDPMPipeline diff --git a/spaces/YotamNitzan/domain-expansion/README.md b/spaces/YotamNitzan/domain-expansion/README.md deleted file mode 100644 index 9dc5ab00f325c2428c4c6fdac3ce3c7526bede3a..0000000000000000000000000000000000000000 --- a/spaces/YotamNitzan/domain-expansion/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Domain Expansion -emoji: 👁 -colorFrom: pink -colorTo: yellow -sdk: docker -pinned: false -tags: -- making-demos -duplicated_from: alvanlii/domain-expansion ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/YuAnthony/Audio-Caption/data_handling/clotho_test_set.py b/spaces/YuAnthony/Audio-Caption/data_handling/clotho_test_set.py deleted file mode 100644 index c272a11b3aed808112c4e9b94ba7c6ba9d3fcfd9..0000000000000000000000000000000000000000 --- a/spaces/YuAnthony/Audio-Caption/data_handling/clotho_test_set.py +++ /dev/null @@ -1,17 +0,0 @@ -import glob -import numpy as np -import os -from torch.utils.data import Dataset - - -class ClothoTestset(Dataset): - def __init__(self, data_dir): - super(ClothoTestset, self).__init__() - self.data_dir = data_dir - self.data = glob.glob(f'{data_dir}/*.npy') - - def __len__(self): - return len(self.data) - - def __getitem__(self, item): # return: mel, filename (with out extension) - return np.load(self.data[item]), os.path.splitext(os.path.basename(self.data[item]))[0] \ No newline at end of file diff --git a/spaces/Yusin/ChatGPT-Speech/text/mandarin.py b/spaces/Yusin/ChatGPT-Speech/text/mandarin.py deleted file mode 100644 index 8bc31aea94e1abe111f9bb78c878c1c71e55d4ba..0000000000000000000000000000000000000000 --- a/spaces/Yusin/ChatGPT-Speech/text/mandarin.py +++ 
/dev/null @@ -1,170 +0,0 @@ -import os -import re -import sys - -import jieba -import cn2an -import logging -from pypinyin import lazy_pinyin, BOPOMOFO - -# logging.getLogger('jieba').setLevel(logging.WARNING) -# jieba.set_dictionary(os.path.dirname(sys.argv[0]) + '/jieba/dict.txt') - -# List of (Latin alphabet, bopomofo) pairs: -_latin_to_bopomofo = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('a', 'ㄟˉ'), - ('b', 'ㄅㄧˋ'), - ('c', 'ㄙㄧˉ'), - ('d', 'ㄉㄧˋ'), - ('e', 'ㄧˋ'), - ('f', 'ㄝˊㄈㄨˋ'), - ('g', 'ㄐㄧˋ'), - ('h', 'ㄝˇㄑㄩˋ'), - ('i', 'ㄞˋ'), - ('j', 'ㄐㄟˋ'), - ('k', 'ㄎㄟˋ'), - ('l', 'ㄝˊㄛˋ'), - ('m', 'ㄝˊㄇㄨˋ'), - ('n', 'ㄣˉ'), - ('o', 'ㄡˉ'), - ('p', 'ㄆㄧˉ'), - ('q', 'ㄎㄧㄡˉ'), - ('r', 'ㄚˋ'), - ('s', 'ㄝˊㄙˋ'), - ('t', 'ㄊㄧˋ'), - ('u', 'ㄧㄡˉ'), - ('v', 'ㄨㄧˉ'), - ('w', 'ㄉㄚˋㄅㄨˋㄌㄧㄡˋ'), - ('x', 'ㄝˉㄎㄨˋㄙˋ'), - ('y', 'ㄨㄞˋ'), - ('z', 'ㄗㄟˋ') -]] - -# List of (bopomofo, romaji) pairs: -_bopomofo_to_romaji = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ㄅㄛ', 'p⁼wo'), - ('ㄆㄛ', 'pʰwo'), - ('ㄇㄛ', 'mwo'), - ('ㄈㄛ', 'fwo'), - ('ㄅ', 'p⁼'), - ('ㄆ', 'pʰ'), - ('ㄇ', 'm'), - ('ㄈ', 'f'), - ('ㄉ', 't⁼'), - ('ㄊ', 'tʰ'), - ('ㄋ', 'n'), - ('ㄌ', 'l'), - ('ㄍ', 'k⁼'), - ('ㄎ', 'kʰ'), - ('ㄏ', 'h'), - ('ㄐ', 'ʧ⁼'), - ('ㄑ', 'ʧʰ'), - ('ㄒ', 'ʃ'), - ('ㄓ', 'ʦ`⁼'), - ('ㄔ', 'ʦ`ʰ'), - ('ㄕ', 's`'), - ('ㄖ', 'ɹ`'), - ('ㄗ', 'ʦ⁼'), - ('ㄘ', 'ʦʰ'), - ('ㄙ', 's'), - ('ㄚ', 'a'), - ('ㄛ', 'o'), - ('ㄜ', 'ə'), - ('ㄝ', 'e'), - ('ㄞ', 'ai'), - ('ㄟ', 'ei'), - ('ㄠ', 'au'), - ('ㄡ', 'ou'), - ('ㄧㄢ', 'yeNN'), - ('ㄢ', 'aNN'), - ('ㄧㄣ', 'iNN'), - ('ㄣ', 'əNN'), - ('ㄤ', 'aNg'), - ('ㄧㄥ', 'iNg'), - ('ㄨㄥ', 'uNg'), - ('ㄩㄥ', 'yuNg'), - ('ㄥ', 'əNg'), - ('ㄦ', 'əɻ'), - ('ㄧ', 'i'), - ('ㄨ', 'u'), - ('ㄩ', 'ɥ'), - ('ˉ', '→'), - ('ˊ', '↑'), - ('ˇ', '↓↑'), - ('ˋ', '↓'), - ('˙', ''), - (',', ','), - ('。', '.'), - ('!', '!'), - ('?', '?'), - ('—', '-') -]] - -# List of (romaji, ipa) pairs: -_romaji_to_ipa = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('ʃy', 'ʃ'), - ('ʧʰy', 'ʧʰ'), - ('ʧ⁼y', 'ʧ⁼'), - ('NN', 'n'), - ('Ng', 'ŋ'), - ('y', 'j'), - ('h', 'x') -]] - - -def number_to_chinese(text): - numbers = re.findall(r'\d+(?:\.?\d+)?', text) - for number in numbers: - text = text.replace(number, cn2an.an2cn(number), 1) - return text - - -def chinese_to_bopomofo(text): - text = text.replace('、', ',').replace(';', ',').replace(':', ',') - words = jieba.lcut(text, cut_all=False) - text = '' - for word in words: - bopomofos = lazy_pinyin(word, BOPOMOFO) - if not re.search('[\u4e00-\u9fff]', word): - text += word - continue - for i in range(len(bopomofos)): - if re.match('[\u3105-\u3129]', bopomofos[i][-1]): - bopomofos[i] += 'ˉ' - if text != '': - text += ' ' - text += ''.join(bopomofos) - return text - - -def latin_to_bopomofo(text): - for regex, replacement in _latin_to_bopomofo: - text = re.sub(regex, replacement, text) - return text - - -def bopomofo_to_romaji(text): - for regex, replacement in _bopomofo_to_romaji: - text = re.sub(regex, replacement, text) - return text - - -def chinese_to_romaji(text): - text = number_to_chinese(text) - text = chinese_to_bopomofo(text) - text = latin_to_bopomofo(text) - text = bopomofo_to_romaji(text) - text = re.sub('i[aoe]', lambda x: 'y' + x.group(0)[1:], text) - text = re.sub('u[aoəe]', lambda x: 'w' + x.group(0)[1:], text) - text = re.sub('([ʦsɹ]`[⁼ʰ]?)([→↓↑ ]+|$)', lambda x: x.group(1) + - 'ɹ`' + x.group(2), text).replace('ɻ', 'ɹ`') - text = re.sub('([ʦs][⁼ʰ]?)([→↓↑ ]+|$)', - lambda x: x.group(1) + 'ɹ' + x.group(2), text) - return text - - -def chinese_to_lazy_ipa(text): - text = chinese_to_romaji(text) - for regex, 
replacement in _romaji_to_ipa: - text = re.sub(regex, replacement, text) - return text diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/builder.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/builder.py deleted file mode 100644 index 81c927e507a7c1625ffb114de10e93c94927af25..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/builder.py +++ /dev/null @@ -1,77 +0,0 @@ -import warnings - -from mmcv.utils import Registry, build_from_cfg -from torch import nn - -BACKBONES = Registry('backbone') -NECKS = Registry('neck') -ROI_EXTRACTORS = Registry('roi_extractor') -SHARED_HEADS = Registry('shared_head') -HEADS = Registry('head') -LOSSES = Registry('loss') -DETECTORS = Registry('detector') - - -def build(cfg, registry, default_args=None): - """Build a module. - - Args: - cfg (dict, list[dict]): The config of modules, which is either a dict - or a list of configs. - registry (:obj:`Registry`): A registry the module belongs to. - default_args (dict, optional): Default arguments to build the module. - Defaults to None. - - Returns: - nn.Module: A built nn module. - """ - if isinstance(cfg, list): - modules = [ - build_from_cfg(cfg_, registry, default_args) for cfg_ in cfg - ] - return nn.Sequential(*modules) - else: - return build_from_cfg(cfg, registry, default_args) - - -def build_backbone(cfg): - """Build backbone.""" - return build(cfg, BACKBONES) - - -def build_neck(cfg): - """Build neck.""" - return build(cfg, NECKS) - - -def build_roi_extractor(cfg): - """Build roi extractor.""" - return build(cfg, ROI_EXTRACTORS) - - -def build_shared_head(cfg): - """Build shared head.""" - return build(cfg, SHARED_HEADS) - - -def build_head(cfg): - """Build head.""" - return build(cfg, HEADS) - - -def build_loss(cfg): - """Build loss.""" - return build(cfg, LOSSES) - - -def build_detector(cfg, train_cfg=None, test_cfg=None): - """Build detector.""" - if train_cfg is not None or test_cfg is not None: - warnings.warn( - 'train_cfg and test_cfg are deprecated, ' - 'please specify them in model', UserWarning) - assert cfg.get('train_cfg') is None or train_cfg is None, \ - 'train_cfg specified in both outer field and model field ' - assert cfg.get('test_cfg') is None or test_cfg is None, \ - 'test_cfg specified in both outer field and model field ' - return build(cfg, DETECTORS, dict(train_cfg=train_cfg, test_cfg=test_cfg)) diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmseg/core/evaluation/class_names.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmseg/core/evaluation/class_names.py deleted file mode 100644 index ffae816cf980ce4b03e491cc0c4298cb823797e6..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmseg/core/evaluation/class_names.py +++ /dev/null @@ -1,152 +0,0 @@ -import annotator.uniformer.mmcv as mmcv - - -def cityscapes_classes(): - """Cityscapes class names for external use.""" - return [ - 'road', 'sidewalk', 'building', 'wall', 'fence', 'pole', - 'traffic light', 'traffic sign', 'vegetation', 'terrain', 'sky', - 'person', 'rider', 'car', 'truck', 'bus', 'train', 'motorcycle', - 'bicycle' - ] - - -def ade_classes(): - """ADE20K class names for external use.""" - return [ - 'wall', 'building', 'sky', 'floor', 'tree', 'ceiling', 'road', 'bed ', - 'windowpane', 'grass', 'cabinet', 'sidewalk', 'person', 'earth', - 'door', 'table', 'mountain', 'plant', 'curtain', 'chair', 'car', - 'water', 'painting',
'sofa', 'shelf', 'house', 'sea', 'mirror', 'rug', - 'field', 'armchair', 'seat', 'fence', 'desk', 'rock', 'wardrobe', - 'lamp', 'bathtub', 'railing', 'cushion', 'base', 'box', 'column', - 'signboard', 'chest of drawers', 'counter', 'sand', 'sink', - 'skyscraper', 'fireplace', 'refrigerator', 'grandstand', 'path', - 'stairs', 'runway', 'case', 'pool table', 'pillow', 'screen door', - 'stairway', 'river', 'bridge', 'bookcase', 'blind', 'coffee table', - 'toilet', 'flower', 'book', 'hill', 'bench', 'countertop', 'stove', - 'palm', 'kitchen island', 'computer', 'swivel chair', 'boat', 'bar', - 'arcade machine', 'hovel', 'bus', 'towel', 'light', 'truck', 'tower', - 'chandelier', 'awning', 'streetlight', 'booth', 'television receiver', - 'airplane', 'dirt track', 'apparel', 'pole', 'land', 'bannister', - 'escalator', 'ottoman', 'bottle', 'buffet', 'poster', 'stage', 'van', - 'ship', 'fountain', 'conveyer belt', 'canopy', 'washer', 'plaything', - 'swimming pool', 'stool', 'barrel', 'basket', 'waterfall', 'tent', - 'bag', 'minibike', 'cradle', 'oven', 'ball', 'food', 'step', 'tank', - 'trade name', 'microwave', 'pot', 'animal', 'bicycle', 'lake', - 'dishwasher', 'screen', 'blanket', 'sculpture', 'hood', 'sconce', - 'vase', 'traffic light', 'tray', 'ashcan', 'fan', 'pier', 'crt screen', - 'plate', 'monitor', 'bulletin board', 'shower', 'radiator', 'glass', - 'clock', 'flag' - ] - - -def voc_classes(): - """Pascal VOC class names for external use.""" - return [ - 'background', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', - 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse', - 'motorbike', 'person', 'pottedplant', 'sheep', 'sofa', 'train', - 'tvmonitor' - ] - - -def cityscapes_palette(): - """Cityscapes palette for external use.""" - return [[128, 64, 128], [244, 35, 232], [70, 70, 70], [102, 102, 156], - [190, 153, 153], [153, 153, 153], [250, 170, 30], [220, 220, 0], - [107, 142, 35], [152, 251, 152], [70, 130, 180], [220, 20, 60], - [255, 0, 0], [0, 0, 142], [0, 0, 70], [0, 60, 100], [0, 80, 100], - [0, 0, 230], [119, 11, 32]] - - -def ade_palette(): - """ADE20K palette for external use.""" - return [[120, 120, 120], [180, 120, 120], [6, 230, 230], [80, 50, 50], - [4, 200, 3], [120, 120, 80], [140, 140, 140], [204, 5, 255], - [230, 230, 230], [4, 250, 7], [224, 5, 255], [235, 255, 7], - [150, 5, 61], [120, 120, 70], [8, 255, 51], [255, 6, 82], - [143, 255, 140], [204, 255, 4], [255, 51, 7], [204, 70, 3], - [0, 102, 200], [61, 230, 250], [255, 6, 51], [11, 102, 255], - [255, 7, 71], [255, 9, 224], [9, 7, 230], [220, 220, 220], - [255, 9, 92], [112, 9, 255], [8, 255, 214], [7, 255, 224], - [255, 184, 6], [10, 255, 71], [255, 41, 10], [7, 255, 255], - [224, 255, 8], [102, 8, 255], [255, 61, 6], [255, 194, 7], - [255, 122, 8], [0, 255, 20], [255, 8, 41], [255, 5, 153], - [6, 51, 255], [235, 12, 255], [160, 150, 20], [0, 163, 255], - [140, 140, 140], [250, 10, 15], [20, 255, 0], [31, 255, 0], - [255, 31, 0], [255, 224, 0], [153, 255, 0], [0, 0, 255], - [255, 71, 0], [0, 235, 255], [0, 173, 255], [31, 0, 255], - [11, 200, 200], [255, 82, 0], [0, 255, 245], [0, 61, 255], - [0, 255, 112], [0, 255, 133], [255, 0, 0], [255, 163, 0], - [255, 102, 0], [194, 255, 0], [0, 143, 255], [51, 255, 0], - [0, 82, 255], [0, 255, 41], [0, 255, 173], [10, 0, 255], - [173, 255, 0], [0, 255, 153], [255, 92, 0], [255, 0, 255], - [255, 0, 245], [255, 0, 102], [255, 173, 0], [255, 0, 20], - [255, 184, 184], [0, 31, 255], [0, 255, 61], [0, 71, 255], - [255, 0, 204], [0, 255, 194], [0, 255, 82], [0, 10, 
255], - [0, 112, 255], [51, 0, 255], [0, 194, 255], [0, 122, 255], - [0, 255, 163], [255, 153, 0], [0, 255, 10], [255, 112, 0], - [143, 255, 0], [82, 0, 255], [163, 255, 0], [255, 235, 0], - [8, 184, 170], [133, 0, 255], [0, 255, 92], [184, 0, 255], - [255, 0, 31], [0, 184, 255], [0, 214, 255], [255, 0, 112], - [92, 255, 0], [0, 224, 255], [112, 224, 255], [70, 184, 160], - [163, 0, 255], [153, 0, 255], [71, 255, 0], [255, 0, 163], - [255, 204, 0], [255, 0, 143], [0, 255, 235], [133, 255, 0], - [255, 0, 235], [245, 0, 255], [255, 0, 122], [255, 245, 0], - [10, 190, 212], [214, 255, 0], [0, 204, 255], [20, 0, 255], - [255, 255, 0], [0, 153, 255], [0, 41, 255], [0, 255, 204], - [41, 0, 255], [41, 255, 0], [173, 0, 255], [0, 245, 255], - [71, 0, 255], [122, 0, 255], [0, 255, 184], [0, 92, 255], - [184, 255, 0], [0, 133, 255], [255, 214, 0], [25, 194, 194], - [102, 255, 0], [92, 0, 255]] - - -def voc_palette(): - """Pascal VOC palette for external use.""" - return [[0, 0, 0], [128, 0, 0], [0, 128, 0], [128, 128, 0], [0, 0, 128], - [128, 0, 128], [0, 128, 128], [128, 128, 128], [64, 0, 0], - [192, 0, 0], [64, 128, 0], [192, 128, 0], [64, 0, 128], - [192, 0, 128], [64, 128, 128], [192, 128, 128], [0, 64, 0], - [128, 64, 0], [0, 192, 0], [128, 192, 0], [0, 64, 128]] - - -dataset_aliases = { - 'cityscapes': ['cityscapes'], - 'ade': ['ade', 'ade20k'], - 'voc': ['voc', 'pascal_voc', 'voc12', 'voc12aug'] -} - - -def get_classes(dataset): - """Get class names of a dataset.""" - alias2name = {} - for name, aliases in dataset_aliases.items(): - for alias in aliases: - alias2name[alias] = name - - if mmcv.is_str(dataset): - if dataset in alias2name: - labels = eval(alias2name[dataset] + '_classes()') - else: - raise ValueError(f'Unrecognized dataset: {dataset}') - else: - raise TypeError(f'dataset must be a str, but got {type(dataset)}') - return labels - - -def get_palette(dataset): - """Get class palette (RGB) of a dataset.""" - alias2name = {} - for name, aliases in dataset_aliases.items(): - for alias in aliases: - alias2name[alias] = name - - if mmcv.is_str(dataset): - if dataset in alias2name: - labels = eval(alias2name[dataset] + '_palette()') - else: - raise ValueError(f'Unrecognized dataset: {dataset}') - else: - raise TypeError(f'dataset must be a str, but got {type(dataset)}') - return labels diff --git a/spaces/abrar-lohia/text-2-character-anim/VQTrans/GPT_eval_multi.py b/spaces/abrar-lohia/text-2-character-anim/VQTrans/GPT_eval_multi.py deleted file mode 100644 index b5e3ebcb1199e42cf16748e60863b554a0046f00..0000000000000000000000000000000000000000 --- a/spaces/abrar-lohia/text-2-character-anim/VQTrans/GPT_eval_multi.py +++ /dev/null @@ -1,121 +0,0 @@ -import os -import torch -import numpy as np -from torch.utils.tensorboard import SummaryWriter -import json -import clip - -import options.option_transformer as option_trans -import models.vqvae as vqvae -import utils.utils_model as utils_model -import utils.eval_trans as eval_trans -from dataset import dataset_TM_eval -import models.t2m_trans as trans -from options.get_eval_option import get_opt -from models.evaluator_wrapper import EvaluatorModelWrapper -import warnings -warnings.filterwarnings('ignore') - -##### ---- Exp dirs ---- ##### -args = option_trans.get_args_parser() -torch.manual_seed(args.seed) - -args.out_dir = os.path.join(args.out_dir, f'{args.exp_name}') -os.makedirs(args.out_dir, exist_ok = True) - -##### ---- Logger ---- ##### -logger = utils_model.get_logger(args.out_dir) -writer = SummaryWriter(args.out_dir)
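For orientation, a hypothetical usage of the alias registry and lookup helpers defined in class_names.py above; the names and counts follow the lists in that file.

names = get_classes('ade20k')   # 'ade20k' aliases to 'ade' -> ade_classes()
palette = get_palette('voc')    # 'voc' -> voc_palette()
assert len(names) == 150        # ADE20K defines 150 classes
assert palette[0] == [0, 0, 0]  # background is black in the VOC palette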
-logger.info(json.dumps(vars(args), indent=4, sort_keys=True)) - -from utils.word_vectorizer import WordVectorizer -w_vectorizer = WordVectorizer('./glove', 'our_vab') -val_loader = dataset_TM_eval.DATALoader(args.dataname, True, 32, w_vectorizer) - -dataset_opt_path = 'checkpoints/kit/Comp_v6_KLD005/opt.txt' if args.dataname == 'kit' else 'checkpoints/t2m/Comp_v6_KLD005/opt.txt' - -wrapper_opt = get_opt(dataset_opt_path, torch.device('cuda')) -eval_wrapper = EvaluatorModelWrapper(wrapper_opt) - -##### ---- Network ---- ##### - -## load clip model and datasets -clip_model, clip_preprocess = clip.load("ViT-B/32", device=torch.device('cuda'), jit=False, download_root='/apdcephfs_cq2/share_1290939/maelyszhang/.cache/clip') # Must set jit=False for training -clip.model.convert_weights(clip_model) # Actually this line is unnecessary since clip by default already on float16 -clip_model.eval() -for p in clip_model.parameters(): - p.requires_grad = False - -net = vqvae.HumanVQVAE(args, ## use args to define different parameters in different quantizers - args.nb_code, - args.code_dim, - args.output_emb_width, - args.down_t, - args.stride_t, - args.width, - args.depth, - args.dilation_growth_rate) - - -trans_encoder = trans.Text2Motion_Transformer(num_vq=args.nb_code, - embed_dim=args.embed_dim_gpt, - clip_dim=args.clip_dim, - block_size=args.block_size, - num_layers=args.num_layers, - n_head=args.n_head_gpt, - drop_out_rate=args.drop_out_rate, - fc_rate=args.ff_rate) - - -print ('loading checkpoint from {}'.format(args.resume_pth)) -ckpt = torch.load(args.resume_pth, map_location='cpu') -net.load_state_dict(ckpt['net'], strict=True) -net.eval() -net.cuda() - -if args.resume_trans is not None: - print ('loading transformer checkpoint from {}'.format(args.resume_trans)) - ckpt = torch.load(args.resume_trans, map_location='cpu') - trans_encoder.load_state_dict(ckpt['trans'], strict=True) -trans_encoder.train() -trans_encoder.cuda() - - -fid = [] -div = [] -top1 = [] -top2 = [] -top3 = [] -matching = [] -multi = [] -repeat_time = 20 - - -for i in range(repeat_time): - best_fid, best_iter, best_div, best_top1, best_top2, best_top3, best_matching, best_multi, writer, logger = eval_trans.evaluation_transformer_test(args.out_dir, val_loader, net, trans_encoder, logger, writer, 0, best_fid=1000, best_iter=0, best_div=100, best_top1=0, best_top2=0, best_top3=0, best_matching=100, best_multi=0, clip_model=clip_model, eval_wrapper=eval_wrapper, draw=False, savegif=False, save=False, savenpy=(i==0)) - fid.append(best_fid) - div.append(best_div) - top1.append(best_top1) - top2.append(best_top2) - top3.append(best_top3) - matching.append(best_matching) - multi.append(best_multi) - -print('final result:') -print('fid: ', sum(fid)/repeat_time) -print('div: ', sum(div)/repeat_time) -print('top1: ', sum(top1)/repeat_time) -print('top2: ', sum(top2)/repeat_time) -print('top3: ', sum(top3)/repeat_time) -print('matching: ', sum(matching)/repeat_time) -print('multi: ', sum(multi)/repeat_time) - -fid = np.array(fid) -div = np.array(div) -top1 = np.array(top1) -top2 = np.array(top2) -top3 = np.array(top3) -matching = np.array(matching) -multi = np.array(multi) -msg_final = f"FID. {np.mean(fid):.3f}, conf. {np.std(fid)*1.96/np.sqrt(repeat_time):.3f}, Diversity. {np.mean(div):.3f}, conf. {np.std(div)*1.96/np.sqrt(repeat_time):.3f}, TOP1. {np.mean(top1):.3f}, conf. {np.std(top1)*1.96/np.sqrt(repeat_time):.3f}, TOP2. {np.mean(top2):.3f}, conf. {np.std(top2)*1.96/np.sqrt(repeat_time):.3f}, TOP3. {np.mean(top3):.3f}, conf. 
{np.std(top3)*1.96/np.sqrt(repeat_time):.3f}, Matching. {np.mean(matching):.3f}, conf. {np.std(matching)*1.96/np.sqrt(repeat_time):.3f}, Multi. {np.mean(multi):.3f}, conf. {np.std(multi)*1.96/np.sqrt(repeat_time):.3f}" -logger.info(msg_final) \ No newline at end of file diff --git a/spaces/abrar-lohia/text-2-character-anim/VQTrans/utils/utils_model.py b/spaces/abrar-lohia/text-2-character-anim/VQTrans/utils/utils_model.py deleted file mode 100644 index b3653a47ddb96f2ba27aae73b4eef8be904e9bf0..0000000000000000000000000000000000000000 --- a/spaces/abrar-lohia/text-2-character-anim/VQTrans/utils/utils_model.py +++ /dev/null @@ -1,66 +0,0 @@ -import numpy as np -import torch -import torch.optim as optim -import logging -import os -import sys - -def getCi(accLog): - - mean = np.mean(accLog) - std = np.std(accLog) - ci95 = 1.96*std/np.sqrt(len(accLog)) - - return mean, ci95 - -def get_logger(out_dir): - logger = logging.getLogger('Exp') - logger.setLevel(logging.INFO) - formatter = logging.Formatter("%(asctime)s %(levelname)s %(message)s") - - file_path = os.path.join(out_dir, "run.log") - file_hdlr = logging.FileHandler(file_path) - file_hdlr.setFormatter(formatter) - - strm_hdlr = logging.StreamHandler(sys.stdout) - strm_hdlr.setFormatter(formatter) - - logger.addHandler(file_hdlr) - logger.addHandler(strm_hdlr) - return logger - -## Optimizer -def initial_optim(decay_option, lr, weight_decay, net, optimizer) : - - if optimizer == 'adamw' : - optimizer_adam_family = optim.AdamW - elif optimizer == 'adam' : - optimizer_adam_family = optim.Adam - if decay_option == 'all': - #optimizer = optimizer_adam_family(net.parameters(), lr=lr, betas=(0.9, 0.999), weight_decay=weight_decay) - optimizer = optimizer_adam_family(net.parameters(), lr=lr, betas=(0.5, 0.9), weight_decay=weight_decay) - - elif decay_option == 'noVQ': - all_params = set(net.parameters()) - no_decay = set([net.vq_layer]) - - decay = all_params - no_decay - optimizer = optimizer_adam_family([ - {'params': list(no_decay), 'weight_decay': 0}, - {'params': list(decay), 'weight_decay' : weight_decay}], lr=lr) - - return optimizer - - -def get_motion_with_trans(motion, velocity) : - ''' - motion : torch.tensor, shape (batch_size, T, 72), with the global translation = 0 - velocity : torch.tensor, shape (batch_size, T, 3), contain the information of velocity = 0 - - ''' - trans = torch.cumsum(velocity, dim=1) - trans = trans - trans[:, :1] ## the first root is initialized at 0 (just for visualization) - trans = trans.repeat((1, 1, 21)) - motion_with_trans = motion + trans - return motion_with_trans - \ No newline at end of file diff --git a/spaces/adirik/stylemc-demo/encoder4editing/datasets/__init__.py b/spaces/adirik/stylemc-demo/encoder4editing/datasets/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/adirik/stylemc-demo/torch_utils/ops/upfirdn2d.cpp b/spaces/adirik/stylemc-demo/torch_utils/ops/upfirdn2d.cpp deleted file mode 100644 index 2d7177fc60040751d20e9a8da0301fa3ab64968a..0000000000000000000000000000000000000000 --- a/spaces/adirik/stylemc-demo/torch_utils/ops/upfirdn2d.cpp +++ /dev/null @@ -1,103 +0,0 @@ -// Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -// -// NVIDIA CORPORATION and its licensors retain all intellectual property -// and proprietary rights in and to this software, related documentation -// and any modifications thereto. 
Any use, reproduction, disclosure or -// distribution of this software and related documentation without an express -// license agreement from NVIDIA CORPORATION is strictly prohibited. - -#include -#include -#include -#include "upfirdn2d.h" - -//------------------------------------------------------------------------ - -static torch::Tensor upfirdn2d(torch::Tensor x, torch::Tensor f, int upx, int upy, int downx, int downy, int padx0, int padx1, int pady0, int pady1, bool flip, float gain) -{ - // Validate arguments. - TORCH_CHECK(x.is_cuda(), "x must reside on CUDA device"); - TORCH_CHECK(f.device() == x.device(), "f must reside on the same device as x"); - TORCH_CHECK(f.dtype() == torch::kFloat, "f must be float32"); - TORCH_CHECK(x.numel() <= INT_MAX, "x is too large"); - TORCH_CHECK(f.numel() <= INT_MAX, "f is too large"); - TORCH_CHECK(x.dim() == 4, "x must be rank 4"); - TORCH_CHECK(f.dim() == 2, "f must be rank 2"); - TORCH_CHECK(f.size(0) >= 1 && f.size(1) >= 1, "f must be at least 1x1"); - TORCH_CHECK(upx >= 1 && upy >= 1, "upsampling factor must be at least 1"); - TORCH_CHECK(downx >= 1 && downy >= 1, "downsampling factor must be at least 1"); - - // Create output tensor. - const at::cuda::OptionalCUDAGuard device_guard(device_of(x)); - int outW = ((int)x.size(3) * upx + padx0 + padx1 - (int)f.size(1) + downx) / downx; - int outH = ((int)x.size(2) * upy + pady0 + pady1 - (int)f.size(0) + downy) / downy; - TORCH_CHECK(outW >= 1 && outH >= 1, "output must be at least 1x1"); - torch::Tensor y = torch::empty({x.size(0), x.size(1), outH, outW}, x.options(), x.suggest_memory_format()); - TORCH_CHECK(y.numel() <= INT_MAX, "output is too large"); - - // Initialize CUDA kernel parameters. - upfirdn2d_kernel_params p; - p.x = x.data_ptr(); - p.f = f.data_ptr(); - p.y = y.data_ptr(); - p.up = make_int2(upx, upy); - p.down = make_int2(downx, downy); - p.pad0 = make_int2(padx0, pady0); - p.flip = (flip) ? 1 : 0; - p.gain = gain; - p.inSize = make_int4((int)x.size(3), (int)x.size(2), (int)x.size(1), (int)x.size(0)); - p.inStride = make_int4((int)x.stride(3), (int)x.stride(2), (int)x.stride(1), (int)x.stride(0)); - p.filterSize = make_int2((int)f.size(1), (int)f.size(0)); - p.filterStride = make_int2((int)f.stride(1), (int)f.stride(0)); - p.outSize = make_int4((int)y.size(3), (int)y.size(2), (int)y.size(1), (int)y.size(0)); - p.outStride = make_int4((int)y.stride(3), (int)y.stride(2), (int)y.stride(1), (int)y.stride(0)); - p.sizeMajor = (p.inStride.z == 1) ? p.inSize.w : p.inSize.w * p.inSize.z; - p.sizeMinor = (p.inStride.z == 1) ? p.inSize.z : 1; - - // Choose CUDA kernel. - upfirdn2d_kernel_spec spec; - AT_DISPATCH_FLOATING_TYPES_AND_HALF(x.scalar_type(), "upfirdn2d_cuda", [&] - { - spec = choose_upfirdn2d_kernel(p); - }); - - // Set looping options. - p.loopMajor = (p.sizeMajor - 1) / 16384 + 1; - p.loopMinor = spec.loopMinor; - p.loopX = spec.loopX; - p.launchMinor = (p.sizeMinor - 1) / p.loopMinor + 1; - p.launchMajor = (p.sizeMajor - 1) / p.loopMajor + 1; - - // Compute grid size. - dim3 blockSize, gridSize; - if (spec.tileOutW < 0) // large - { - blockSize = dim3(4, 32, 1); - gridSize = dim3( - ((p.outSize.y - 1) / blockSize.x + 1) * p.launchMinor, - (p.outSize.x - 1) / (blockSize.y * p.loopX) + 1, - p.launchMajor); - } - else // small - { - blockSize = dim3(256, 1, 1); - gridSize = dim3( - ((p.outSize.y - 1) / spec.tileOutH + 1) * p.launchMinor, - (p.outSize.x - 1) / (spec.tileOutW * p.loopX) + 1, - p.launchMajor); - } - - // Launch CUDA kernel. 
- void* args[] = {&p}; - AT_CUDA_CHECK(cudaLaunchKernel(spec.kernel, gridSize, blockSize, args, 0, at::cuda::getCurrentCUDAStream())); - return y; -} - -//------------------------------------------------------------------------ - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) -{ - m.def("upfirdn2d", &upfirdn2d); -} - -//------------------------------------------------------------------------ diff --git a/spaces/afffffdf/QSign/unidbg-fetch-qsign/bin/unidbg-fetch-qsign.bat b/spaces/afffffdf/QSign/unidbg-fetch-qsign/bin/unidbg-fetch-qsign.bat deleted file mode 100644 index 4e44bab8aa65d16e35e935f1273de2e98ce80cf9..0000000000000000000000000000000000000000 --- a/spaces/afffffdf/QSign/unidbg-fetch-qsign/bin/unidbg-fetch-qsign.bat +++ /dev/null @@ -1,89 +0,0 @@ -@rem -@rem Copyright 2015 the original author or authors. -@rem -@rem Licensed under the Apache License, Version 2.0 (the "License"); -@rem you may not use this file except in compliance with the License. -@rem You may obtain a copy of the License at -@rem -@rem https://www.apache.org/licenses/LICENSE-2.0 -@rem -@rem Unless required by applicable law or agreed to in writing, software -@rem distributed under the License is distributed on an "AS IS" BASIS, -@rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -@rem See the License for the specific language governing permissions and -@rem limitations under the License. -@rem - -@if "%DEBUG%" == "" @echo off -@rem ########################################################################## -@rem -@rem unidbg-fetch-qsign startup script for Windows -@rem -@rem ########################################################################## - -@rem Set local scope for the variables with windows NT shell -if "%OS%"=="Windows_NT" setlocal - -set DIRNAME=%~dp0 -if "%DIRNAME%" == "" set DIRNAME=. -set APP_BASE_NAME=%~n0 -set APP_HOME=%DIRNAME%.. - -@rem Resolve any "." and ".." in APP_HOME to make it shorter. -for %%i in ("%APP_HOME%") do set APP_HOME=%%~fi - -@rem Add default JVM options here. You can also use JAVA_OPTS and UNIDBG_FETCH_QSIGN_OPTS to pass JVM options to this script. -set DEFAULT_JVM_OPTS= - -@rem Find java.exe -if defined JAVA_HOME goto findJavaFromJavaHome - -set JAVA_EXE=java.exe -%JAVA_EXE% -version >NUL 2>&1 -if "%ERRORLEVEL%" == "0" goto execute - -echo. -echo ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH. -echo. -echo Please set the JAVA_HOME variable in your environment to match the -echo location of your Java installation. - -goto fail - -:findJavaFromJavaHome -set JAVA_HOME=%JAVA_HOME:"=% -set JAVA_EXE=%JAVA_HOME%/bin/java.exe - -if exist "%JAVA_EXE%" goto execute - -echo. -echo ERROR: JAVA_HOME is set to an invalid directory: %JAVA_HOME% -echo. -echo Please set the JAVA_HOME variable in your environment to match the -echo location of your Java installation. 
- -goto fail - -:execute -@rem Setup the command line - -set CLASSPATH=%APP_HOME%\lib\unidbg-fetch-qsign-1.1.0.jar;%APP_HOME%\lib\unidbg-fix.jar;%APP_HOME%\lib\ktor-server-content-negotiation-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-serialization-kotlinx-json-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-netty-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-host-common-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-core-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-serialization-kotlinx-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-serialization-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-events-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-websockets-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-http-cio-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-http-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-network-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-utils-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-io-jvm-2.3.1.jar;%APP_HOME%\lib\kotlin-stdlib-jdk8-1.8.22.jar;%APP_HOME%\lib\kotlinx-serialization-json-jvm-1.5.1.jar;%APP_HOME%\lib\kotlinx-serialization-protobuf-jvm-1.5.1.jar;%APP_HOME%\lib\kotlinx-serialization-core-jvm-1.5.1.jar;%APP_HOME%\lib\logback-classic-1.2.11.jar;%APP_HOME%\lib\kotlinx-coroutines-jdk8-1.7.1.jar;%APP_HOME%\lib\kotlinx-coroutines-core-jvm-1.7.1.jar;%APP_HOME%\lib\kotlin-stdlib-jdk7-1.8.22.jar;%APP_HOME%\lib\kotlin-reflect-1.8.10.jar;%APP_HOME%\lib\kotlin-stdlib-1.8.22.jar;%APP_HOME%\lib\slf4j-api-1.7.36.jar;%APP_HOME%\lib\kotlin-stdlib-common-1.8.22.jar;%APP_HOME%\lib\config-1.4.2.jar;%APP_HOME%\lib\jansi-2.4.0.jar;%APP_HOME%\lib\netty-codec-http2-4.1.92.Final.jar;%APP_HOME%\lib\alpn-api-1.1.3.v20160715.jar;%APP_HOME%\lib\netty-transport-native-kqueue-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-native-epoll-4.1.92.Final.jar;%APP_HOME%\lib\logback-core-1.2.11.jar;%APP_HOME%\lib\annotations-23.0.0.jar;%APP_HOME%\lib\netty-codec-http-4.1.92.Final.jar;%APP_HOME%\lib\netty-handler-4.1.92.Final.jar;%APP_HOME%\lib\netty-codec-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-classes-kqueue-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-classes-epoll-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-native-unix-common-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-4.1.92.Final.jar;%APP_HOME%\lib\netty-buffer-4.1.92.Final.jar;%APP_HOME%\lib\netty-resolver-4.1.92.Final.jar;%APP_HOME%\lib\netty-common-4.1.92.Final.jar - - -@rem Execute unidbg-fetch-qsign -"%JAVA_EXE%" %DEFAULT_JVM_OPTS% %JAVA_OPTS% %UNIDBG_FETCH_QSIGN_OPTS% -classpath "%CLASSPATH%" MainKt %* - -:end -@rem End local scope for the variables with windows NT shell -if "%ERRORLEVEL%"=="0" goto mainEnd - -:fail -rem Set variable UNIDBG_FETCH_QSIGN_EXIT_CONSOLE if you need the _script_ return code instead of -rem the _cmd.exe /c_ return code! 
-if not "" == "%UNIDBG_FETCH_QSIGN_EXIT_CONSOLE%" exit 1 -exit /b 1 - -:mainEnd -if "%OS%"=="Windows_NT" endlocal - -:omega diff --git a/spaces/agunes/ChatGPT4/app.py b/spaces/agunes/ChatGPT4/app.py deleted file mode 100644 index 7e09e57ef928fd2451fd0ed1295d0994ca75d026..0000000000000000000000000000000000000000 --- a/spaces/agunes/ChatGPT4/app.py +++ /dev/null @@ -1,193 +0,0 @@ -import gradio as gr -import os -import json -import requests - -#Streaming endpoint -API_URL = "https://api.openai.com/v1/chat/completions" #os.getenv("API_URL") + "/generate_stream" - -#Huggingface provided GPT4 OpenAI API Key -OPENAI_API_KEY = os.getenv("OPENAI_API_KEY") - -#Inferenec function -def predict(system_msg, inputs, top_p, temperature, chat_counter, chatbot=[], history=[]): - - headers = { - "Content-Type": "application/json", - "Authorization": f"Bearer {OPENAI_API_KEY}" - } - print(f"system message is ^^ {system_msg}") - if system_msg.strip() == '': - initial_message = [{"role": "user", "content": f"{inputs}"},] - multi_turn_message = [] - else: - initial_message= [{"role": "system", "content": system_msg}, - {"role": "user", "content": f"{inputs}"},] - multi_turn_message = [{"role": "system", "content": system_msg},] - - if chat_counter == 0 : - payload = { - "model": "gpt-4", - "messages": initial_message , - "temperature" : 1.0, - "top_p":1.0, - "n" : 1, - "stream": True, - "presence_penalty":0, - "frequency_penalty":0, - } - print(f"chat_counter - {chat_counter}") - else: #if chat_counter != 0 : - messages=multi_turn_message # Of the type of - [{"role": "system", "content": system_msg},] - for data in chatbot: - user = {} - user["role"] = "user" - user["content"] = data[0] - assistant = {} - assistant["role"] = "assistant" - assistant["content"] = data[1] - messages.append(user) - messages.append(assistant) - temp = {} - temp["role"] = "user" - temp["content"] = inputs - messages.append(temp) - #messages - payload = { - "model": "gpt-4", - "messages": messages, # Of the type of [{"role": "user", "content": f"{inputs}"}], - "temperature" : temperature, #1.0, - "top_p": top_p, #1.0, - "n" : 1, - "stream": True, - "presence_penalty":0, - "frequency_penalty":0,} - - chat_counter+=1 - - history.append(inputs) - print(f"Logging : payload is - {payload}") - # make a POST request to the API endpoint using the requests.post method, passing in stream=True - response = requests.post(API_URL, headers=headers, json=payload, stream=True) - print(f"Logging : response code - {response}") - token_counter = 0 - partial_words = "" - - counter=0 - for chunk in response.iter_lines(): - #Skipping first chunk - if counter == 0: - counter+=1 - continue - # check whether each line is non-empty - if chunk.decode() : - chunk = chunk.decode() - # decode each line as response data is in bytes - if len(chunk) > 12 and "content" in json.loads(chunk[6:])['choices'][0]['delta']: - partial_words = partial_words + json.loads(chunk[6:])['choices'][0]["delta"]["content"] - if token_counter == 0: - history.append(" " + partial_words) - else: - history[-1] = partial_words - chat = [(history[i], history[i + 1]) for i in range(0, len(history) - 1, 2) ] # convert to tuples of list - token_counter+=1 - yield chat, history, chat_counter, response # resembles {chatbot: chat, state: history} - -#Resetting to blank -def reset_textbox(): - return gr.update(value='') - -#to set a component as visible=False -def set_visible_false(): - return gr.update(visible=False) - -#to set a component as visible=True -def set_visible_true(): - return 
gr.update(visible=True) - -title = """<h1 align="center">🔥GPT4 with ChatCompletions API +🚀Gradio-Streaming</h1>""" - -#display message for themes feature -theme_addon_msg = """<center>🌟 Discover Gradio Themes with this Demo, featuring v3.22.0! Gradio v3.23.0 also enables seamless Theme sharing. You can develop or modify a theme, and send it to the hub using simple theme.push_to_hub().</center> -<center>🏆Participate in Gradio's Theme Building Hackathon to exhibit your creative flair and win fabulous rewards! Join here - Gradio-Themes-Party🎨 🏆</center> -""" - -#Using info to add additional information about System message in GPT4 -system_msg_info = """A conversation could begin with a system message to gently instruct the assistant. -System message helps set the behavior of the AI Assistant. For example, the assistant could be instructed with 'You are a helpful assistant.'""" - -#Modifying existing Gradio Theme -theme = gr.themes.Soft(primary_hue="zinc", secondary_hue="green", neutral_hue="green", - text_size=gr.themes.sizes.text_lg) - -with gr.Blocks(css = """#col_container { margin-left: auto; margin-right: auto;} #chatbot {height: 520px; overflow: auto;}""", - theme=theme) as demo: - gr.HTML(title) - gr.HTML("""<center>🔥This Huggingface Gradio Demo provides you full access to GPT4 API (4096 token limit). 🎉🥳🎉You don't need any OPENAI API key🙌</center>""") - gr.HTML(theme_addon_msg) - gr.HTML('''<center>Duplicate the Space and run securely with your OpenAI API Key</center>
      ''') - - with gr.Column(elem_id = "col_container"): - #GPT4 API Key is provided by Huggingface - with gr.Accordion(label="System message:", open=False): - system_msg = gr.Textbox(label="Instruct the AI Assistant to set its beaviour", info = system_msg_info, value="") - accordion_msg = gr.HTML(value="🚧 To set System message you will have to refresh the app", visible=False) - chatbot = gr.Chatbot(label='GPT4', elem_id="chatbot") - inputs = gr.Textbox(placeholder= "Hi there!", label= "Type an input and press Enter") - state = gr.State([]) - with gr.Row(): - with gr.Column(scale=7): - b1 = gr.Button().style(full_width=True) - with gr.Column(scale=3): - server_status_code = gr.Textbox(label="Status code from OpenAI server", ) - - #top_p, temperature - with gr.Accordion("Parameters", open=False): - top_p = gr.Slider( minimum=-0, maximum=1.0, value=1.0, step=0.05, interactive=True, label="Top-p (nucleus sampling)",) - temperature = gr.Slider( minimum=-0, maximum=5.0, value=1.0, step=0.1, interactive=True, label="Temperature",) - chat_counter = gr.Number(value=0, visible=False, precision=0) - - #Event handling - inputs.submit( predict, [system_msg, inputs, top_p, temperature, chat_counter, chatbot, state], [chatbot, state, chat_counter, server_status_code],) #openai_api_key - b1.click( predict, [system_msg, inputs, top_p, temperature, chat_counter, chatbot, state], [chatbot, state, chat_counter, server_status_code],) #openai_api_key - - inputs.submit(set_visible_false, [], [system_msg]) - b1.click(set_visible_false, [], [system_msg]) - inputs.submit(set_visible_true, [], [accordion_msg]) - b1.click(set_visible_true, [], [accordion_msg]) - - b1.click(reset_textbox, [], [inputs]) - inputs.submit(reset_textbox, [], [inputs]) - - #Examples - with gr.Accordion(label="Examples for System message:", open=False): - gr.Examples( - examples = [["""You are an AI programming assistant. - - - Follow the user's requirements carefully and to the letter. - - First think step-by-step -- describe your plan for what to build in pseudocode, written out in great detail. - - Then output the code in a single code block. - - Minimize any other prose."""], ["""You are ComedianGPT who is a helpful assistant. 
You answer everything with a joke and witty replies."""], - ["You are ChefGPT, a helpful assistant who answers questions with culinary expertise and a pinch of humor."], - ["You are FitnessGuruGPT, a fitness expert who shares workout tips and motivation with a playful twist."], - ["You are SciFiGPT, an AI assistant who discusses science fiction topics with a blend of knowledge and wit."], - ["You are PhilosopherGPT, a thoughtful assistant who responds to inquiries with philosophical insights and a touch of humor."], - ["You are EcoWarriorGPT, a helpful assistant who shares environment-friendly advice with a lighthearted approach."], - ["You are MusicMaestroGPT, a knowledgeable AI who discusses music and its history with a mix of facts and playful banter."], - ["You are SportsFanGPT, an enthusiastic assistant who talks about sports and shares amusing anecdotes."], - ["You are TechWhizGPT, a tech-savvy AI who can help users troubleshoot issues and answer questions with a dash of humor."], - ["You are FashionistaGPT, an AI fashion expert who shares style advice and trends with a sprinkle of wit."], - ["You are ArtConnoisseurGPT, an AI assistant who discusses art and its history with a blend of knowledge and playful commentary."], - ["You are a helpful assistant that provides detailed and accurate information."], - ["You are an assistant that speaks like Shakespeare."], - ["You are a friendly assistant who uses casual language and humor."], - ["You are a financial advisor who gives expert advice on investments and budgeting."], - ["You are a health and fitness expert who provides advice on nutrition and exercise."], - ["You are a travel consultant who offers recommendations for destinations, accommodations, and attractions."], - ["You are a movie critic who shares insightful opinions on films and their themes."], - ["You are a history enthusiast who loves to discuss historical events and figures."], - ["You are a tech-savvy assistant who can help users troubleshoot issues and answer questions about gadgets and software."], - ["You are an AI poet who can compose creative and evocative poems on any given topic."],], - inputs = system_msg,) - -demo.queue(max_size=99, concurrency_count=20).launch(debug=True) \ No newline at end of file diff --git a/spaces/akhaliq/openlm-research-open_llama_13b/README.md b/spaces/akhaliq/openlm-research-open_llama_13b/README.md deleted file mode 100644 index ec7b1fb4f350749f64dbe94dff5791c7cbf8f3fb..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/openlm-research-open_llama_13b/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Openlm-research-open Llama 13b -emoji: 🚀 -colorFrom: purple -colorTo: gray -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/alamin655/Personas/styles.css b/spaces/alamin655/Personas/styles.css deleted file mode 100644 index ea5f28e61617064df2b252f61c599ec3775cb3a5..0000000000000000000000000000000000000000 --- a/spaces/alamin655/Personas/styles.css +++ /dev/null @@ -1,56 +0,0 @@ -/* Style the overall app container */ -div.css-k1ih3n.egzxvld4 { - padding: 1rem 1rem 1rem; - display: flex; - overflow: visible; - flex-grow: 1; /* This allows the chat window to be anchored at the bottom */ -} -/* Hide the streamlit injected data-iframe-height div */ -div.css-qcqlej.egzxvld3 { - display: none; -} -.css-ocqkz7 { - flex-grow: 0; -} - -/* Style the app so the scrollbar is anchored to the bottom 
*/ -section.css-k1vhr4.egzxvld5 { - display: flex; -} - -/* Style prompt_json_view_placeholder header so it is aligned. */ -div.css-k1ih3n.egzxvld4 > div:nth-child(1) > div:nth-child(1) > div:nth-child(7) > div:nth-child(1) p { - margin-left: 8px; -} - -/* Style prompt_json_view_placeholder so overflow is scrollable */ -div.css-k1ih3n.egzxvld4 > div:nth-child(1) > div:nth-child(1) > div:nth-child(7) > div:nth-child(1) > div:nth-child(1) > div:nth-child(1) > div:nth-child(2) > div:nth-child(1) > div:nth-child(2) { - overflow-x: hidden; - overflow-y: scroll; - max-height: 955px; - margin-top: 8px; - margin-left: 8px; -} - -/* Style prompt_string_placeholder so overflow is scrollable */ -div.css-k1ih3n.egzxvld4 > div:nth-child(1) > div:nth-child(1) > div:nth-child(7) > div:nth-child(2) .stCodeBlock { - overflow-x: hidden; - overflow-y: scroll; - max-height: 955px; - margin-top: 8px; -} - -/* Remove "Press enter to apply" from text input */ -.stTextInput div.css-1if5ada.effi0qh1 { - visibility: hidden; -} - -/* Make markdown code wrapped */ -code.language-markdown { - white-space: pre-wrap !important ; -} - -/* Make padding smaller on st.sidebar */ -div.css-1vq4p4l.e1fqkh3o4 { - padding-top: 2rem; -} \ No newline at end of file diff --git a/spaces/aleloved02/Salesforce-codet5-large/README.md b/spaces/aleloved02/Salesforce-codet5-large/README.md deleted file mode 100644 index 33a4af76aa96c569bdfbf50cdc8568fd5e37464a..0000000000000000000000000000000000000000 --- a/spaces/aleloved02/Salesforce-codet5-large/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Salesforce Codet5 Large -emoji: 📈 -colorFrom: gray -colorTo: gray -sdk: gradio -sdk_version: 3.17.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/alex-mindspace/gpt-agents/swarmai/__init__.py b/spaces/alex-mindspace/gpt-agents/swarmai/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/allknowingroger/Image-Models-Test196/app.py b/spaces/allknowingroger/Image-Models-Test196/app.py deleted file mode 100644 index 50eff76e0dbe1fc399444043ceb71330e6585eff..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test196/app.py +++ /dev/null @@ -1,144 +0,0 @@ -import gradio as gr -# import os -# import sys -# from pathlib import Path -import time - -models =[ - "theexcitedgirl/my-garden-flowers-nxt", - "Danxie/lora-trained-xl-colab", - "VHDSDK/tcoaalt2", - "meowXin/lora-trained-xl-colab", - "hjhjhqw/my-lion", - "artificialguybr/TshirtDesignRedmond-V2", - "artificialguybr/ColoringBookRedmond", - "KyriaAnnwyn/lora-trained-NoahSanchez_baseRVsamples_long-xl", - "theexcitedgirl/my-pet-dog", -] - - -model_functions = {} -model_idx = 1 -for model_path in models: - try: - model_functions[model_idx] = gr.Interface.load(f"models/{model_path}", live=False, preprocess=True, postprocess=False) - except Exception as error: - def the_fn(txt): - return None - model_functions[model_idx] = gr.Interface(fn=the_fn, inputs=["text"], outputs=["image"]) - model_idx+=1 - - -def send_it_idx(idx): - def send_it_fn(prompt): - output = (model_functions.get(str(idx)) or model_functions.get(str(1)))(prompt) - return output - return send_it_fn - -def get_prompts(prompt_text): - return prompt_text - -def clear_it(val): - if int(val) != 0: - val = 0 - else: - val = 0 - pass - return val - -def all_task_end(cnt,t_stamp): - to = t_stamp + 60 - et = 
time.time() - if et > to and t_stamp != 0: - d = gr.update(value=0) - tog = gr.update(value=1) - #print(f'to: {to} et: {et}') - else: - if cnt != 0: - d = gr.update(value=et) - else: - d = gr.update(value=0) - tog = gr.update(value=0) - #print (f'passing: to: {to} et: {et}') - pass - return d, tog - -def all_task_start(): - print("\n\n\n\n\n\n\n") - t = time.gmtime() - t_stamp = time.time() - current_time = time.strftime("%H:%M:%S", t) - return gr.update(value=t_stamp), gr.update(value=t_stamp), gr.update(value=0) - -def clear_fn(): - nn = len(models) - return tuple([None, *[None for _ in range(nn)]]) - - - -with gr.Blocks(title="SD Models") as my_interface: - with gr.Column(scale=12): - # with gr.Row(): - # gr.Markdown("""- Primary prompt: what you want to draw (English words, e.g. a cat; separating words with commas works better; click the Improve button to refine it)\n- Real prompt: the refined prompt; once it appears, click the Run button on the right to start""") - with gr.Row(): - with gr.Row(scale=6): - primary_prompt=gr.Textbox(label="Prompt", value="") - # real_prompt=gr.Textbox(label="Real prompt") - with gr.Row(scale=6): - # improve_prompts_btn=gr.Button("Improve") - with gr.Row(): - run=gr.Button("Run",variant="primary") - clear_btn=gr.Button("Clear") - with gr.Row(): - sd_outputs = {} - model_idx = 1 - for model_path in models: - with gr.Column(scale=3, min_width=320): - with gr.Box(): - sd_outputs[model_idx] = gr.Image(label=model_path) - pass - model_idx += 1 - pass - pass - - with gr.Row(visible=False): - start_box=gr.Number(interactive=False) - end_box=gr.Number(interactive=False) - tog_box=gr.Textbox(value=0,interactive=False) - - start_box.change( - all_task_end, - [start_box, end_box], - [start_box, tog_box], - every=1, - show_progress=False) - - primary_prompt.submit(all_task_start, None, [start_box, end_box, tog_box]) - run.click(all_task_start, None, [start_box, end_box, tog_box]) - runs_dict = {} - model_idx = 1 - for model_path in models: - runs_dict[model_idx] = run.click(model_functions[model_idx], inputs=[primary_prompt], outputs=[sd_outputs[model_idx]]) - model_idx += 1 - pass - pass - - # improve_prompts_btn_clicked=improve_prompts_btn.click( - # get_prompts, - # inputs=[primary_prompt], - # outputs=[primary_prompt], - # cancels=list(runs_dict.values())) - clear_btn.click( - clear_fn, - None, - [primary_prompt, *list(sd_outputs.values())], - cancels=[*list(runs_dict.values())]) - tog_box.change( - clear_it, - tog_box, - tog_box, - cancels=[*list(runs_dict.values())]) - -my_interface.queue(concurrency_count=600, status_update_rate=1) -my_interface.launch(inline=True, show_api=False) - \ No newline at end of file diff --git a/spaces/andreped/AeroPath/demo/src/convert.py b/spaces/andreped/AeroPath/demo/src/convert.py deleted file mode 100644 index d4b97da96035bdd6ce953f5f0e0ebddd03d42ad7..0000000000000000000000000000000000000000 --- a/spaces/andreped/AeroPath/demo/src/convert.py +++ /dev/null @@ -1,35 +0,0 @@ -import nibabel as nib -from nibabel.processing import resample_to_output -from skimage.measure import marching_cubes - - -def nifti_to_obj(path, output="prediction.obj"): - # load NIFTI into numpy array - image = nib.load(path) - resampled = resample_to_output(image, [1, 1, 1], order=1) - data = resampled.get_fdata().astype("uint8") - - # Create a material with a red diffuse color (RGB value) - red_material = "newmtl RedMaterial\nKd 1 0 0" # Red diffuse color (RGB) - - # extract surface - verts, faces, normals, values = marching_cubes(data, 0) - faces += 1 - - with open(output, "w") as thefile: - # Write the material definition to the OBJ file - thefile.write(red_material + "\n") - - for 
item in verts: - # thefile.write('usemtl RedMaterial\n') - thefile.write("v {0} {1} {2}\n".format(item[0], item[1], item[2])) - - for item in normals: - thefile.write("vn {0} {1} {2}\n".format(item[0], item[1], item[2])) - - for item in faces: - thefile.write( - "f {0}//{0} {1}//{1} {2}//{2}\n".format( - item[0], item[1], item[2] - ) - ) diff --git a/spaces/animeartstudio/AnimeArtmodels2/app.py b/spaces/animeartstudio/AnimeArtmodels2/app.py deleted file mode 100644 index 6370af2203740a7ca018e2e68c35ea71ad7b68aa..0000000000000000000000000000000000000000 --- a/spaces/animeartstudio/AnimeArtmodels2/app.py +++ /dev/null @@ -1,216 +0,0 @@ -import gradio as gr -import os -import sys -from pathlib import Path - -models = [ - {"name": "Anything 5.0", "url": "stablediffusionapi/anything-v5"}, - {"name": "Counterfeit 2.0", "url": "gsdf/Counterfeit-V2.0"}, - {"name": "Counterfeit 3.0", "url": "stablediffusionapi/counterfeit-v30"}, - {"name": "Dreamlike Anime", "url": "dreamlike-art/dreamlike-anime-1.0"}, - {"name": "MeinaMix 7", "url": "Nacholmo/meinamixv7-diffusers"}, - {"name": "Openjourney 4", "url": "prompthero/openjourney-v4"}, - {"name": "OpenNiji", "url": "Korakoe/OpenNiji"}, - {"name": "Pastel Mix", "url": "andite/pastel-mix"}, - {"name": "Picasso Diffusion 1.1", "url": "aipicasso/picasso-diffusion-1-1"}, - {"name": "Rev Anim", "url": "stablediffusionapi/rev-anim"}, - {"name": "TMND mix", "url": "stablediffusionapi/tmnd-mix"}, - {"name": "Waifu Diffusion", "url": "hakurei/waifu-diffusion"}, - {"name": "-------- TOP MODELS -------", "url": "WarriorMama777/AbyssOrangeMix"}, - {"name": "Abyss Orange Mix 2", "url": "WarriorMama777/AbyssOrangeMix2"}, - {"name": "Anything 3.0", "url": "Linaqruf/anything-v3.0"}, - {"name": "Anything 3.1", "url": "cag/anything-v3-1"}, - {"name": "Anything 3X", "url": "iZELX1/Anything-V3-X"}, - {"name": "Anything 4.0", "url": "andite/anything-v4.0"}, - {"name": "Anything 5.0", "url": "stablediffusionapi/anything-v5"}, - {"name": "Chillout App Factory","url": "stablediffusionapi/chillout-app-factory"}, - {"name": "Classic Anime", "url": "nitrosocke/classic-anim-diffusion"}, - {"name": "Cool Japan Diffusion 2.1.2", "url": "aipicasso/cool-japan-diffusion-2-1-2"}, - {"name": "Counterfeit 2.0", "url": "gsdf/Counterfeit-V2.0"}, - {"name": "Counterfeit 3.0", "url": "stablediffusionapi/counterfeit-v30"}, - {"name": "CyberPunk Anime", "url": "DGSpitzer/Cyberpunk-Anime-Diffusion"}, - {"name": "Dark Sushi Mix", "url": "stablediffusionapi/dark-sushi-mix"}, - {"name": "Dreamlike Anime", "url": "dreamlike-art/dreamlike-anime-1.0"}, - {"name": "Eimis Anime Diffusion", "url": "eimiss/EimisAnimeDiffusion_1.0v"}, - {"name": "Ghibli Diffusion", "url": "nitrosocke/Ghibli-Diffusion"}, - {"name": "GrapeFruit", "url": "iZELX1/Grapefruit"}, - {"name": "GuoFeng 3", "url": "xiaolxl/GuoFeng3"}, - {"name": "Meina Pastel", "url": "stablediffusionapi/meinapastel"}, - {"name": "MeinaMix 7", "url": "Nacholmo/meinamixv7-diffusers"}, - {"name": "Openjourney 4", "url": "prompthero/openjourney-v4"}, - {"name": "OpenNiji", "url": "Korakoe/OpenNiji"}, - {"name": "Pastel Mix", "url": "andite/pastel-mix"}, - {"name": "Picasso Diffusion 1.1", "url": "aipicasso/picasso-diffusion-1-1"}, - {"name": "Protogen 2.2", "url": "darkstorm2150/Protogen_v2.2_Official_Release"}, - {"name": "Protogen Infinity", "url": "darkstorm2150/Protogen_Infinity_Official_Release"}, - {"name": "Protogen X 3.4", "url": "darkstorm2150/Protogen_x3.4_Official_Release"}, - {"name": "Rev Anim", "url": "stablediffusionapi/rev-anim"}, - 
{"name": "TMND mix", "url": "stablediffusionapi/tmnd-mix"}, - {"name": "Waifu Diffusion", "url": "hakurei/waifu-diffusion"}, - {"name": "-------- ALL ANIME MODELS -------", "url": "WarriorMama777/AbyssOrangeMix"}, - {"name": "7 Pa", "url": "AIARTCHAN/7pa"}, - {"name": "A Certain Model", "url": "JosephusCheung/ACertainModel"}, - {"name": "A Certain Thing", "url": "JosephusCheung/ACertainThing"}, - {"name": "A Certainity", "url": "JosephusCheung/ACertainty"}, - {"name": "Abyss Hell Hero", "url": "AIARTCHAN/AbyssHellHero"}, - {"name": "Abyss Maple 3", "url": "AIARTCHAN/AbyssMapleVer3"}, - {"name": "Abyss Orange Mix 2", "url": "WarriorMama777/AbyssOrangeMix2"}, - {"name": "Abyss Orange Mix", "url": "WarriorMama777/AbyssOrangeMix"}, - {"name": "AbyssHell 3", "url": "AIARTCHAN/AbyssHellVer3"}, - {"name": "All 526 Animated", "url": "stablediffusionapi/all-526-animated"}, - {"name": "Anidosmix 3", "url": "AIARTCHAN/anidosmixV2"}, - {"name": "Anime Kawai Diffusion", "url": "Ojimi/anime-kawai-diffusion"}, - {"name": "Anireal 3D V2", "url": "circulus/sd-anireal-3d-v2"}, - {"name": "AnyLORA", "url": "kubanemil/AnyLORA"}, - {"name": "Anything 2.1", "url": "swl-models/anything-v2.1"}, - {"name": "Anything 3.0 Light", "url": "mm00/anything-v3.0-light"}, - {"name": "Anything 3.0", "url": "Linaqruf/anything-v3.0"}, - {"name": "Anything 3.1", "url": "cag/anything-v3-1"}, - {"name": "Anything 3X", "url": "iZELX1/Anything-V3-X"}, - {"name": "Anything 4.0", "url": "andite/anything-v4.0"}, - {"name": "Anything 5.0", "url": "stablediffusionapi/anything-v5"}, - {"name": "Anything Else 4", "url": "stablediffusionapi/anythingelse-v4"}, - {"name": "Anything Else 5", "url": "stablediffusionapi/anything-v5"}, - {"name": "Arcane Diffusion", "url": "nitrosocke/Arcane-Diffusion"}, - {"name": "Archer Diffusion", "url": "nitrosocke/archer-diffusion"}, - {"name": "Asian Mix", "url": "D1b4l4p/AsianMix"}, - {"name": "Blood Orange Mix", "url": "WarriorMama777/BloodOrangeMix"}, - {"name": "CamelliaMix 2.5D","url": "stablediffusionapi/camelliamix25d"}, - {"name": "CamelliaMix Line","url": "stablediffusionapi/camelliamixline"}, - {"name": "CamelliaMix","url": "Powidl43/CamelliaMix"}, - {"name": "Cetusmix", "url": "stablediffusionapi/cetusmix"}, - {"name": "Chik Mix", "url": "stablediffusionapi/chikmix"}, - {"name": "Chikmix", "url": "stablediffusionapi/chikmix"}, - {"name": "Chillout App Factory","url": "stablediffusionapi/chillout-app-factory"}, - {"name": "Classic Anime", "url": "nitrosocke/classic-anim-diffusion"}, - {"name": "Cool Japan Diffusion 2.1.2", "url": "aipicasso/cool-japan-diffusion-2-1-2"}, - {"name": "Cosmic Babes", "url": "stablediffusionapi/cosmic-babes"}, - {"name": "Counterfeit 1.0", "url": "gsdf/counterfeit-v1.0"}, - {"name": "Counterfeit 2", "url": "gsdf/Counterfeit-V2.0"}, - {"name": "Counterfeit 2.0", "url": "gsdf/Counterfeit-V2.0"}, - {"name": "Counterfeit 3.0", "url": "stablediffusionapi/counterfeit-v30"}, - {"name": "CuteSexyRobutts", "url": "andite/cutesexyrobutts-diffusion"}, - {"name": "CyberPunk Anime", "url": "DGSpitzer/Cyberpunk-Anime-Diffusion"}, - {"name": "Dark Sushi Mix", "url": "stablediffusionapi/dark-sushi-mix"}, - {"name": "Dash Sushi 25d", "url": "stablediffusionapi/dark-sushi-25d"}, - {"name": "Dreamlike Anime", "url": "dreamlike-art/dreamlike-anime-1.0"}, - {"name": "DucHaiten Anime", "url": "DucHaiten/DucHaitenAnime"}, - {"name": "Eerie Orange Mix", "url": "WarriorMama777/EerieOrangeMix"}, - {"name": "Eimis Anime Diffusion", "url": "eimiss/EimisAnimeDiffusion_1.0v"}, - {"name": "Ghibli 
Diffusion", "url": "nitrosocke/Ghibli-Diffusion"}, - {"name": "GrapeFruit", "url": "iZELX1/Grapefruit"}, - {"name": "GuoFeng 3", "url": "xiaolxl/GuoFeng3"}, - {"name": "Guweiz Diffusion", "url": "andite/guweiz-diffusion"}, - {"name": "Hiten Diffusion", "url": "andite/hiten-diffusion"}, - {"name": "Icomix 2", "url": "stablediffusionapi/icomix-2"}, - {"name": "InkPunk Diffusion", "url": "Envvi/Inkpunk-Diffusion"}, - {"name": "Mama Orange Mixs", "url": "WarriorMama777/OrangeMixs"}, - {"name": "Mashuu Diffusion", "url": "andite/mashuu-diffusion"}, - {"name": "Meina Alter", "url": "stablediffusionapi/meinaalter"}, - {"name": "Meina Pastel", "url": "stablediffusionapi/meinapastel"}, - {"name": "MeinaMix 7", "url": "Nacholmo/meinamixv7-diffusers"}, - {"name": "Mignon Diffusion", "url": "andite/mignon-diffusion"}, - {"name": "MikaPikazo Diffusion", "url": "andite/mikapikazo-diffusion"}, - {"name": "Mikapikazo", "url": "andite/mikapikazo-diffusion"}, - {"name": "Mix Pro V4", "url": "AIARTCHAN/MIX-Pro-V4"}, - {"name": "NeverEnding-Dream", "url": "Lykon/NeverEnding-Dream"}, - {"name": "Openjourney 4", "url": "prompthero/openjourney-v4"}, - {"name": "OpenNiji", "url": "Korakoe/OpenNiji"}, - {"name": "Pastel Mix", "url": "andite/pastel-mix"}, - {"name": "Picasso Diffusion 1.1", "url": "aipicasso/picasso-diffusion-1-1"}, - {"name": "Piromizu Diffusion", "url": "andite/piromizu-diffusion"}, - {"name": "Protogen 2.2", "url": "darkstorm2150/Protogen_v2.2_Official_Release"}, - {"name": "Protogen Infinity", "url": "darkstorm2150/Protogen_Infinity_Official_Release"}, - {"name": "Protogen X 3.4", "url": "darkstorm2150/Protogen_x3.4_Official_Release"}, - {"name": "Rev Anim", "url": "stablediffusionapi/rev-anim"}, - {"name": "Rev Animated", "url": "coreml/coreml-ReV-Animated"}, - {"name": "Rev Animated", "url": "LottePeisch/RevAnimated-Diffusers"}, - {"name": "Something V 2.2","url": "NoCrypt/SomethingV2_2"}, - {"name": "Something V2","url": "NoCrypt/SomethingV2"}, - {"name": "Three Delicacy", "url": "stablediffusionapi/three-delicacy"}, - {"name": "Three Delicacy wonto", "url": "stablediffusionapi/three-delicacy-wonto"}, - {"name": "TMND mix", "url": "stablediffusionapi/tmnd-mix"}, - {"name": "Waifu Diffusion", "url": "hakurei/waifu-diffusion"} -] - -current_model = models[0] - -text_gen = gr.Interface.load("spaces/daspartho/prompt-extend") - -models2 = [] -for model in models: - model_url = f"models/{model['url']}" - loaded_model = gr.Interface.load(model_url, live=True, preprocess=True) - models2.append(loaded_model) - - -def text_it(inputs, text_gen=text_gen): - return text_gen(inputs) - - -def set_model(current_model_index): - global current_model - current_model = models[current_model_index] - return gr.update(value=f"{current_model['name']}") - - -def send_it(inputs, model_choice): - proc = models2[model_choice] - return proc(inputs) - - -with gr.Blocks() as myface: - gr.HTML( - - ) - - with gr.Row(): - with gr.Row(): - input_text = gr.Textbox(label="Prompt idea", placeholder="Eg. 
Cyberpunk anime princess", lines=1) - # Model selection dropdown - model_name1 = gr.Dropdown( - label="Choose Model", - choices=[m["name"] for m in models], - type="index", - value=current_model["name"], - interactive=True, - ) - with gr.Row(): - see_prompts = gr.Button("Generate Prompts") - run = gr.Button("Generate Images", variant="primary") - - with gr.Row(): - output1 = gr.Image(label="") - output2 = gr.Image(label="") - output3 = gr.Image(label="") - with gr.Row(): - magic1 = gr.Textbox(label="Generated Prompt", lines=2) - magic2 = gr.Textbox(label="Generated Prompt", lines=2) - magic3 = gr.Textbox(label="Generated Prompt", lines=2) - with gr.Row(): - output4 = gr.Image(label="") - output5 = gr.Image(label="") - output6 = gr.Image(label="") - with gr.Row(): - magic4 = gr.Textbox(label="Generated Prompt", lines=2) - magic5 = gr.Textbox(label="Generated Prompt", lines=2) - magic6 = gr.Textbox(label="Generated Prompt", lines=2) - - model_name1.change(set_model, inputs=model_name1, outputs=[output1, output2, output3, output4, output5, output6]) - - run.click(send_it, inputs=[magic1, model_name1], outputs=[output1]) - run.click(send_it, inputs=[magic2, model_name1], outputs=[output2]) - run.click(send_it, inputs=[magic3, model_name1], outputs=[output3]) - run.click(send_it, inputs=[magic4, model_name1], outputs=[output4]) - run.click(send_it, inputs=[magic5, model_name1], outputs=[output5]) - run.click(send_it, inputs=[magic6, model_name1], outputs=[output6]) - - see_prompts.click(text_it, inputs=[input_text], outputs=[magic1]) - see_prompts.click(text_it, inputs=[input_text], outputs=[magic2]) - see_prompts.click(text_it, inputs=[input_text], outputs=[magic3]) - see_prompts.click(text_it, inputs=[input_text], outputs=[magic4]) - see_prompts.click(text_it, inputs=[input_text], outputs=[magic5]) - see_prompts.click(text_it, inputs=[input_text], outputs=[magic6]) - -myface.queue(concurrency_count=200) -myface.launch(inline=True, show_api=False, max_threads=400) \ No newline at end of file diff --git a/spaces/antonovmaxim/text-generation-webui-space/docs/README.md b/spaces/antonovmaxim/text-generation-webui-space/docs/README.md deleted file mode 100644 index 65dadd7cfc906247d9c6995896ba6a144a0d31c1..0000000000000000000000000000000000000000 --- a/spaces/antonovmaxim/text-generation-webui-space/docs/README.md +++ /dev/null @@ -1,19 +0,0 @@ -# text-generation-webui documentation - -## Table of contents - -* [GPTQ models (4 bit mode)](GPTQ-models-(4-bit-mode).md) -* [LLaMA model](LLaMA-model.md) -* [Using LoRAs](Using-LoRAs.md) -* [llama.cpp models](llama.cpp-models.md) -* [RWKV model](RWKV-model.md) -* [Extensions](Extensions.md) -* [Chat mode](Chat-mode.md) -* [DeepSpeed](DeepSpeed.md) -* [FlexGen](FlexGen.md) -* [Spell book](Spell-book.md) -* [Low-VRAM-guide](Low-VRAM-guide.md) -* [System requirements](System-requirements.md) -* [Windows installation guide](Windows-installation-guide.md) -* [WSL installation guide](WSL-installation-guide.md) -* [Docker Compose](Docker.md) diff --git a/spaces/aodianyun/stable-diffusion-webui/modules/sd_vae_approx.py b/spaces/aodianyun/stable-diffusion-webui/modules/sd_vae_approx.py deleted file mode 100644 index ea4c4a3a72941c31a654a29ce90cf8d9c82ce674..0000000000000000000000000000000000000000 --- a/spaces/aodianyun/stable-diffusion-webui/modules/sd_vae_approx.py +++ /dev/null @@ -1,58 +0,0 @@ -import os - -import torch -from torch import nn -from modules import devices, paths - -sd_vae_approx_model = None - - -class VAEApprox(nn.Module): - def __init__(self): 
- super(VAEApprox, self).__init__() - self.conv1 = nn.Conv2d(4, 8, (7, 7)) - self.conv2 = nn.Conv2d(8, 16, (5, 5)) - self.conv3 = nn.Conv2d(16, 32, (3, 3)) - self.conv4 = nn.Conv2d(32, 64, (3, 3)) - self.conv5 = nn.Conv2d(64, 32, (3, 3)) - self.conv6 = nn.Conv2d(32, 16, (3, 3)) - self.conv7 = nn.Conv2d(16, 8, (3, 3)) - self.conv8 = nn.Conv2d(8, 3, (3, 3)) - - def forward(self, x): - extra = 11 - x = nn.functional.interpolate(x, (x.shape[2] * 2, x.shape[3] * 2)) - x = nn.functional.pad(x, (extra, extra, extra, extra)) - - for layer in [self.conv1, self.conv2, self.conv3, self.conv4, self.conv5, self.conv6, self.conv7, self.conv8, ]: - x = layer(x) - x = nn.functional.leaky_relu(x, 0.1) - - return x - - -def model(): - global sd_vae_approx_model - - if sd_vae_approx_model is None: - sd_vae_approx_model = VAEApprox() - sd_vae_approx_model.load_state_dict(torch.load(os.path.join(paths.models_path, "VAE-approx", "model.pt"), map_location='cpu' if devices.device.type != 'cuda' else None)) - sd_vae_approx_model.eval() - sd_vae_approx_model.to(devices.device, devices.dtype) - - return sd_vae_approx_model - - -def cheap_approximation(sample): - # https://discuss.huggingface.co/t/decoding-latents-to-rgb-without-upscaling/23204/2 - - coefs = torch.tensor([ - [0.298, 0.207, 0.208], - [0.187, 0.286, 0.173], - [-0.158, 0.189, 0.264], - [-0.184, -0.271, -0.473], - ]).to(sample.device) - - x_sample = torch.einsum("lxy,lr -> rxy", sample, coefs) - - return x_sample diff --git a/spaces/arbml/Ashaar/poetry_diacritizer/modules/layers.py b/spaces/arbml/Ashaar/poetry_diacritizer/modules/layers.py deleted file mode 100644 index 64d7d68f5d3a7d58c2615939220168a94bbd4475..0000000000000000000000000000000000000000 --- a/spaces/arbml/Ashaar/poetry_diacritizer/modules/layers.py +++ /dev/null @@ -1,70 +0,0 @@ -import torch -from torch import nn -from typing import Any - - -class BatchNormConv1d(nn.Module): - """ - A nn.Conv1d followed by an optional activation function, and nn.BatchNorm1d - """ - - def __init__( - self, - in_dim: int, - out_dim: int, - kernel_size: int, - stride: int, - padding: int, - activation: Any = None, - ): - super().__init__() - self.conv1d = nn.Conv1d( - in_dim, - out_dim, - kernel_size=kernel_size, - stride=stride, - padding=padding, - bias=False, - ) - self.bn = nn.BatchNorm1d(out_dim) - self.activation = activation - - def forward(self, x: Any): - x = self.conv1d(x) - if self.activation is not None: - x = self.activation(x) - return self.bn(x) - - -class LinearNorm(torch.nn.Module): - def __init__(self, in_dim, out_dim, bias=True, w_init_gain='linear'): - super().__init__() - self.linear_layer = torch.nn.Linear(in_dim, out_dim, bias=bias) - - torch.nn.init.xavier_uniform_( - self.linear_layer.weight, - gain=torch.nn.init.calculate_gain(w_init_gain)) - - def forward(self, x): - return self.linear_layer(x) - - -class ConvNorm(torch.nn.Module): - def __init__(self, in_channels, out_channels, kernel_size=1, stride=1, - padding=None, dilation=1, bias=True, w_init_gain='linear'): - super().__init__() - if padding is None: - assert(kernel_size % 2 == 1) - padding = int(dilation * (kernel_size - 1) / 2) - - self.conv = torch.nn.Conv1d(in_channels, out_channels, - kernel_size=kernel_size, stride=stride, - padding=padding, dilation=dilation, - bias=bias) - - torch.nn.init.xavier_uniform_( - self.conv.weight, gain=torch.nn.init.calculate_gain(w_init_gain)) - - def forward(self, signal): - conv_signal = self.conv(signal) - return conv_signal diff --git 
a/spaces/arnavkartikeya/SCRIPture-final/transform/randaugment.py b/spaces/arnavkartikeya/SCRIPture-final/transform/randaugment.py deleted file mode 100644 index 094d9f4cacc93146d2bab7311d9dc04feb07032c..0000000000000000000000000000000000000000 --- a/spaces/arnavkartikeya/SCRIPture-final/transform/randaugment.py +++ /dev/null @@ -1,340 +0,0 @@ -import cv2 -import numpy as np - - -## aug functions -def identity_func(img): - return img - - -def autocontrast_func(img, cutoff=0): - ''' - same output as PIL.ImageOps.autocontrast - ''' - n_bins = 256 - - def tune_channel(ch): - n = ch.size - cut = cutoff * n // 100 - if cut == 0: - high, low = ch.max(), ch.min() - else: - hist = cv2.calcHist([ch], [0], None, [n_bins], [0, n_bins]) - low = np.argwhere(np.cumsum(hist) > cut) - low = 0 if low.shape[0] == 0 else low[0] - high = np.argwhere(np.cumsum(hist[::-1]) > cut) - high = n_bins - 1 if high.shape[0] == 0 else n_bins - 1 - high[0] - if high <= low: - table = np.arange(n_bins) - else: - scale = (n_bins - 1) / (high - low) - offset = -low * scale - table = np.arange(n_bins) * scale + offset - table[table < 0] = 0 - table[table > n_bins - 1] = n_bins - 1 - table = table.clip(0, 255).astype(np.uint8) - return table[ch] - - channels = [tune_channel(ch) for ch in cv2.split(img)] - out = cv2.merge(channels) - return out - - -def equalize_func(img): - ''' - same output as PIL.ImageOps.equalize - PIL's implementation is different from cv2.equalize - ''' - n_bins = 256 - - def tune_channel(ch): - hist = cv2.calcHist([ch], [0], None, [n_bins], [0, n_bins]) - non_zero_hist = hist[hist != 0].reshape(-1) - step = np.sum(non_zero_hist[:-1]) // (n_bins - 1) - if step == 0: return ch - n = np.empty_like(hist) - n[0] = step // 2 - n[1:] = hist[:-1] - table = (np.cumsum(n) // step).clip(0, 255).astype(np.uint8) - return table[ch] - - channels = [tune_channel(ch) for ch in cv2.split(img)] - out = cv2.merge(channels) - return out - - -def rotate_func(img, degree, fill=(0, 0, 0)): - ''' - like PIL, rotate by degree, not radians - ''' - H, W = img.shape[0], img.shape[1] - center = W / 2, H / 2 - M = cv2.getRotationMatrix2D(center, degree, 1) - out = cv2.warpAffine(img, M, (W, H), borderValue=fill) - return out - - -def solarize_func(img, thresh=128): - ''' - same output as PIL.ImageOps.solarize - ''' - table = np.array([el if el < thresh else 255 - el for el in range(256)]) - table = table.clip(0, 255).astype(np.uint8) - out = table[img] - return out - - -def color_func(img, factor): - ''' - same output as PIL.ImageEnhance.Color - ''' - ## implementation according to PIL definition, quite slow - # degenerate = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)[:, :, np.newaxis] - # out = blend(degenerate, img, factor) - # M = ( - # np.eye(3) * factor - # + np.float32([0.114, 0.587, 0.299]).reshape(3, 1) * (1. 
- factor) - # )[np.newaxis, np.newaxis, :] - M = ( - np.float32([ - [0.886, -0.114, -0.114], - [-0.587, 0.413, -0.587], - [-0.299, -0.299, 0.701]]) * factor - + np.float32([[0.114], [0.587], [0.299]]) - ) - out = np.matmul(img, M).clip(0, 255).astype(np.uint8) - return out - - -def contrast_func(img, factor): - """ - same output as PIL.ImageEnhance.Contrast - """ - mean = np.sum(np.mean(img, axis=(0, 1)) * np.array([0.114, 0.587, 0.299])) - table = np.array([( - el - mean) * factor + mean - for el in range(256) - ]).clip(0, 255).astype(np.uint8) - out = table[img] - return out - - -def brightness_func(img, factor): - ''' - same output as PIL.ImageEnhance.Brightness - ''' - table = (np.arange(256, dtype=np.float32) * factor).clip(0, 255).astype(np.uint8) - out = table[img] - return out - - -def sharpness_func(img, factor): - ''' - The differences between this result and PIL's are all on the 4 boundaries; the center - areas are the same - ''' - kernel = np.ones((3, 3), dtype=np.float32) - kernel[1][1] = 5 - kernel /= 13 - degenerate = cv2.filter2D(img, -1, kernel) - if factor == 0.0: - out = degenerate - elif factor == 1.0: - out = img - else: - out = img.astype(np.float32) - degenerate = degenerate.astype(np.float32)[1:-1, 1:-1, :] - out[1:-1, 1:-1, :] = degenerate + factor * (out[1:-1, 1:-1, :] - degenerate) - out = out.astype(np.uint8) - return out - - -def shear_x_func(img, factor, fill=(0, 0, 0)): - H, W = img.shape[0], img.shape[1] - M = np.float32([[1, factor, 0], [0, 1, 0]]) - out = cv2.warpAffine(img, M, (W, H), borderValue=fill, flags=cv2.INTER_LINEAR).astype(np.uint8) - return out - - -def translate_x_func(img, offset, fill=(0, 0, 0)): - ''' - same output as PIL.Image.transform - ''' - H, W = img.shape[0], img.shape[1] - M = np.float32([[1, 0, -offset], [0, 1, 0]]) - out = cv2.warpAffine(img, M, (W, H), borderValue=fill, flags=cv2.INTER_LINEAR).astype(np.uint8) - return out - - -def translate_y_func(img, offset, fill=(0, 0, 0)): - ''' - same output as PIL.Image.transform - ''' - H, W = img.shape[0], img.shape[1] - M = np.float32([[1, 0, 0], [0, 1, -offset]]) - out = cv2.warpAffine(img, M, (W, H), borderValue=fill, flags=cv2.INTER_LINEAR).astype(np.uint8) - return out - - -def posterize_func(img, bits): - ''' - same output as PIL.ImageOps.posterize - ''' - out = np.bitwise_and(img, np.uint8(255 << (8 - bits))) - return out - - -def shear_y_func(img, factor, fill=(0, 0, 0)): - H, W = img.shape[0], img.shape[1] - M = np.float32([[1, 0, 0], [factor, 1, 0]]) - out = cv2.warpAffine(img, M, (W, H), borderValue=fill, flags=cv2.INTER_LINEAR).astype(np.uint8) - return out - - -def cutout_func(img, pad_size, replace=(0, 0, 0)): - replace = np.array(replace, dtype=np.uint8) - H, W = img.shape[0], img.shape[1] - rh, rw = np.random.random(2) - pad_size = pad_size // 2 - ch, cw = int(rh * H), int(rw * W) - x1, x2 = max(ch - pad_size, 0), min(ch + pad_size, H) - y1, y2 = max(cw - pad_size, 0), min(cw + pad_size, W) - out = img.copy() - out[x1:x2, y1:y2, :] = replace - return out - - -### level to args -def enhance_level_to_args(MAX_LEVEL): - def level_to_args(level): - return ((level / MAX_LEVEL) * 1.8 + 0.1,) - return level_to_args - - -def shear_level_to_args(MAX_LEVEL, replace_value): - def level_to_args(level): - level = (level / MAX_LEVEL) * 0.3 - if np.random.random() > 0.5: level = -level - return (level, replace_value) - - return level_to_args - - -def translate_level_to_args(translate_const, MAX_LEVEL, replace_value): - def level_to_args(level): - level = (level / MAX_LEVEL) * float(translate_const)
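- # Worked example: with MAX_LEVEL = 10 and translate_const = 10 (both set below), level 5 maps to a 5.0-pixel offset; the random sign flip on the next line makes the translation symmetric.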
- if np.random.random() > 0.5: level = -level - return (level, replace_value) - - return level_to_args - - -def cutout_level_to_args(cutout_const, MAX_LEVEL, replace_value): - def level_to_args(level): - level = int((level / MAX_LEVEL) * cutout_const) - return (level, replace_value) - - return level_to_args - - -def solarize_level_to_args(MAX_LEVEL): - def level_to_args(level): - level = int((level / MAX_LEVEL) * 256) - return (level, ) - return level_to_args - - -def none_level_to_args(level): - return () - - -def posterize_level_to_args(MAX_LEVEL): - def level_to_args(level): - level = int((level / MAX_LEVEL) * 4) - return (level, ) - return level_to_args - - -def rotate_level_to_args(MAX_LEVEL, replace_value): - def level_to_args(level): - level = (level / MAX_LEVEL) * 30 - if np.random.random() < 0.5: - level = -level - return (level, replace_value) - - return level_to_args - - -func_dict = { - 'Identity': identity_func, - 'AutoContrast': autocontrast_func, - 'Equalize': equalize_func, - 'Rotate': rotate_func, - 'Solarize': solarize_func, - 'Color': color_func, - 'Contrast': contrast_func, - 'Brightness': brightness_func, - 'Sharpness': sharpness_func, - 'ShearX': shear_x_func, - 'TranslateX': translate_x_func, - 'TranslateY': translate_y_func, - 'Posterize': posterize_func, - 'ShearY': shear_y_func, -} - -translate_const = 10 -MAX_LEVEL = 10 -replace_value = (128, 128, 128) -arg_dict = { - 'Identity': none_level_to_args, - 'AutoContrast': none_level_to_args, - 'Equalize': none_level_to_args, - 'Rotate': rotate_level_to_args(MAX_LEVEL, replace_value), - 'Solarize': solarize_level_to_args(MAX_LEVEL), - 'Color': enhance_level_to_args(MAX_LEVEL), - 'Contrast': enhance_level_to_args(MAX_LEVEL), - 'Brightness': enhance_level_to_args(MAX_LEVEL), - 'Sharpness': enhance_level_to_args(MAX_LEVEL), - 'ShearX': shear_level_to_args(MAX_LEVEL, replace_value), - 'TranslateX': translate_level_to_args( - translate_const, MAX_LEVEL, replace_value - ), - 'TranslateY': translate_level_to_args( - translate_const, MAX_LEVEL, replace_value - ), - 'Posterize': posterize_level_to_args(MAX_LEVEL), - 'ShearY': shear_level_to_args(MAX_LEVEL, replace_value), -} - - -class RandomAugment(object): - - def __init__(self, N=2, M=10, isPIL=False, augs=[]): - self.N = N - self.M = M - self.isPIL = isPIL - if augs: - self.augs = augs - else: - self.augs = list(arg_dict.keys()) - - def get_random_ops(self): - sampled_ops = np.random.choice(self.augs, self.N) - return [(op, 0.5, self.M) for op in sampled_ops] - - def __call__(self, img): - if self.isPIL: - img = np.array(img) - ops = self.get_random_ops() - for name, prob, level in ops: - if np.random.random() > prob: - continue - args = arg_dict[name](level) - img = func_dict[name](img, *args) - return img - - -if __name__ == '__main__': - a = RandomAugment() - img = np.random.randn(32, 32, 3) - a(img) \ No newline at end of file diff --git a/spaces/artificialguybr/OPENHERMES-V2.5-DEMO/app.py b/spaces/artificialguybr/OPENHERMES-V2.5-DEMO/app.py deleted file mode 100644 index 7cec91212a2384e8968c46c64be4143e3b557be8..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/OPENHERMES-V2.5-DEMO/app.py +++ /dev/null @@ -1,155 +0,0 @@ -import gradio as gr -import re -from transformers import AutoModelForCausalLM, AutoTokenizer -import torch -import bitsandbytes -import accelerate -model_name_or_path = "teknium/OpenHermes-2.5-Mistral-7B" -dtype = torch.bfloat16 -model = AutoModelForCausalLM.from_pretrained(model_name_or_path, - device_map="auto", - 
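# load_in_4bit=True below hands weight quantization to bitsandbytes; as a rough rule of thumb, 4-bit weights need about a quarter of the VRAM of fp16, at some cost in output quality -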
torch_dtype=dtype, - trust_remote_code=False, - load_in_4bit=True, - revision="main") -tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) - -BASE_SYSTEM_MESSAGE = "I carefully provide accurate, factual, thoughtful, nuanced answers and am brilliant at reasoning." - -def clear_chat(chat_history_state, chat_message): - chat_history_state = [] - chat_message = '' - return chat_history_state, chat_message - -def user(message, history): - history = history or [] - history.append([message, ""]) - return "", history - -def regenerate(chatbot, chat_history_state, system_msg, max_tokens, temperature, top_p, top_k, repetition_penalty): - print("Regenerate function called") # Debug print - - if not chat_history_state: - print("Chat history is empty") # Debug print - return chatbot, chat_history_state, "" - - # Remove only the last assistant's message from the chat history - if len(chat_history_state) > 0: - print(f"Before: {chat_history_state[-1]}") # Debug print - chat_history_state[-1][1] = "" - print(f"After: {chat_history_state[-1]}") # Debug print - - # Re-run the chat function - new_history, _, _ = chat(chat_history_state, system_msg, max_tokens, temperature, top_p, top_k, repetition_penalty) - print(f"New history: {new_history}") # Debug print - - return new_history, new_history, "" - - -def chat(history, system_message, max_tokens, temperature, top_p, top_k, repetition_penalty): - print(f"Chat function called with history: {history}") - history = history or [] - - # Use BASE_SYSTEM_MESSAGE if system_message is empty - system_message_to_use = system_message if system_message.strip() else BASE_SYSTEM_MESSAGE - - # The user's most recent message - user_prompt = history[-1][0] if history else "" - print(f"User prompt used for generation: {user_prompt}") # Debug print - # Prepare the model input - prompt_template = f'''system -{system_message_to_use.strip()} -user -{user_prompt} -assistant -''' - input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() - - # Generate the output - output = model.generate( - input_ids=input_ids, - max_length=max_tokens, - temperature=temperature, - top_p=top_p, - top_k=top_k, - repetition_penalty=repetition_penalty - ) - - # Decode the output - decoded_output = tokenizer.decode(output[0], skip_special_tokens=True) - assistant_response = decoded_output.split('assistant')[-1].strip() # Keep only the assistant's final reply - print(f"Generated assistant response: {assistant_response}") # Debug print - # Update the history - if history: - history[-1][1] += assistant_response - else: - history.append(["", assistant_response]) - - print(f"Updated history: {history}") - return history, history, "" - - -start_message = "" - -with gr.Blocks() as demo: - with gr.Row(): - with gr.Column(): - gr.Markdown(""" - ## OpenHermes-V2.5 Finetuned on Mistral 7B - **Space created by [@artificialguybr](https://twitter.com/artificialguybr). Model by [@Teknium1](https://twitter.com/Teknium1). Thanks HF for GPU!** - **OpenHermes-V2.5 is currently SOTA in some benchmarks for 7B models.** - **The Hermes 2 model was trained on 900,000 instructions, and surpasses all previous versions of Hermes 13B and below, and matches 70B on some benchmarks! Hermes 2 changes the game with strong multiturn chat skills, system prompt capabilities, and uses ChatML format. Its quality, diversity, and scale are unmatched in the current OS LM landscape. 
Not only does it do well in benchmarks, but also in unmeasured capabilities, like Roleplaying, Tasks, and more.** - """) - with gr.Row(): - #chatbot = gr.Chatbot().style(height=500) - chatbot = gr.Chatbot(elem_id="chatbot") - with gr.Row(): - message = gr.Textbox( - label="What do you want to chat about?", - placeholder="Ask me anything.", - lines=3, - ) - with gr.Row(): - submit = gr.Button(value="Send message", variant="secondary", scale=1) - clear = gr.Button(value="New topic", variant="secondary", scale=0) - stop = gr.Button(value="Stop", variant="secondary", scale=0) - regen_btn = gr.Button(value="Regenerate", variant="secondary", scale=0) - with gr.Accordion("Show Model Parameters", open=False): - with gr.Row(): - with gr.Column(): - max_tokens = gr.Slider(20, 512, label="Max Tokens", step=20, value=500) - temperature = gr.Slider(0.0, 2.0, label="Temperature", step=0.1, value=0.7) - top_p = gr.Slider(0.0, 1.0, label="Top P", step=0.05, value=0.95) - top_k = gr.Slider(1, 100, label="Top K", step=1, value=40) - repetition_penalty = gr.Slider(1.0, 2.0, label="Repetition Penalty", step=0.1, value=1.1) - - system_msg = gr.Textbox( - start_message, label="System Message", interactive=True, visible=True, placeholder="System prompt. Provide instructions which you want the model to remember.", lines=5) - - chat_history_state = gr.State() - clear.click(clear_chat, inputs=[chat_history_state, message], outputs=[chat_history_state, message], queue=False) - clear.click(lambda: None, None, chatbot, queue=False) - - submit_click_event = submit.click( - fn=user, inputs=[message, chat_history_state], outputs=[message, chat_history_state], queue=True - ).then( - fn=chat, inputs=[chat_history_state, system_msg, max_tokens, temperature, top_p, top_k, repetition_penalty], outputs=[chatbot, chat_history_state, message], queue=True - ) - - # Corrected the clear button click event - clear.click( - fn=clear_chat, inputs=[chat_history_state, message], outputs=[chat_history_state, message], queue=False - ) - - # Stop button remains the same - stop.click(fn=None, inputs=None, outputs=None, cancels=[submit_click_event], queue=False) - regen_click_event = regen_btn.click( - fn=regenerate, - inputs=[chatbot, chat_history_state, system_msg, max_tokens, temperature, top_p, top_k, repetition_penalty], - outputs=[chatbot, chat_history_state, message], - queue=True - ) - - -demo.queue(max_size=128, concurrency_count=2) -demo.launch() \ No newline at end of file diff --git a/spaces/artificialguybr/video-dubbing/TTS/recipes/ljspeech/xtts_v1/train_gpt_xtts.py b/spaces/artificialguybr/video-dubbing/TTS/recipes/ljspeech/xtts_v1/train_gpt_xtts.py deleted file mode 100644 index 7d8f4064c5f510295d5698869acdbdd57a9faeff..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/recipes/ljspeech/xtts_v1/train_gpt_xtts.py +++ /dev/null @@ -1,176 +0,0 @@ -import os - -from trainer import Trainer, TrainerArgs - -from TTS.config.shared_configs import BaseDatasetConfig -from TTS.tts.datasets import load_tts_samples -from TTS.tts.layers.xtts.trainer.gpt_trainer import GPTArgs, GPTTrainer, GPTTrainerConfig, XttsAudioConfig -from TTS.utils.manage import ModelManager - -# Logging parameters -RUN_NAME = "GPT_XTTS_LJSpeech_FT" -PROJECT_NAME = "XTTS_trainer" -DASHBOARD_LOGGER = "tensorboard" -LOGGER_URI = None - -# Set here the path that the checkpoints will be saved. 
Default: ./run/training/ -OUT_PATH = os.path.join(os.path.dirname(os.path.abspath(__file__)), "run", "training") - -# Training Parameters -OPTIMIZER_WD_ONLY_ON_WEIGHTS = True # for multi-gpu training please make it False -START_WITH_EVAL = True # if True it will start with evaluation -BATCH_SIZE = 3 # set here the batch size -GRAD_ACUMM_STEPS = 84 # set here the grad accumulation steps -# Note: we recommend that BATCH_SIZE * GRAD_ACUMM_STEPS needs to be at least 252 for more efficient training. You can increase/decrease BATCH_SIZE but then set GRAD_ACUMM_STEPS accordingly. - -# Define here the dataset that you want to use for the fine-tuning. -config_dataset = BaseDatasetConfig( - formatter="ljspeech", - dataset_name="ljspeech", - path="/raid/datasets/LJSpeech-1.1_24khz/", - meta_file_train="/raid/datasets/LJSpeech-1.1_24khz/metadata.csv", - language="en", -) - -# Add here the configs of the datasets -DATASETS_CONFIG_LIST = [config_dataset] - -# Define the path where XTTS v1.1.1 files will be downloaded -CHECKPOINTS_OUT_PATH = os.path.join(OUT_PATH, "XTTS_v1.1_original_model_files/") -os.makedirs(CHECKPOINTS_OUT_PATH, exist_ok=True) - - -# DVAE files -DVAE_CHECKPOINT_LINK = "https://coqui.gateway.scarf.sh/hf-coqui/XTTS-v1/v1.1.2/dvae.pth" -MEL_NORM_LINK = "https://coqui.gateway.scarf.sh/hf-coqui/XTTS-v1/v1.1.2/mel_stats.pth" - -# Set the path to the downloaded files -DVAE_CHECKPOINT = os.path.join(CHECKPOINTS_OUT_PATH, DVAE_CHECKPOINT_LINK.split("/")[-1]) -MEL_NORM_FILE = os.path.join(CHECKPOINTS_OUT_PATH, MEL_NORM_LINK.split("/")[-1]) - -# download DVAE files if needed -if not os.path.isfile(DVAE_CHECKPOINT) or not os.path.isfile(MEL_NORM_FILE): - print(" > Downloading DVAE files!") - ModelManager._download_model_files([MEL_NORM_LINK, DVAE_CHECKPOINT_LINK], CHECKPOINTS_OUT_PATH, progress_bar=True) - - -# Download XTTS v1.1 checkpoint if needed -TOKENIZER_FILE_LINK = "https://coqui.gateway.scarf.sh/hf-coqui/XTTS-v1/v1.1.2/vocab.json" -XTTS_CHECKPOINT_LINK = "https://coqui.gateway.scarf.sh/hf-coqui/XTTS-v1/v1.1.2/model.pth" - -# XTTS transfer learning parameters: you need to provide the paths of the XTTS model checkpoint that you want to fine-tune. 
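- # For reference: the defaults above give an effective batch of BATCH_SIZE * GRAD_ACUMM_STEPS = 3 * 84 = 252 samples per optimizer step, exactly the recommended minimum.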
-TOKENIZER_FILE = os.path.join(CHECKPOINTS_OUT_PATH, TOKENIZER_FILE_LINK.split("/")[-1]) # vocab.json file -XTTS_CHECKPOINT = os.path.join(CHECKPOINTS_OUT_PATH, XTTS_CHECKPOINT_LINK.split("/")[-1]) # model.pth file - -# download XTTS v1.1 files if needed -if not os.path.isfile(TOKENIZER_FILE) or not os.path.isfile(XTTS_CHECKPOINT): - print(" > Downloading XTTS v1.1 files!") - ModelManager._download_model_files( - [TOKENIZER_FILE_LINK, XTTS_CHECKPOINT_LINK], CHECKPOINTS_OUT_PATH, progress_bar=True - ) - - -# Training sentence generation -SPEAKER_REFERENCE = [ - "./tests/data/ljspeech/wavs/LJ001-0002.wav" # speaker reference to be used in training test sentences -] -LANGUAGE = config_dataset.language - - -def main(): - # init args and config - model_args = GPTArgs( - max_conditioning_length=132300, # 6 secs - min_conditioning_length=66150, # 3 secs - debug_loading_failures=False, - max_wav_length=255995, # ~11.6 seconds - max_text_length=200, - mel_norm_file=MEL_NORM_FILE, - dvae_checkpoint=DVAE_CHECKPOINT, - # tokenizer_file="/raid/datasets/xtts_models/vocab.json", # vocab path of the model that you want to fine-tune - # xtts_checkpoint="https://huggingface.co/coqui/XTTS-v1/resolve/hifigan/model.pth", - xtts_checkpoint=XTTS_CHECKPOINT, # checkpoint path of the model that you want to fine-tune - tokenizer_file=TOKENIZER_FILE, - gpt_num_audio_tokens=8194, - gpt_start_audio_token=8192, - gpt_stop_audio_token=8193, - ) - # define audio config - audio_config = XttsAudioConfig(sample_rate=22050, dvae_sample_rate=22050, output_sample_rate=24000) - # training parameters config - config = GPTTrainerConfig( - output_path=OUT_PATH, - model_args=model_args, - run_name=RUN_NAME, - project_name=PROJECT_NAME, - run_description=""" - GPT XTTS training - """, - dashboard_logger=DASHBOARD_LOGGER, - logger_uri=LOGGER_URI, - audio=audio_config, - batch_size=BATCH_SIZE, - batch_group_size=48, - eval_batch_size=BATCH_SIZE, - num_loader_workers=8, - eval_split_max_size=256, - print_step=50, - plot_step=100, - log_model_step=1000, - save_step=10000, - save_n_checkpoints=1, - save_checkpoints=True, - # target_loss="loss", - print_eval=False, - # Optimizer values like tortoise, pytorch implementation with modifications to not apply WD to non-weight parameters. - optimizer="AdamW", - optimizer_wd_only_on_weights=OPTIMIZER_WD_ONLY_ON_WEIGHTS, - optimizer_params={"betas": [0.9, 0.96], "eps": 1e-8, "weight_decay": 1e-2}, - lr=5e-06, # learning rate - lr_scheduler="MultiStepLR", - # it was adjusted accordingly for the new step scheme - lr_scheduler_params={"milestones": [50000 * 18, 150000 * 18, 300000 * 18], "gamma": 0.5, "last_epoch": -1}, - test_sentences=[ - { - "text": "It took me quite a long time to develop a voice, and now that I have it I'm not going to be silent.", - "speaker_wav": SPEAKER_REFERENCE, - "language": LANGUAGE, - }, - { - "text": "This cake is great. 
It's so delicious and moist.", - "speaker_wav": SPEAKER_REFERENCE, - "language": LANGUAGE, - }, - ], - ) - - # init the model from config - model = GPTTrainer.init_from_config(config) - - # load training samples - train_samples, eval_samples = load_tts_samples( - DATASETS_CONFIG_LIST, - eval_split=True, - eval_split_max_size=config.eval_split_max_size, - eval_split_size=config.eval_split_size, - ) - - # init the trainer and 🚀 - trainer = Trainer( - TrainerArgs( - restore_path=None, # xtts checkpoint is restored via xtts_checkpoint key so no need of restore it using Trainer restore_path parameter - skip_train_epoch=False, - start_with_eval=START_WITH_EVAL, - grad_accum_steps=GRAD_ACUMM_STEPS, - ), - config, - output_path=OUT_PATH, - model=model, - train_samples=train_samples, - eval_samples=eval_samples, - ) - trainer.fit() - - -if __name__ == "__main__": - main() diff --git a/spaces/arxify/RVC-beta-v2-0618/infer_pack/models_onnx.py b/spaces/arxify/RVC-beta-v2-0618/infer_pack/models_onnx.py deleted file mode 100644 index b0ed4a7847b419beef014f9afa1048400a829ebe..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/infer_pack/models_onnx.py +++ /dev/null @@ -1,819 +0,0 @@ -import math, pdb, os -from time import time as ttime -import torch -from torch import nn -from torch.nn import functional as F -from infer_pack import modules -from infer_pack import attentions -from infer_pack import commons -from infer_pack.commons import init_weights, get_padding -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from infer_pack.commons import init_weights -import numpy as np -from infer_pack import commons - - -class TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class TextEncoder768(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(768, hidden_channels) - self.lrelu = 
nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = 
nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class SineGen(torch.nn.Module): - """Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-waveform (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_threshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SineGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__( - self, - samp_rate, - harmonic_num=0, - sine_amp=0.1, - noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False, - ): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return uv - - def forward(self, f0, upp): - """sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0 = f0[:, None].transpose(1, 2) - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * ( - idx + 2 - ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - rad_values = (f0_buf / self.sampling_rate) % 1 ### the % 1 means the n_har products cannot be optimized away in post-processing - rand_ini = torch.rand( - f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device - ) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - tmp_over_one = torch.cumsum(rad_values, 1) # % 1 ##### a % 1 here would mean the cumsum below could no longer be optimized - tmp_over_one *= upp - tmp_over_one = F.interpolate( - tmp_over_one.transpose(2, 1), - scale_factor=upp, - mode="linear", - align_corners=True, -
).transpose(2, 1) - rad_values = F.interpolate( - rad_values.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose( - 2, 1 - ) ####### - tmp_over_one %= 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - sine_waves = torch.sin( - torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi - ) - sine_waves = sine_waves * self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate( - uv.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonics above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threshold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length, 1) - uv (batchsize, length, 1) - """ - - def __init__( - self, - sampling_rate, - harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - # to produce sine waveforms - self.l_sin_gen = SineGen( - sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod - ) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp=None): - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - if self.is_half: - sine_wavs = sine_wavs.half() - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None # noise, uv - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, -
stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMsNSFsidM(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - version, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - if version == "v1": - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - else: - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - self.speaker_map = None - 
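# speaker_map stays None for ordinary single-speaker use; construct_spkmixmap()
- # below fills it with one gin_channels-sized embedding row per speaker, so that
- # forward() can blend several speaker embeddings from a tensor of mixing weights
-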
print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def construct_spkmixmap(self, n_speaker): - self.speaker_map = torch.zeros((n_speaker, 1, 1, self.gin_channels)) - for i in range(n_speaker): - self.speaker_map[i] = self.emb_g(torch.LongTensor([[i]])) - self.speaker_map = self.speaker_map.unsqueeze(0) - - def forward(self, phone, phone_lengths, pitch, nsff0, g, rnd, max_len=None): - if self.speaker_map is not None: # [N, S] * [S, B, 1, H] - g = g.reshape((g.shape[0], g.shape[1], 1, 1, 1)) # [N, S, B, 1, 1] - g = g * self.speaker_map # [N, S, B, 1, H] - g = torch.sum(g, dim=1) # [N, 1, B, 1, H] - g = g.transpose(0, -1).transpose(0, -2).squeeze(0) # [B, H, N] - else: - g = g.unsqueeze(0) - g = self.emb_g(g).transpose(1, 2) - - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * rnd) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class MultiPeriodDiscriminatorV2(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminatorV2, self).__init__() - # periods = [2, 3, 5, 7, 11, 17] - periods = [2, 3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - 
x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/expr/core.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/expr/core.py deleted file mode 100644 index 264c5a1956fbc90a3322f3d69a2f555d416a9d95..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/expr/core.py +++ /dev/null @@ -1,197 +0,0 @@ -from ..utils import SchemaBase - - -class DatumType(object): - """An object to assist in building Vega-Lite Expressions""" - - def __repr__(self): - return "datum" - - def __getattr__(self, attr): - if attr.startswith("__") and attr.endswith("__"): - raise AttributeError(attr) - return GetAttrExpression("datum", attr) - - def __getitem__(self, attr): - return GetItemExpression("datum", attr) - - def __call__(self, datum, **kwargs): - """Specify a datum for use in an encoding""" - return dict(datum=datum, **kwargs) - - -datum = DatumType() - - -def _js_repr(val): - """Return a javascript-safe string representation of val""" - if val is True: - return "true" - elif val is False: - return "false" - elif val is None: - return "null" - else: - return repr(val) - - -class Expression(SchemaBase): - """Expression - - Base object for enabling build-up of Javascript expressions using - a Python syntax. Calling ``repr(obj)`` will return a Javascript - representation of the object and the operations it encodes. 
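-
- For example (an illustrative sketch; ``datum`` is the helper instance
- defined above in this module)::
-
- repr(datum.x + 2) # -> '(datum.x + 2)'
- repr(abs(datum.y)) # -> 'abs(datum.y)'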
- """ - - _schema = {"type": "string"} - - def to_dict(self, *args, **kwargs): - return repr(self) - - def __setattr__(self, attr, val): - # We don't need the setattr magic defined in SchemaBase - return object.__setattr__(self, attr, val) - - def __add__(self, other): - return BinaryExpression("+", self, other) - - def __radd__(self, other): - return BinaryExpression("+", other, self) - - def __sub__(self, other): - return BinaryExpression("-", self, other) - - def __rsub__(self, other): - return BinaryExpression("-", other, self) - - def __mul__(self, other): - return BinaryExpression("*", self, other) - - def __rmul__(self, other): - return BinaryExpression("*", other, self) - - def __truediv__(self, other): - return BinaryExpression("/", self, other) - - def __rtruediv__(self, other): - return BinaryExpression("/", other, self) - - __div__ = __truediv__ - - __rdiv__ = __rtruediv__ - - def __mod__(self, other): - return BinaryExpression("%", self, other) - - def __rmod__(self, other): - return BinaryExpression("%", other, self) - - def __pow__(self, other): - # "**" Javascript operator is not supported in all browsers - return FunctionExpression("pow", (self, other)) - - def __rpow__(self, other): - # "**" Javascript operator is not supported in all browsers - return FunctionExpression("pow", (other, self)) - - def __neg__(self): - return UnaryExpression("-", self) - - def __pos__(self): - return UnaryExpression("+", self) - - # comparison operators - - def __eq__(self, other): - return BinaryExpression("===", self, other) - - def __ne__(self, other): - return BinaryExpression("!==", self, other) - - def __gt__(self, other): - return BinaryExpression(">", self, other) - - def __lt__(self, other): - return BinaryExpression("<", self, other) - - def __ge__(self, other): - return BinaryExpression(">=", self, other) - - def __le__(self, other): - return BinaryExpression("<=", self, other) - - def __abs__(self): - return FunctionExpression("abs", (self,)) - - # logical operators - - def __and__(self, other): - return BinaryExpression("&&", self, other) - - def __rand__(self, other): - return BinaryExpression("&&", other, self) - - def __or__(self, other): - return BinaryExpression("||", self, other) - - def __ror__(self, other): - return BinaryExpression("||", other, self) - - def __invert__(self): - return UnaryExpression("!", self) - - # item access - def __getitem__(self, val): - return GetItemExpression(self, val) - - -class UnaryExpression(Expression): - def __init__(self, op, val): - super(UnaryExpression, self).__init__(op=op, val=val) - - def __repr__(self): - return "({op}{val})".format(op=self.op, val=_js_repr(self.val)) - - -class BinaryExpression(Expression): - def __init__(self, op, lhs, rhs): - super(BinaryExpression, self).__init__(op=op, lhs=lhs, rhs=rhs) - - def __repr__(self): - return "({lhs} {op} {rhs})".format( - op=self.op, lhs=_js_repr(self.lhs), rhs=_js_repr(self.rhs) - ) - - -class FunctionExpression(Expression): - def __init__(self, name, args): - super(FunctionExpression, self).__init__(name=name, args=args) - - def __repr__(self): - args = ",".join(_js_repr(arg) for arg in self.args) - return "{name}({args})".format(name=self.name, args=args) - - -class ConstExpression(Expression): - def __init__(self, name, doc): - self.__doc__ = """{}: {}""".format(name, doc) - super(ConstExpression, self).__init__(name=name, doc=doc) - - def __repr__(self): - return str(self.name) - - -class GetAttrExpression(Expression): - def __init__(self, group, name): - 
super(GetAttrExpression, self).__init__(group=group, name=name) - - def __repr__(self): - return "{}.{}".format(self.group, self.name) - - -class GetItemExpression(Expression): - def __init__(self, group, name): - super(GetItemExpression, self).__init__(group=group, name=name) - - def __repr__(self): - return "{}[{!r}]".format(self.group, self.name) diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/vegalite/v3/schema/core.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/vegalite/v3/schema/core.py deleted file mode 100644 index 7c101a7cc372bdc572f483732e26a0860762fcdc..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/vegalite/v3/schema/core.py +++ /dev/null @@ -1,16039 +0,0 @@ -# The contents of this file are automatically written by -# tools/generate_schema_wrapper.py. Do not modify directly. - -from altair.utils.schemapi import SchemaBase, Undefined, _subclasses - -import pkgutil -import json - -def load_schema(): - """Load the json schema associated with this module's functions""" - return json.loads(pkgutil.get_data(__name__, 'vega-lite-schema.json').decode('utf-8')) - - -class VegaLiteSchema(SchemaBase): - _rootschema = load_schema() - @classmethod - def _default_wrapper_classes(cls): - return _subclasses(VegaLiteSchema) - - -class Root(VegaLiteSchema): - """Root schema wrapper - - anyOf(:class:`TopLevelUnitSpec`, :class:`TopLevelFacetSpec`, :class:`TopLevelLayerSpec`, - :class:`TopLevelRepeatSpec`, :class:`TopLevelConcatSpec`, :class:`TopLevelVConcatSpec`, - :class:`TopLevelHConcatSpec`) - A Vega-Lite top-level specification. - This is the root class for all Vega-Lite specifications. - (The json schema is generated from this type.) - """ - _schema = VegaLiteSchema._rootschema - - def __init__(self, *args, **kwds): - super(Root, self).__init__(*args, **kwds) - - -class Aggregate(VegaLiteSchema): - """Aggregate schema wrapper - - anyOf(:class:`AggregateOp`, :class:`ArgmaxDef`, :class:`ArgminDef`) - """ - _schema = {'$ref': '#/definitions/Aggregate'} - - def __init__(self, *args, **kwds): - super(Aggregate, self).__init__(*args, **kwds) - - -class AggregateOp(Aggregate): - """AggregateOp schema wrapper - - enum('argmax', 'argmin', 'average', 'count', 'distinct', 'max', 'mean', 'median', 'min', - 'missing', 'q1', 'q3', 'ci0', 'ci1', 'stderr', 'stdev', 'stdevp', 'sum', 'valid', 'values', - 'variance', 'variancep') - """ - _schema = {'$ref': '#/definitions/AggregateOp'} - - def __init__(self, *args): - super(AggregateOp, self).__init__(*args) - - -class AggregatedFieldDef(VegaLiteSchema): - """AggregatedFieldDef schema wrapper - - Mapping(required=[op, as]) - - Attributes - ---------- - - op : :class:`AggregateOp` - The aggregation operation to apply to the fields (e.g., sum, average or count). - See the `full list of supported aggregation operations - `__ - for more information. - field : :class:`FieldName` - The data field for which to compute aggregate function. This is required for all - aggregation operations except ``"count"``. - as : :class:`FieldName` - The output field names to use for each aggregated field. 
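-
- Note that ``as`` is a reserved word in Python, so when building this
- wrapper directly it has to be passed via a keyword dictionary; a purely
- illustrative sketch (the field names here are hypothetical)::
-
- AggregatedFieldDef(op='mean', field='price', **{'as': 'mean_price'})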
- """ - _schema = {'$ref': '#/definitions/AggregatedFieldDef'} - - def __init__(self, op=Undefined, field=Undefined, **kwds): - super(AggregatedFieldDef, self).__init__(op=op, field=field, **kwds) - - -class Align(VegaLiteSchema): - """Align schema wrapper - - enum('left', 'center', 'right') - """ - _schema = {'$ref': '#/definitions/Align'} - - def __init__(self, *args): - super(Align, self).__init__(*args) - - -class AnyMark(VegaLiteSchema): - """AnyMark schema wrapper - - anyOf(:class:`CompositeMark`, :class:`CompositeMarkDef`, :class:`Mark`, :class:`MarkDef`) - """ - _schema = {'$ref': '#/definitions/AnyMark'} - - def __init__(self, *args, **kwds): - super(AnyMark, self).__init__(*args, **kwds) - - -class AreaConfig(VegaLiteSchema): - """AreaConfig schema wrapper - - Mapping(required=[]) - - Attributes - ---------- - - align : :class:`Align` - The horizontal alignment of the text. One of ``"left"``, ``"right"``, ``"center"``. - angle : float - The rotation angle of the text, in degrees. - baseline : :class:`TextBaseline` - The vertical alignment of the text. One of ``"top"``, ``"middle"``, ``"bottom"``. - - **Default value:** ``"middle"`` - color : :class:`Color` - Default color. Note that ``fill`` and ``stroke`` have higher precedence than - ``color`` and will override ``color``. - - **Default value:** :raw-html:`` - ``"#4682b4"`` - - **Note:** This property cannot be used in a `style config - `__. - cornerRadius : float - The radius in pixels of rounded rectangle corners. - - **Default value:** ``0`` - cursor : :class:`Cursor` - The mouse cursor used over the mark. Any valid `CSS cursor type - `__ can be used. - dir : :class:`Dir` - The direction of the text. One of ``"ltr"`` (left-to-right) or ``"rtl"`` - (right-to-left). This property determines on which side is truncated in response to - the limit parameter. - - **Default value:** ``"ltr"`` - dx : float - The horizontal offset, in pixels, between the text label and its anchor point. The - offset is applied after rotation by the *angle* property. - dy : float - The vertical offset, in pixels, between the text label and its anchor point. The - offset is applied after rotation by the *angle* property. - ellipsis : string - The ellipsis string for text truncated in response to the limit parameter. - - **Default value:** ``"…"`` - fill : :class:`Color` - Default Fill Color. This has higher precedence than ``config.color`` - - **Default value:** (None) - fillOpacity : float - The fill opacity (value between [0,1]). - - **Default value:** ``1`` - filled : boolean - Whether the mark's color should be used as fill color instead of stroke color. - - **Default value:** ``false`` for ``point``, ``line`` and ``rule`` ; otherwise, - ``true``. - - **Note:** This property cannot be used in a `style config - `__. - font : string - The typeface to set the text in (e.g., ``"Helvetica Neue"`` ). - fontSize : float - The font size, in pixels. - fontStyle : :class:`FontStyle` - The font style (e.g., ``"italic"`` ). - fontWeight : :class:`FontWeight` - The font weight. - This can be either a string (e.g ``"bold"``, ``"normal"`` ) or a number ( ``100``, - ``200``, ``300``, ..., ``900`` where ``"normal"`` = ``400`` and ``"bold"`` = ``700`` - ). - height : float - Height of the marks. - href : string - A URL to load upon mouse click. If defined, the mark acts as a hyperlink. - interpolate : :class:`Interpolate` - The line interpolation method to use for line and area marks. 
One of the following: - - - * ``"linear"`` : piecewise linear segments, as in a polyline. - * ``"linear-closed"`` : close the linear segments to form a polygon. - * ``"step"`` : alternate between horizontal and vertical segments, as in a step - function. - * ``"step-before"`` : alternate between vertical and horizontal segments, as in a - step function. - * ``"step-after"`` : alternate between horizontal and vertical segments, as in a - step function. - * ``"basis"`` : a B-spline, with control point duplication on the ends. - * ``"basis-open"`` : an open B-spline; may not intersect the start or end. - * ``"basis-closed"`` : a closed B-spline, as in a loop. - * ``"cardinal"`` : a Cardinal spline, with control point duplication on the ends. - * ``"cardinal-open"`` : an open Cardinal spline; may not intersect the start or end, - but will intersect other control points. - * ``"cardinal-closed"`` : a closed Cardinal spline, as in a loop. - * ``"bundle"`` : equivalent to basis, except the tension parameter is used to - straighten the spline. - * ``"monotone"`` : cubic interpolation that preserves monotonicity in y. - limit : float - The maximum length of the text mark in pixels. The text value will be automatically - truncated if the rendered size exceeds the limit. - - **Default value:** ``0``, indicating no limit - line : anyOf(boolean, :class:`OverlayMarkDef`) - A flag for overlaying line on top of area marks, or an object defining the - properties of the overlayed lines. - - - If this value is an empty object ( ``{}`` ) or ``true``, lines with default - properties will be used. - - If this value is ``false``, no lines would be automatically added to area marks. - - **Default value:** ``false``. - opacity : float - The overall opacity (value between [0,1]). - - **Default value:** ``0.7`` for non-aggregate plots with ``point``, ``tick``, - ``circle``, or ``square`` marks or layered ``bar`` charts and ``1`` otherwise. - order : anyOf(None, boolean) - For line and trail marks, this ``order`` property can be set to ``null`` or - ``false`` to make the lines use the original order in the data sources. - orient : :class:`Orientation` - The orientation of a non-stacked bar, tick, area, and line charts. - The value is either horizontal (default) or vertical. - - - * For bar, rule and tick, this determines whether the size of the bar and tick - should be applied to x or y dimension. - * For area, this property determines the orient property of the Vega output. - * For line and trail marks, this property determines the sort order of the points in - the line - if ``config.sortLineBy`` is not specified. - For stacked charts, this is always determined by the orientation of the stack; - therefore explicitly specified value will be ignored. - point : anyOf(boolean, :class:`OverlayMarkDef`, enum('transparent')) - A flag for overlaying points on top of line or area marks, or an object defining the - properties of the overlayed points. - - - If this property is ``"transparent"``, transparent points will be used (for - enhancing tooltips and selections). - - If this property is an empty object ( ``{}`` ) or ``true``, filled points with - default properties will be used. - - If this property is ``false``, no points would be automatically added to line or - area marks. - - **Default value:** ``false``. - radius : float - Polar coordinate radial offset, in pixels, of the text label from the origin - determined by the ``x`` and ``y`` properties. - shape : string - Shape of the point marks. 
Supported values include: - - - * plotting shapes: ``"circle"``, ``"square"``, ``"cross"``, ``"diamond"``, - ``"triangle-up"``, ``"triangle-down"``, ``"triangle-right"``, or - ``"triangle-left"``. - * the line symbol ``"stroke"`` - * centered directional shapes ``"arrow"``, ``"wedge"``, or ``"triangle"`` - * a custom `SVG path string - `__ (For correct - sizing, custom shape paths should be defined within a square bounding box with - coordinates ranging from -1 to 1 along both the x and y dimensions.) - - **Default value:** ``"circle"`` - size : float - Default size for marks. - - - * For ``point`` / ``circle`` / ``square``, this represents the pixel area of the - marks. For example: in the case of circles, the radius is determined in part by - the square root of the size value. - * For ``bar``, this represents the band size of the bar, in pixels. - * For ``text``, this represents the font size, in pixels. - - **Default value:** ``30`` for point, circle, square marks; ``rangeStep`` - 1 for bar - marks with discrete dimensions; ``5`` for bar marks with continuous dimensions; - ``11`` for text marks. - stroke : :class:`Color` - Default Stroke Color. This has higher precedence than ``config.color`` - - **Default value:** (None) - strokeCap : :class:`StrokeCap` - The stroke cap for line ending style. One of ``"butt"``, ``"round"``, or - ``"square"``. - - **Default value:** ``"square"`` - strokeDash : List(float) - An array of alternating stroke, space lengths for creating dashed or dotted lines. - strokeDashOffset : float - The offset (in pixels) into which to begin drawing with the stroke dash array. - strokeJoin : :class:`StrokeJoin` - The stroke line join method. One of ``"miter"``, ``"round"`` or ``"bevel"``. - - **Default value:** ``"miter"`` - strokeMiterLimit : float - The miter limit at which to bevel a line join. - strokeOpacity : float - The stroke opacity (value between [0,1]). - - **Default value:** ``1`` - strokeWidth : float - The stroke width, in pixels. - tension : float - Depending on the interpolation type, sets the tension parameter (for line and area - marks). - text : string - Placeholder text if the ``text`` channel is not specified - theta : float - Polar coordinate angle, in radians, of the text label from the origin determined by - the ``x`` and ``y`` properties. Values for ``theta`` follow the same convention of - ``arc`` mark ``startAngle`` and ``endAngle`` properties: angles are measured in - radians, with ``0`` indicating "north". - tooltip : anyOf(:class:`Value`, :class:`TooltipContent`, None) - The tooltip text string to show upon mouse hover or an object defining which fields - should the tooltip be derived from. - - - * If ``tooltip`` is ``{"content": "encoding"}``, then all fields from ``encoding`` - will be used. - * If ``tooltip`` is ``{"content": "data"}``, then all fields that appear in the - highlighted data point will be used. - * If set to ``null``, then no tooltip will be used. - width : float - Width of the marks. - x : anyOf(float, enum('width')) - X coordinates of the marks, or width of horizontal ``"bar"`` and ``"area"`` without - specified ``x2`` or ``width``. - - The ``value`` of this channel can be a number or a string ``"width"`` for the width - of the plot. - x2 : anyOf(float, enum('width')) - X2 coordinates for ranged ``"area"``, ``"bar"``, ``"rect"``, and ``"rule"``. - - The ``value`` of this channel can be a number or a string ``"width"`` for the width - of the plot. 
- y : anyOf(float, enum('height')) - Y coordinates of the marks, or height of vertical ``"bar"`` and ``"area"`` without - specified ``y2`` or ``height``. - - The ``value`` of this channel can be a number or a string ``"height"`` for the - height of the plot. - y2 : anyOf(float, enum('width')) - Y2 coordinates for ranged ``"area"``, ``"bar"``, ``"rect"``, and ``"rule"``. - - The ``value`` of this channel can be a number or a string ``"height"`` for the - height of the plot. - """ - _schema = {'$ref': '#/definitions/AreaConfig'} - - def __init__(self, align=Undefined, angle=Undefined, baseline=Undefined, color=Undefined, - cornerRadius=Undefined, cursor=Undefined, dir=Undefined, dx=Undefined, dy=Undefined, - ellipsis=Undefined, fill=Undefined, fillOpacity=Undefined, filled=Undefined, - font=Undefined, fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, - height=Undefined, href=Undefined, interpolate=Undefined, limit=Undefined, - line=Undefined, opacity=Undefined, order=Undefined, orient=Undefined, point=Undefined, - radius=Undefined, shape=Undefined, size=Undefined, stroke=Undefined, - strokeCap=Undefined, strokeDash=Undefined, strokeDashOffset=Undefined, - strokeJoin=Undefined, strokeMiterLimit=Undefined, strokeOpacity=Undefined, - strokeWidth=Undefined, tension=Undefined, text=Undefined, theta=Undefined, - tooltip=Undefined, width=Undefined, x=Undefined, x2=Undefined, y=Undefined, - y2=Undefined, **kwds): - super(AreaConfig, self).__init__(align=align, angle=angle, baseline=baseline, color=color, - cornerRadius=cornerRadius, cursor=cursor, dir=dir, dx=dx, - dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, - filled=filled, font=font, fontSize=fontSize, - fontStyle=fontStyle, fontWeight=fontWeight, height=height, - href=href, interpolate=interpolate, limit=limit, line=line, - opacity=opacity, order=order, orient=orient, point=point, - radius=radius, shape=shape, size=size, stroke=stroke, - strokeCap=strokeCap, strokeDash=strokeDash, - strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOpacity=strokeOpacity, - strokeWidth=strokeWidth, tension=tension, text=text, - theta=theta, tooltip=tooltip, width=width, x=x, x2=x2, y=y, - y2=y2, **kwds) - - -class ArgmaxDef(Aggregate): - """ArgmaxDef schema wrapper - - Mapping(required=[argmax]) - - Attributes - ---------- - - argmax : string - - """ - _schema = {'$ref': '#/definitions/ArgmaxDef'} - - def __init__(self, argmax=Undefined, **kwds): - super(ArgmaxDef, self).__init__(argmax=argmax, **kwds) - - -class ArgminDef(Aggregate): - """ArgminDef schema wrapper - - Mapping(required=[argmin]) - - Attributes - ---------- - - argmin : string - - """ - _schema = {'$ref': '#/definitions/ArgminDef'} - - def __init__(self, argmin=Undefined, **kwds): - super(ArgminDef, self).__init__(argmin=argmin, **kwds) - - -class AutoSizeParams(VegaLiteSchema): - """AutoSizeParams schema wrapper - - Mapping(required=[]) - - Attributes - ---------- - - contains : enum('content', 'padding') - Determines how size calculation should be performed, one of ``"content"`` or - ``"padding"``. The default setting ( ``"content"`` ) interprets the width and height - settings as the data rectangle (plotting) dimensions, to which padding is then - added. In contrast, the ``"padding"`` setting includes the padding within the view - size calculations, such that the width and height settings indicate the **total** - intended size of the view. 
- - **Default value** : ``"content"`` - resize : boolean - A boolean flag indicating if autosize layout should be re-calculated on every view - update. - - **Default value** : ``false`` - type : :class:`AutosizeType` - The sizing format type. One of ``"pad"``, ``"fit"`` or ``"none"``. See the `autosize - type `__ documentation for - descriptions of each. - - **Default value** : ``"pad"`` - """ - _schema = {'$ref': '#/definitions/AutoSizeParams'} - - def __init__(self, contains=Undefined, resize=Undefined, type=Undefined, **kwds): - super(AutoSizeParams, self).__init__(contains=contains, resize=resize, type=type, **kwds) - - -class AutosizeType(VegaLiteSchema): - """AutosizeType schema wrapper - - enum('pad', 'fit', 'none') - """ - _schema = {'$ref': '#/definitions/AutosizeType'} - - def __init__(self, *args): - super(AutosizeType, self).__init__(*args) - - -class Axis(VegaLiteSchema): - """Axis schema wrapper - - Mapping(required=[]) - - Attributes - ---------- - - bandPosition : float - An interpolation fraction indicating where, for ``band`` scales, axis ticks should - be positioned. A value of ``0`` places ticks at the left edge of their bands. A - value of ``0.5`` places ticks in the middle of their bands. - - **Default value:** ``0.5`` - domain : boolean - A boolean flag indicating if the domain (the axis baseline) should be included as - part of the axis. - - **Default value:** ``true`` - domainColor : :class:`Color` - Color of axis domain line. - - **Default value:** ``"gray"``. - domainDash : List(float) - An array of alternating [stroke, space] lengths for dashed domain lines. - domainDashOffset : float - The pixel offset at which to start drawing with the domain dash array. - domainOpacity : float - Opacity of the axis domain line. - domainWidth : float - Stroke width of axis domain line - - **Default value:** ``1`` - format : string - The text formatting pattern for labels of guides (axes, legends, headers) and text - marks. - - - * If the format type is ``"number"`` (e.g., for quantitative fields), this is D3's - `number format pattern `__. - * If the format type is ``"time"`` (e.g., for temporal fields), this is D3's `time - format pattern `__. - - See the `format documentation `__ - for more examples. - - **Default value:** Derived from `numberFormat - `__ config for number - format and from `timeFormat - `__ config for time - format. - formatType : enum('number', 'time') - The format type for labels ( ``"number"`` or ``"time"`` ). - - **Default value:** - - - * ``"time"`` for temporal fields and ordinal and nomimal fields with ``timeUnit``. - * ``"number"`` for quantitative fields as well as ordinal and nomimal fields without - ``timeUnit``. - grid : boolean - A boolean flag indicating if grid lines should be included as part of the axis - - **Default value:** ``true`` for `continuous scales - `__ that are not - binned; otherwise, ``false``. - gridColor : :class:`Color` - Color of gridlines. - - **Default value:** ``"lightGray"``. - gridDash : List(float) - An array of alternating [stroke, space] lengths for dashed grid lines. - gridDashOffset : float - The pixel offset at which to start drawing with the grid dash array. - gridOpacity : float - The stroke opacity of grid (value between [0,1]) - - **Default value:** ``1`` - gridWidth : float - The grid width, in pixels. - - **Default value:** ``1`` - labelAlign : :class:`Align` - Horizontal text alignment of axis tick labels, overriding the default setting for - the current axis orientation. 
- labelAngle : float
- The rotation angle of the axis labels.
-
- **Default value:** ``-90`` for nominal and ordinal fields; ``0`` otherwise.
- labelBaseline : :class:`TextBaseline`
- Vertical text baseline of axis tick labels, overriding the default setting for the
- current axis orientation. Can be ``"top"``, ``"middle"``, ``"bottom"``, or
- ``"alphabetic"``.
- labelBound : anyOf(float, boolean)
- Indicates if labels should be hidden if they exceed the axis range. If ``false``
- (the default) no bounds overlap analysis is performed. If ``true``, labels will be
- hidden if they exceed the axis range by more than 1 pixel. If this property is a
- number, it specifies the pixel tolerance: the maximum amount by which a label
- bounding box may exceed the axis range.
-
- **Default value:** ``false``.
- labelColor : :class:`Color`
- The color of the tick label, can be in hex color code or regular color name.
- labelFlush : anyOf(boolean, float)
- Indicates if the first and last axis labels should be aligned flush with the scale
- range. Flush alignment for a horizontal axis will left-align the first label and
- right-align the last label. For vertical axes, bottom and top text baselines are
- applied instead. If this property is a number, it also indicates the number of
- pixels by which to offset the first and last labels; for example, a value of 2 will
- flush-align the first and last labels and also push them 2 pixels outward from the
- center of the axis. The additional adjustment can sometimes help the labels better
- visually group with corresponding axis ticks.
-
- **Default value:** ``true`` for axis of a continuous x-scale. Otherwise, ``false``.
- labelFlushOffset : float
- Indicates the number of pixels by which to offset flush-adjusted labels. For
- example, a value of ``2`` will push flush-adjusted labels 2 pixels outward from the
- center of the axis. Offsets can help the labels better visually group with
- corresponding axis ticks.
-
- **Default value:** ``0``.
- labelFont : string
- The font of the tick label.
- labelFontSize : float
- The font size of the label, in pixels.
- labelFontStyle : :class:`FontStyle`
- Font style of the axis tick labels.
- labelFontWeight : :class:`FontWeight`
- Font weight of axis tick labels.
- labelLimit : float
- Maximum allowed pixel width of axis tick labels.
-
- **Default value:** ``180``
- labelOpacity : float
- The opacity of the labels.
- labelOverlap : :class:`LabelOverlap`
- The strategy to use for resolving overlap of axis labels. If ``false`` (the
- default), no overlap reduction is attempted. If set to ``true`` or ``"parity"``, a
- strategy of removing every other label is used (this works well for standard linear
- axes). If set to ``"greedy"``, a linear scan of the labels is performed, removing
- any label that overlaps with the last visible label (this often works better for
- log-scaled axes).
-
- **Default value:** ``true`` for non-nominal fields with non-log scales; ``"greedy"``
- for log scales; otherwise ``false``.
- labelPadding : float
- The padding, in pixels, between axis and text labels.
-
- **Default value:** ``2``
- labelSeparation : float
- The minimum separation that must be between label bounding boxes for them to be
- considered non-overlapping (default ``0`` ). This property is ignored if
- *labelOverlap* resolution is not enabled.
- labels : boolean
- A boolean flag indicating if labels should be included as part of the axis.
-
- **Default value:** ``true``.
- maxExtent : float - The maximum extent in pixels that axis ticks and labels should use. This determines - a maximum offset value for axis titles. - - **Default value:** ``undefined``. - minExtent : float - The minimum extent in pixels that axis ticks and labels should use. This determines - a minimum offset value for axis titles. - - **Default value:** ``30`` for y-axis; ``undefined`` for x-axis. - offset : float - The offset, in pixels, by which to displace the axis from the edge of the enclosing - group or data rectangle. - - **Default value:** derived from the `axis config - `__ 's - ``offset`` ( ``0`` by default) - orient : :class:`AxisOrient` - The orientation of the axis. One of ``"top"``, ``"bottom"``, ``"left"`` or - ``"right"``. The orientation can be used to further specialize the axis type (e.g., - a y-axis oriented towards the right edge of the chart). - - **Default value:** ``"bottom"`` for x-axes and ``"left"`` for y-axes. - position : float - The anchor position of the axis in pixels. For x-axes with top or bottom - orientation, this sets the axis group x coordinate. For y-axes with left or right - orientation, this sets the axis group y coordinate. - - **Default value** : ``0`` - tickColor : :class:`Color` - The color of the axis's tick. - - **Default value:** ``"gray"`` - tickCount : float - A desired number of ticks, for axes visualizing quantitative scales. The resulting - number may be different so that values are "nice" (multiples of 2, 5, 10) and lie - within the underlying scale's range. - tickDash : List(float) - An array of alternating [stroke, space] lengths for dashed tick mark lines. - tickDashOffset : float - The pixel offset at which to start drawing with the tick mark dash array. - tickExtra : boolean - Boolean flag indicating if an extra axis tick should be added for the initial - position of the axis. This flag is useful for styling axes for ``band`` scales such - that ticks are placed on band boundaries rather in the middle of a band. Use in - conjunction with ``"bandPosition": 1`` and an axis ``"padding"`` value of ``0``. - tickMinStep : float - The minimum desired step between axis ticks, in terms of scale domain values. For - example, a value of ``1`` indicates that ticks should not be less than 1 unit apart. - If ``tickMinStep`` is specified, the ``tickCount`` value will be adjusted, if - necessary, to enforce the minimum step value. - - **Default value** : ``undefined`` - tickOffset : float - Position offset in pixels to apply to ticks, labels, and gridlines. - tickOpacity : float - Opacity of the ticks. - tickRound : boolean - Boolean flag indicating if pixel position values should be rounded to the nearest - integer. - - **Default value:** ``true`` - tickSize : float - The size in pixels of axis ticks. - - **Default value:** ``5`` - tickWidth : float - The width, in pixels, of ticks. - - **Default value:** ``1`` - ticks : boolean - Boolean value that determines whether the axis should include ticks. - - **Default value:** ``true`` - title : anyOf(string, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). 
- Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - titleAlign : :class:`Align` - Horizontal text alignment of axis titles. - titleAnchor : :class:`TitleAnchor` - Text anchor position for placing axis titles. - titleAngle : float - Angle in degrees of axis titles. - titleBaseline : :class:`TextBaseline` - Vertical text baseline for axis titles. - titleColor : :class:`Color` - Color of the title, can be in hex color code or regular color name. - titleFont : string - Font of the title. (e.g., ``"Helvetica Neue"`` ). - titleFontSize : float - Font size of the title. - titleFontStyle : :class:`FontStyle` - Font style of the title. - titleFontWeight : :class:`FontWeight` - Font weight of the title. - This can be either a string (e.g ``"bold"``, ``"normal"`` ) or a number ( ``100``, - ``200``, ``300``, ..., ``900`` where ``"normal"`` = ``400`` and ``"bold"`` = ``700`` - ). - titleLimit : float - Maximum allowed pixel width of axis titles. - titleOpacity : float - Opacity of the axis title. - titlePadding : float - The padding, in pixels, between title and axis. - titleX : float - X-coordinate of the axis title relative to the axis group. - titleY : float - Y-coordinate of the axis title relative to the axis group. - values : anyOf(List(float), List(string), List(boolean), List(:class:`DateTime`)) - Explicitly set the visible axis tick values. - zindex : float - A non-negative integer indicating the z-index of the axis. - If zindex is 0, axes should be drawn behind all chart elements. - To put them in front, use ``"zindex = 1"``. - - **Default value:** ``1`` (in front of the marks) for actual axis and ``0`` (behind - the marks) for grids. 
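-
- A minimal illustrative sketch (``source`` and the property values below
- are hypothetical, not part of the schema)::
-
- import altair as alt
- alt.Chart(source).mark_line().encode(
- x=alt.X('date:T', axis=alt.Axis(format='%Y', labelAngle=-45)),
- y='price:Q',
- )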
- """ - _schema = {'$ref': '#/definitions/Axis'} - - def __init__(self, bandPosition=Undefined, domain=Undefined, domainColor=Undefined, - domainDash=Undefined, domainDashOffset=Undefined, domainOpacity=Undefined, - domainWidth=Undefined, format=Undefined, formatType=Undefined, grid=Undefined, - gridColor=Undefined, gridDash=Undefined, gridDashOffset=Undefined, - gridOpacity=Undefined, gridWidth=Undefined, labelAlign=Undefined, labelAngle=Undefined, - labelBaseline=Undefined, labelBound=Undefined, labelColor=Undefined, - labelFlush=Undefined, labelFlushOffset=Undefined, labelFont=Undefined, - labelFontSize=Undefined, labelFontStyle=Undefined, labelFontWeight=Undefined, - labelLimit=Undefined, labelOpacity=Undefined, labelOverlap=Undefined, - labelPadding=Undefined, labelSeparation=Undefined, labels=Undefined, - maxExtent=Undefined, minExtent=Undefined, offset=Undefined, orient=Undefined, - position=Undefined, tickColor=Undefined, tickCount=Undefined, tickDash=Undefined, - tickDashOffset=Undefined, tickExtra=Undefined, tickMinStep=Undefined, - tickOffset=Undefined, tickOpacity=Undefined, tickRound=Undefined, tickSize=Undefined, - tickWidth=Undefined, ticks=Undefined, title=Undefined, titleAlign=Undefined, - titleAnchor=Undefined, titleAngle=Undefined, titleBaseline=Undefined, - titleColor=Undefined, titleFont=Undefined, titleFontSize=Undefined, - titleFontStyle=Undefined, titleFontWeight=Undefined, titleLimit=Undefined, - titleOpacity=Undefined, titlePadding=Undefined, titleX=Undefined, titleY=Undefined, - values=Undefined, zindex=Undefined, **kwds): - super(Axis, self).__init__(bandPosition=bandPosition, domain=domain, domainColor=domainColor, - domainDash=domainDash, domainDashOffset=domainDashOffset, - domainOpacity=domainOpacity, domainWidth=domainWidth, format=format, - formatType=formatType, grid=grid, gridColor=gridColor, - gridDash=gridDash, gridDashOffset=gridDashOffset, - gridOpacity=gridOpacity, gridWidth=gridWidth, labelAlign=labelAlign, - labelAngle=labelAngle, labelBaseline=labelBaseline, - labelBound=labelBound, labelColor=labelColor, labelFlush=labelFlush, - labelFlushOffset=labelFlushOffset, labelFont=labelFont, - labelFontSize=labelFontSize, labelFontStyle=labelFontStyle, - labelFontWeight=labelFontWeight, labelLimit=labelLimit, - labelOpacity=labelOpacity, labelOverlap=labelOverlap, - labelPadding=labelPadding, labelSeparation=labelSeparation, - labels=labels, maxExtent=maxExtent, minExtent=minExtent, - offset=offset, orient=orient, position=position, tickColor=tickColor, - tickCount=tickCount, tickDash=tickDash, - tickDashOffset=tickDashOffset, tickExtra=tickExtra, - tickMinStep=tickMinStep, tickOffset=tickOffset, - tickOpacity=tickOpacity, tickRound=tickRound, tickSize=tickSize, - tickWidth=tickWidth, ticks=ticks, title=title, titleAlign=titleAlign, - titleAnchor=titleAnchor, titleAngle=titleAngle, - titleBaseline=titleBaseline, titleColor=titleColor, - titleFont=titleFont, titleFontSize=titleFontSize, - titleFontStyle=titleFontStyle, titleFontWeight=titleFontWeight, - titleLimit=titleLimit, titleOpacity=titleOpacity, - titlePadding=titlePadding, titleX=titleX, titleY=titleY, - values=values, zindex=zindex, **kwds) - - -class AxisConfig(VegaLiteSchema): - """AxisConfig schema wrapper - - Mapping(required=[]) - - Attributes - ---------- - - bandPosition : float - An interpolation fraction indicating where, for ``band`` scales, axis ticks should - be positioned. A value of ``0`` places ticks at the left edge of their bands. 
A - value of ``0.5`` places ticks in the middle of their bands. - - **Default value:** ``0.5`` - domain : boolean - A boolean flag indicating if the domain (the axis baseline) should be included as - part of the axis. - - **Default value:** ``true`` - domainColor : :class:`Color` - Color of axis domain line. - - **Default value:** ``"gray"``. - domainDash : List(float) - An array of alternating [stroke, space] lengths for dashed domain lines. - domainDashOffset : float - The pixel offset at which to start drawing with the domain dash array. - domainOpacity : float - Opacity of the axis domain line. - domainWidth : float - Stroke width of axis domain line - - **Default value:** ``1`` - grid : boolean - A boolean flag indicating if grid lines should be included as part of the axis - - **Default value:** ``true`` for `continuous scales - `__ that are not - binned; otherwise, ``false``. - gridColor : :class:`Color` - Color of gridlines. - - **Default value:** ``"lightGray"``. - gridDash : List(float) - An array of alternating [stroke, space] lengths for dashed grid lines. - gridDashOffset : float - The pixel offset at which to start drawing with the grid dash array. - gridOpacity : float - The stroke opacity of grid (value between [0,1]) - - **Default value:** ``1`` - gridWidth : float - The grid width, in pixels. - - **Default value:** ``1`` - labelAlign : :class:`Align` - Horizontal text alignment of axis tick labels, overriding the default setting for - the current axis orientation. - labelAngle : float - The rotation angle of the axis labels. - - **Default value:** ``-90`` for nominal and ordinal fields; ``0`` otherwise. - labelBaseline : :class:`TextBaseline` - Vertical text baseline of axis tick labels, overriding the default setting for the - current axis orientation. Can be ``"top"``, ``"middle"``, ``"bottom"``, or - ``"alphabetic"``. - labelBound : anyOf(float, boolean) - Indicates if labels should be hidden if they exceed the axis range. If ``false`` - (the default) no bounds overlap analysis is performed. If ``true``, labels will be - hidden if they exceed the axis range by more than 1 pixel. If this property is a - number, it specifies the pixel tolerance: the maximum amount by which a label - bounding box may exceed the axis range. - - **Default value:** ``false``. - labelColor : :class:`Color` - The color of the tick label, can be in hex color code or regular color name. - labelFlush : anyOf(boolean, float) - Indicates if the first and last axis labels should be aligned flush with the scale - range. Flush alignment for a horizontal axis will left-align the first label and - right-align the last label. For vertical axes, bottom and top text baselines are - applied instead. If this property is a number, it also indicates the number of - pixels by which to offset the first and last labels; for example, a value of 2 will - flush-align the first and last labels and also push them 2 pixels outward from the - center of the axis. The additional adjustment can sometimes help the labels better - visually group with corresponding axis ticks. - - **Default value:** ``true`` for axis of a continuous x-scale. Otherwise, ``false``. - labelFlushOffset : float - Indicates the number of pixels by which to offset flush-adjusted labels. For - example, a value of ``2`` will push flush-adjusted labels 2 pixels outward from the - center of the axis. Offsets can help the labels better visually group with - corresponding axis ticks. - - **Default value:** ``0``. 
- labelFont : string
- The font of the tick label.
- labelFontSize : float
- The font size of the label, in pixels.
- labelFontStyle : :class:`FontStyle`
- Font style of the axis tick labels.
- labelFontWeight : :class:`FontWeight`
- Font weight of axis tick labels.
- labelLimit : float
- Maximum allowed pixel width of axis tick labels.
-
- **Default value:** ``180``
- labelOpacity : float
- The opacity of the labels.
- labelOverlap : :class:`LabelOverlap`
- The strategy to use for resolving overlap of axis labels. If ``false`` (the
- default), no overlap reduction is attempted. If set to ``true`` or ``"parity"``, a
- strategy of removing every other label is used (this works well for standard linear
- axes). If set to ``"greedy"``, a linear scan of the labels is performed, removing
- any label that overlaps with the last visible label (this often works better for
- log-scaled axes).
-
- **Default value:** ``true`` for non-nominal fields with non-log scales; ``"greedy"``
- for log scales; otherwise ``false``.
- labelPadding : float
- The padding, in pixels, between axis and text labels.
-
- **Default value:** ``2``
- labelSeparation : float
- The minimum separation that must be between label bounding boxes for them to be
- considered non-overlapping (default ``0`` ). This property is ignored if
- *labelOverlap* resolution is not enabled.
- labels : boolean
- A boolean flag indicating if labels should be included as part of the axis.
-
- **Default value:** ``true``.
- maxExtent : float
- The maximum extent in pixels that axis ticks and labels should use. This determines
- a maximum offset value for axis titles.
-
- **Default value:** ``undefined``.
- minExtent : float
- The minimum extent in pixels that axis ticks and labels should use. This determines
- a minimum offset value for axis titles.
-
- **Default value:** ``30`` for y-axis; ``undefined`` for x-axis.
- orient : :class:`AxisOrient`
- The orientation of the axis. One of ``"top"``, ``"bottom"``, ``"left"`` or
- ``"right"``. The orientation can be used to further specialize the axis type (e.g.,
- a y-axis oriented towards the right edge of the chart).
-
- **Default value:** ``"bottom"`` for x-axes and ``"left"`` for y-axes.
- shortTimeLabels : boolean
- Whether month names and weekday names should be abbreviated.
-
- **Default value:** ``false``
- tickColor : :class:`Color`
- The color of the axis's tick.
-
- **Default value:** ``"gray"``
- tickDash : List(float)
- An array of alternating [stroke, space] lengths for dashed tick mark lines.
- tickDashOffset : float
- The pixel offset at which to start drawing with the tick mark dash array.
- tickExtra : boolean
- Boolean flag indicating if an extra axis tick should be added for the initial
- position of the axis. This flag is useful for styling axes for ``band`` scales such
- that ticks are placed on band boundaries rather than in the middle of a band. Use in
- conjunction with ``"bandPosition": 1`` and an axis ``"padding"`` value of ``0``.
- tickOffset : float
- Position offset in pixels to apply to ticks, labels, and gridlines.
- tickOpacity : float
- Opacity of the ticks.
- tickRound : boolean
- Boolean flag indicating if pixel position values should be rounded to the nearest
- integer.
-
- **Default value:** ``true``
- tickSize : float
- The size in pixels of axis ticks.
-
- **Default value:** ``5``
- tickWidth : float
- The width, in pixels, of ticks.
-
- **Default value:** ``1``
- ticks : boolean
- Boolean value that determines whether the axis should include ticks.
- - **Default value:** ``true`` - title : None - Set to null to disable title for the axis, legend, or header. - titleAlign : :class:`Align` - Horizontal text alignment of axis titles. - titleAnchor : :class:`TitleAnchor` - Text anchor position for placing axis titles. - titleAngle : float - Angle in degrees of axis titles. - titleBaseline : :class:`TextBaseline` - Vertical text baseline for axis titles. - titleColor : :class:`Color` - Color of the title, can be in hex color code or regular color name. - titleFont : string - Font of the title. (e.g., ``"Helvetica Neue"`` ). - titleFontSize : float - Font size of the title. - titleFontStyle : :class:`FontStyle` - Font style of the title. - titleFontWeight : :class:`FontWeight` - Font weight of the title. - This can be either a string (e.g ``"bold"``, ``"normal"`` ) or a number ( ``100``, - ``200``, ``300``, ..., ``900`` where ``"normal"`` = ``400`` and ``"bold"`` = ``700`` - ). - titleLimit : float - Maximum allowed pixel width of axis titles. - titleOpacity : float - Opacity of the axis title. - titlePadding : float - The padding, in pixels, between title and axis. - titleX : float - X-coordinate of the axis title relative to the axis group. - titleY : float - Y-coordinate of the axis title relative to the axis group. - """ - _schema = {'$ref': '#/definitions/AxisConfig'} - - def __init__(self, bandPosition=Undefined, domain=Undefined, domainColor=Undefined, - domainDash=Undefined, domainDashOffset=Undefined, domainOpacity=Undefined, - domainWidth=Undefined, grid=Undefined, gridColor=Undefined, gridDash=Undefined, - gridDashOffset=Undefined, gridOpacity=Undefined, gridWidth=Undefined, - labelAlign=Undefined, labelAngle=Undefined, labelBaseline=Undefined, - labelBound=Undefined, labelColor=Undefined, labelFlush=Undefined, - labelFlushOffset=Undefined, labelFont=Undefined, labelFontSize=Undefined, - labelFontStyle=Undefined, labelFontWeight=Undefined, labelLimit=Undefined, - labelOpacity=Undefined, labelOverlap=Undefined, labelPadding=Undefined, - labelSeparation=Undefined, labels=Undefined, maxExtent=Undefined, minExtent=Undefined, - orient=Undefined, shortTimeLabels=Undefined, tickColor=Undefined, tickDash=Undefined, - tickDashOffset=Undefined, tickExtra=Undefined, tickOffset=Undefined, - tickOpacity=Undefined, tickRound=Undefined, tickSize=Undefined, tickWidth=Undefined, - ticks=Undefined, title=Undefined, titleAlign=Undefined, titleAnchor=Undefined, - titleAngle=Undefined, titleBaseline=Undefined, titleColor=Undefined, - titleFont=Undefined, titleFontSize=Undefined, titleFontStyle=Undefined, - titleFontWeight=Undefined, titleLimit=Undefined, titleOpacity=Undefined, - titlePadding=Undefined, titleX=Undefined, titleY=Undefined, **kwds): - super(AxisConfig, self).__init__(bandPosition=bandPosition, domain=domain, - domainColor=domainColor, domainDash=domainDash, - domainDashOffset=domainDashOffset, domainOpacity=domainOpacity, - domainWidth=domainWidth, grid=grid, gridColor=gridColor, - gridDash=gridDash, gridDashOffset=gridDashOffset, - gridOpacity=gridOpacity, gridWidth=gridWidth, - labelAlign=labelAlign, labelAngle=labelAngle, - labelBaseline=labelBaseline, labelBound=labelBound, - labelColor=labelColor, labelFlush=labelFlush, - labelFlushOffset=labelFlushOffset, labelFont=labelFont, - labelFontSize=labelFontSize, labelFontStyle=labelFontStyle, - labelFontWeight=labelFontWeight, labelLimit=labelLimit, - labelOpacity=labelOpacity, labelOverlap=labelOverlap, - labelPadding=labelPadding, labelSeparation=labelSeparation, - labels=labels, 
maxExtent=maxExtent, minExtent=minExtent, - orient=orient, shortTimeLabels=shortTimeLabels, - tickColor=tickColor, tickDash=tickDash, - tickDashOffset=tickDashOffset, tickExtra=tickExtra, - tickOffset=tickOffset, tickOpacity=tickOpacity, - tickRound=tickRound, tickSize=tickSize, tickWidth=tickWidth, - ticks=ticks, title=title, titleAlign=titleAlign, - titleAnchor=titleAnchor, titleAngle=titleAngle, - titleBaseline=titleBaseline, titleColor=titleColor, - titleFont=titleFont, titleFontSize=titleFontSize, - titleFontStyle=titleFontStyle, titleFontWeight=titleFontWeight, - titleLimit=titleLimit, titleOpacity=titleOpacity, - titlePadding=titlePadding, titleX=titleX, titleY=titleY, **kwds) - - -class AxisOrient(VegaLiteSchema): - """AxisOrient schema wrapper - - enum('top', 'bottom', 'left', 'right') - """ - _schema = {'$ref': '#/definitions/AxisOrient'} - - def __init__(self, *args): - super(AxisOrient, self).__init__(*args) - - -class AxisResolveMap(VegaLiteSchema): - """AxisResolveMap schema wrapper - - Mapping(required=[]) - - Attributes - ---------- - - x : :class:`ResolveMode` - - y : :class:`ResolveMode` - - """ - _schema = {'$ref': '#/definitions/AxisResolveMap'} - - def __init__(self, x=Undefined, y=Undefined, **kwds): - super(AxisResolveMap, self).__init__(x=x, y=y, **kwds) - - -class BaseLegendLayout(VegaLiteSchema): - """BaseLegendLayout schema wrapper - - Mapping(required=[]) - - Attributes - ---------- - - anchor : :class:`TitleAnchor` - The anchor point for legend orient group layout. - bounds : :class:`LayoutBounds` - The bounds calculation to use for legend orient group layout. - center : anyOf(boolean, :class:`SignalRef`) - A flag to center legends within a shared orient group. - direction : anyOf(:class:`Orientation`, :class:`SignalRef`) - The layout direction for legend orient group layout. - margin : anyOf(float, :class:`SignalRef`) - The pixel margin between legends within an orient group. - offset : anyOf(float, :class:`SignalRef`) - The pixel offset from the chart body for a legend orient group. - """ - _schema = {'$ref': '#/definitions/BaseLegendLayout'} - - def __init__(self, anchor=Undefined, bounds=Undefined, center=Undefined, direction=Undefined, - margin=Undefined, offset=Undefined, **kwds): - super(BaseLegendLayout, self).__init__(anchor=anchor, bounds=bounds, center=center, - direction=direction, margin=margin, offset=offset, **kwds) - - -class BaseMarkConfig(VegaLiteSchema): - """BaseMarkConfig schema wrapper - - Mapping(required=[]) - - Attributes - ---------- - - align : :class:`Align` - The horizontal alignment of the text. One of ``"left"``, ``"right"``, ``"center"``. - angle : float - The rotation angle of the text, in degrees. - baseline : :class:`TextBaseline` - The vertical alignment of the text. One of ``"top"``, ``"middle"``, ``"bottom"``. - - **Default value:** ``"middle"`` - cornerRadius : float - The radius in pixels of rounded rectangle corners. - - **Default value:** ``0`` - cursor : :class:`Cursor` - The mouse cursor used over the mark. Any valid `CSS cursor type - `__ can be used. - dir : :class:`Dir` - The direction of the text. One of ``"ltr"`` (left-to-right) or ``"rtl"`` - (right-to-left). This property determines which side of the text is truncated in response to - the limit parameter. - - **Default value:** ``"ltr"`` - dx : float - The horizontal offset, in pixels, between the text label and its anchor point. The - offset is applied after rotation by the *angle* property. 
- dy : float - The vertical offset, in pixels, between the text label and its anchor point. The - offset is applied after rotation by the *angle* property. - ellipsis : string - The ellipsis string for text truncated in response to the limit parameter. - - **Default value:** ``"…"`` - fill : :class:`Color` - Default Fill Color. This has higher precedence than ``config.color``. - - **Default value:** (None) - fillOpacity : float - The fill opacity (value between [0,1]). - - **Default value:** ``1`` - font : string - The typeface to set the text in (e.g., ``"Helvetica Neue"`` ). - fontSize : float - The font size, in pixels. - fontStyle : :class:`FontStyle` - The font style (e.g., ``"italic"`` ). - fontWeight : :class:`FontWeight` - The font weight. - This can be either a string (e.g., ``"bold"``, ``"normal"`` ) or a number ( ``100``, - ``200``, ``300``, ..., ``900`` where ``"normal"`` = ``400`` and ``"bold"`` = ``700`` - ). - height : float - Height of the marks. - href : string - A URL to load upon mouse click. If defined, the mark acts as a hyperlink. - interpolate : :class:`Interpolate` - The line interpolation method to use for line and area marks. One of the following: - - - * ``"linear"`` : piecewise linear segments, as in a polyline. - * ``"linear-closed"`` : close the linear segments to form a polygon. - * ``"step"`` : alternate between horizontal and vertical segments, as in a step - function. - * ``"step-before"`` : alternate between vertical and horizontal segments, as in a - step function. - * ``"step-after"`` : alternate between horizontal and vertical segments, as in a - step function. - * ``"basis"`` : a B-spline, with control point duplication on the ends. - * ``"basis-open"`` : an open B-spline; may not intersect the start or end. - * ``"basis-closed"`` : a closed B-spline, as in a loop. - * ``"cardinal"`` : a Cardinal spline, with control point duplication on the ends. - * ``"cardinal-open"`` : an open Cardinal spline; may not intersect the start or end, - but will intersect other control points. - * ``"cardinal-closed"`` : a closed Cardinal spline, as in a loop. - * ``"bundle"`` : equivalent to basis, except the tension parameter is used to - straighten the spline. - * ``"monotone"`` : cubic interpolation that preserves monotonicity in y. - limit : float - The maximum length of the text mark in pixels. The text value will be automatically - truncated if the rendered size exceeds the limit. - - **Default value:** ``0``, indicating no limit - opacity : float - The overall opacity (value between [0,1]). - - **Default value:** ``0.7`` for non-aggregate plots with ``point``, ``tick``, - ``circle``, or ``square`` marks or layered ``bar`` charts and ``1`` otherwise. - orient : :class:`Orientation` - The orientation of non-stacked bar, tick, area, and line charts. - The value is either horizontal (default) or vertical. - - - * For bar, rule and tick, this determines whether the size of the bar and tick - should be applied to the x or y dimension. - * For area, this property determines the orient property of the Vega output. - * For line and trail marks, this property determines the sort order of the points in - the line - if ``config.sortLineBy`` is not specified. - For stacked charts, this is always determined by the orientation of the stack; - therefore an explicitly specified value will be ignored. - radius : float - Polar coordinate radial offset, in pixels, of the text label from the origin - determined by the ``x`` and ``y`` properties. - shape : string 
Supported values include: - - - * plotting shapes: ``"circle"``, ``"square"``, ``"cross"``, ``"diamond"``, - ``"triangle-up"``, ``"triangle-down"``, ``"triangle-right"``, or - ``"triangle-left"``. - * the line symbol ``"stroke"`` - * centered directional shapes ``"arrow"``, ``"wedge"``, or ``"triangle"`` - * a custom `SVG path string - `__ (For correct - sizing, custom shape paths should be defined within a square bounding box with - coordinates ranging from -1 to 1 along both the x and y dimensions.) - - **Default value:** ``"circle"`` - size : float - The pixel area of each point/circle/square. - For example: in the case of circles, the radius is determined in part by the square - root of the size value. - - **Default value:** ``30`` - stroke : :class:`Color` - Default Stroke Color. This has higher precedence than ``config.color``. - - **Default value:** (None) - strokeCap : :class:`StrokeCap` - The stroke cap for line ending style. One of ``"butt"``, ``"round"``, or - ``"square"``. - - **Default value:** ``"square"`` - strokeDash : List(float) - An array of alternating stroke, space lengths for creating dashed or dotted lines. - strokeDashOffset : float - The offset (in pixels) at which to begin drawing with the stroke dash array. - strokeJoin : :class:`StrokeJoin` - The stroke line join method. One of ``"miter"``, ``"round"`` or ``"bevel"``. - - **Default value:** ``"miter"`` - strokeMiterLimit : float - The miter limit at which to bevel a line join. - strokeOpacity : float - The stroke opacity (value between [0,1]). - - **Default value:** ``1`` - strokeWidth : float - The stroke width, in pixels. - tension : float - Depending on the interpolation type, sets the tension parameter (for line and area - marks). - text : string - Placeholder text if the ``text`` channel is not specified. - theta : float - Polar coordinate angle, in radians, of the text label from the origin determined by - the ``x`` and ``y`` properties. Values for ``theta`` follow the same convention of - ``arc`` mark ``startAngle`` and ``endAngle`` properties: angles are measured in - radians, with ``0`` indicating "north". - tooltip : Any - The tooltip text to show upon mouse hover. - width : float - Width of the marks. - x : anyOf(float, enum('width')) - X coordinates of the marks, or width of horizontal ``"bar"`` and ``"area"`` without - specified ``x2`` or ``width``. - - The ``value`` of this channel can be a number or a string ``"width"`` for the width - of the plot. - x2 : anyOf(float, enum('width')) - X2 coordinates for ranged ``"area"``, ``"bar"``, ``"rect"``, and ``"rule"``. - - The ``value`` of this channel can be a number or a string ``"width"`` for the width - of the plot. - y : anyOf(float, enum('height')) - Y coordinates of the marks, or height of vertical ``"bar"`` and ``"area"`` without - specified ``y2`` or ``height``. - - The ``value`` of this channel can be a number or a string ``"height"`` for the - height of the plot. - y2 : anyOf(float, enum('height')) - Y2 coordinates for ranged ``"area"``, ``"bar"``, ``"rect"``, and ``"rule"``. - - The ``value`` of this channel can be a number or a string ``"height"`` for the - height of the plot. 
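-
- As a hedged usage sketch (assuming an Altair build that vendors this
- schema, plus pandas; the data frame and property values below are
- illustrative, not part of this module), these attributes are normally
- set through mark-method keyword arguments rather than by instantiating
- ``BaseMarkConfig`` directly::
-
-     import altair as alt
-     import pandas as pd
-
-     df = pd.DataFrame({'x': [1, 2, 3], 'y': [4, 2, 5]})
-
-     chart = alt.Chart(df).mark_point(
-         size=100,         # pixel area of each point mark
-         opacity=0.5,      # overall mark opacity in [0, 1]
-         shape='diamond',  # one of the supported plotting shapes
-         strokeWidth=2,    # stroke width in pixels
-     ).encode(x='x:Q', y='y:Q')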
- """ - _schema = {'$ref': '#/definitions/BaseMarkConfig'} - - def __init__(self, align=Undefined, angle=Undefined, baseline=Undefined, cornerRadius=Undefined, - cursor=Undefined, dir=Undefined, dx=Undefined, dy=Undefined, ellipsis=Undefined, - fill=Undefined, fillOpacity=Undefined, font=Undefined, fontSize=Undefined, - fontStyle=Undefined, fontWeight=Undefined, height=Undefined, href=Undefined, - interpolate=Undefined, limit=Undefined, opacity=Undefined, orient=Undefined, - radius=Undefined, shape=Undefined, size=Undefined, stroke=Undefined, - strokeCap=Undefined, strokeDash=Undefined, strokeDashOffset=Undefined, - strokeJoin=Undefined, strokeMiterLimit=Undefined, strokeOpacity=Undefined, - strokeWidth=Undefined, tension=Undefined, text=Undefined, theta=Undefined, - tooltip=Undefined, width=Undefined, x=Undefined, x2=Undefined, y=Undefined, - y2=Undefined, **kwds): - super(BaseMarkConfig, self).__init__(align=align, angle=angle, baseline=baseline, - cornerRadius=cornerRadius, cursor=cursor, dir=dir, dx=dx, - dy=dy, ellipsis=ellipsis, fill=fill, - fillOpacity=fillOpacity, font=font, fontSize=fontSize, - fontStyle=fontStyle, fontWeight=fontWeight, height=height, - href=href, interpolate=interpolate, limit=limit, - opacity=opacity, orient=orient, radius=radius, shape=shape, - size=size, stroke=stroke, strokeCap=strokeCap, - strokeDash=strokeDash, strokeDashOffset=strokeDashOffset, - strokeJoin=strokeJoin, strokeMiterLimit=strokeMiterLimit, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, - tension=tension, text=text, theta=theta, tooltip=tooltip, - width=width, x=x, x2=x2, y=y, y2=y2, **kwds) - - -class BaseTitleConfig(VegaLiteSchema): - """BaseTitleConfig schema wrapper - - Mapping(required=[]) - - Attributes - ---------- - - align : :class:`Align` - - anchor : :class:`TitleAnchor` - The anchor position for placing the title. One of ``"start"``, ``"middle"``, or - ``"end"``. For example, with an orientation of top these anchor positions map to a - left-, center-, or right-aligned title. - angle : float - Angle in degrees of title text. - baseline : :class:`TextBaseline` - Vertical text baseline for title text. One of ``"top"``, ``"middle"``, ``"bottom"``, - or ``"alphabetic"``. - color : :class:`Color` - Text color for title text. - dx : float - Delta offset for title text x-coordinate. - dy : float - Delta offset for title text y-coordinate. - font : string - Font name for title text. - fontSize : float - Font size in pixels for title text. - - **Default value:** ``10``. - fontStyle : :class:`FontStyle` - Font style for title text. - fontWeight : :class:`FontWeight` - Font weight for title text. - This can be either a string (e.g ``"bold"``, ``"normal"`` ) or a number ( ``100``, - ``200``, ``300``, ..., ``900`` where ``"normal"`` = ``400`` and ``"bold"`` = ``700`` - ). - frame : :class:`TitleFrame` - The reference frame for the anchor position, one of ``"bounds"`` (to anchor relative - to the full bounding box) or ``"group"`` (to anchor relative to the group width or - height). - limit : float - The maximum allowed length in pixels of legend labels. - offset : float - The orthogonal offset in pixels by which to displace the title from its position - along the edge of the chart. 
- orient : :class:`TitleOrient` - Default title orientation ( ``"top"``, ``"bottom"``, ``"left"``, or ``"right"`` ) - """ - _schema = {'$ref': '#/definitions/BaseTitleConfig'} - - def __init__(self, align=Undefined, anchor=Undefined, angle=Undefined, baseline=Undefined, - color=Undefined, dx=Undefined, dy=Undefined, font=Undefined, fontSize=Undefined, - fontStyle=Undefined, fontWeight=Undefined, frame=Undefined, limit=Undefined, - offset=Undefined, orient=Undefined, **kwds): - super(BaseTitleConfig, self).__init__(align=align, anchor=anchor, angle=angle, - baseline=baseline, color=color, dx=dx, dy=dy, font=font, - fontSize=fontSize, fontStyle=fontStyle, - fontWeight=fontWeight, frame=frame, limit=limit, - offset=offset, orient=orient, **kwds) - - -class BinParams(VegaLiteSchema): - """BinParams schema wrapper - - Mapping(required=[]) - Binning properties or boolean flag for determining whether to bin data or not. - - Attributes - ---------- - - anchor : float - A value in the binned domain at which to anchor the bins, shifting the bin - boundaries if necessary to ensure that a boundary aligns with the anchor value. - - **Default Value:** the minimum bin extent value - base : float - The number base to use for automatic bin determination (default is base 10). - - **Default value:** ``10`` - binned : boolean - When set to true, Vega-Lite treats the input data as already binned. - divide : List(float) - Scale factors indicating allowable subdivisions. The default value is [5, 2], which - indicates that for base 10 numbers (the default base), the method may consider - dividing bin sizes by 5 and/or 2. For example, for an initial step size of 10, the - method can check if bin sizes of 2 (= 10/5), 5 (= 10/2), or 1 (= 10/(5*2)) might - also satisfy the given constraints. - - **Default value:** ``[5, 2]`` - extent : List(float) - A two-element ( ``[min, max]`` ) array indicating the range of desired bin values. - maxbins : float - Maximum number of bins. - - **Default value:** ``6`` for ``row``, ``column`` and ``shape`` channels; ``10`` for - other channels - minstep : float - A minimum allowable step size (particularly useful for integer values). - nice : boolean - If true (the default), attempts to make the bin boundaries use human-friendly - boundaries, such as multiples of ten. - step : float - An exact step size to use between bins. - - **Note:** If provided, options such as maxbins will be ignored. - steps : List(float) - An array of allowable step sizes to choose from. 
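-
- A hedged usage sketch (assuming an Altair build that vendors this schema,
- plus pandas; the field and data are illustrative): ``BinParams`` is
- typically constructed through ``alt.Bin`` and passed to an encoding's
- ``bin`` argument::
-
-     import altair as alt
-     import pandas as pd
-
-     df = pd.DataFrame({'value': [1, 3, 7, 12, 18, 25, 31]})
-
-     chart = alt.Chart(df).mark_bar().encode(
-         # Bin 'value' into at most five human-friendly bins.
-         x=alt.X('value:Q', bin=alt.Bin(maxbins=5, nice=True)),
-         y='count()',
-     )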
- """ - _schema = {'$ref': '#/definitions/BinParams'} - - def __init__(self, anchor=Undefined, base=Undefined, binned=Undefined, divide=Undefined, - extent=Undefined, maxbins=Undefined, minstep=Undefined, nice=Undefined, step=Undefined, - steps=Undefined, **kwds): - super(BinParams, self).__init__(anchor=anchor, base=base, binned=binned, divide=divide, - extent=extent, maxbins=maxbins, minstep=minstep, nice=nice, - step=step, steps=steps, **kwds) - - -class Binding(VegaLiteSchema): - """Binding schema wrapper - - anyOf(:class:`BindCheckbox`, :class:`BindRadioSelect`, :class:`BindRange`, - :class:`InputBinding`) - """ - _schema = {'$ref': '#/definitions/Binding'} - - def __init__(self, *args, **kwds): - super(Binding, self).__init__(*args, **kwds) - - -class BindCheckbox(Binding): - """BindCheckbox schema wrapper - - Mapping(required=[input]) - - Attributes - ---------- - - input : enum('checkbox') - - debounce : float - - element : :class:`Element` - - name : string - - type : string - - """ - _schema = {'$ref': '#/definitions/BindCheckbox'} - - def __init__(self, input=Undefined, debounce=Undefined, element=Undefined, name=Undefined, - type=Undefined, **kwds): - super(BindCheckbox, self).__init__(input=input, debounce=debounce, element=element, name=name, - type=type, **kwds) - - -class BindRadioSelect(Binding): - """BindRadioSelect schema wrapper - - Mapping(required=[input, options]) - - Attributes - ---------- - - input : enum('radio', 'select') - - options : List(Any) - - debounce : float - - element : :class:`Element` - - name : string - - type : string - - """ - _schema = {'$ref': '#/definitions/BindRadioSelect'} - - def __init__(self, input=Undefined, options=Undefined, debounce=Undefined, element=Undefined, - name=Undefined, type=Undefined, **kwds): - super(BindRadioSelect, self).__init__(input=input, options=options, debounce=debounce, - element=element, name=name, type=type, **kwds) - - -class BindRange(Binding): - """BindRange schema wrapper - - Mapping(required=[input]) - - Attributes - ---------- - - input : enum('range') - - debounce : float - - element : :class:`Element` - - max : float - - min : float - - name : string - - step : float - - type : string - - """ - _schema = {'$ref': '#/definitions/BindRange'} - - def __init__(self, input=Undefined, debounce=Undefined, element=Undefined, max=Undefined, - min=Undefined, name=Undefined, step=Undefined, type=Undefined, **kwds): - super(BindRange, self).__init__(input=input, debounce=debounce, element=element, max=max, - min=min, name=name, step=step, type=type, **kwds) - - -class BoxPlotConfig(VegaLiteSchema): - """BoxPlotConfig schema wrapper - - Mapping(required=[]) - - Attributes - ---------- - - box : anyOf(boolean, :class:`MarkConfig`) - - extent : anyOf(enum('min-max'), float) - The extent of the whiskers. Available options include: - - - * ``"min-max"`` : min and max are the lower and upper whiskers respectively. - * A number representing multiple of the interquartile range. This number will be - multiplied by the IQR to determine whisker boundary, which spans from the smallest - data to the largest data within the range *[Q1 - k * IQR, Q3 + k * IQR]* where - *Q1* and *Q3* are the first and third quartiles while *IQR* is the interquartile - range ( *Q3-Q1* ). - - **Default value:** ``1.5``. 
- median : anyOf(boolean, :class:`MarkConfig`) - - outliers : anyOf(boolean, :class:`MarkConfig`) - - rule : anyOf(boolean, :class:`MarkConfig`) - - size : float - Size of the box and median tick of a box plot - ticks : anyOf(boolean, :class:`MarkConfig`) - - """ - _schema = {'$ref': '#/definitions/BoxPlotConfig'} - - def __init__(self, box=Undefined, extent=Undefined, median=Undefined, outliers=Undefined, - rule=Undefined, size=Undefined, ticks=Undefined, **kwds): - super(BoxPlotConfig, self).__init__(box=box, extent=extent, median=median, outliers=outliers, - rule=rule, size=size, ticks=ticks, **kwds) - - -class BrushConfig(VegaLiteSchema): - """BrushConfig schema wrapper - - Mapping(required=[]) - - Attributes - ---------- - - fill : :class:`Color` - The fill color of the interval mark. - - **Default value:** ``#333333`` - fillOpacity : float - The fill opacity of the interval mark (a value between 0 and 1). - - **Default value:** ``0.125`` - stroke : :class:`Color` - The stroke color of the interval mark. - - **Default value:** ``#ffffff`` - strokeDash : List(float) - An array of alternating stroke and space lengths, - for creating dashed or dotted lines. - strokeDashOffset : float - The offset (in pixels) with which to begin drawing the stroke dash array. - strokeOpacity : float - The stroke opacity of the interval mark (a value between 0 and 1). - strokeWidth : float - The stroke width of the interval mark. - """ - _schema = {'$ref': '#/definitions/BrushConfig'} - - def __init__(self, fill=Undefined, fillOpacity=Undefined, stroke=Undefined, strokeDash=Undefined, - strokeDashOffset=Undefined, strokeOpacity=Undefined, strokeWidth=Undefined, **kwds): - super(BrushConfig, self).__init__(fill=fill, fillOpacity=fillOpacity, stroke=stroke, - strokeDash=strokeDash, strokeDashOffset=strokeDashOffset, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, **kwds) - - -class Color(VegaLiteSchema): - """Color schema wrapper - - anyOf(:class:`ColorName`, :class:`HexColor`, string) - """ - _schema = {'$ref': '#/definitions/Color'} - - def __init__(self, *args, **kwds): - super(Color, self).__init__(*args, **kwds) - - -class ColorName(Color): - """ColorName schema wrapper - - enum('black', 'silver', 'gray', 'white', 'maroon', 'red', 'purple', 'fuchsia', 'green', - 'lime', 'olive', 'yellow', 'navy', 'blue', 'teal', 'aqua', 'orange', 'aliceblue', - 'antiquewhite', 'aquamarine', 'azure', 'beige', 'bisque', 'blanchedalmond', 'blueviolet', - 'brown', 'burlywood', 'cadetblue', 'chartreuse', 'chocolate', 'coral', 'cornflowerblue', - 'cornsilk', 'crimson', 'cyan', 'darkblue', 'darkcyan', 'darkgoldenrod', 'darkgray', - 'darkgreen', 'darkgrey', 'darkkhaki', 'darkmagenta', 'darkolivegreen', 'darkorange', - 'darkorchid', 'darkred', 'darksalmon', 'darkseagreen', 'darkslateblue', 'darkslategray', - 'darkslategrey', 'darkturquoise', 'darkviolet', 'deeppink', 'deepskyblue', 'dimgray', - 'dimgrey', 'dodgerblue', 'firebrick', 'floralwhite', 'forestgreen', 'gainsboro', - 'ghostwhite', 'gold', 'goldenrod', 'greenyellow', 'grey', 'honeydew', 'hotpink', - 'indianred', 'indigo', 'ivory', 'khaki', 'lavender', 'lavenderblush', 'lawngreen', - 'lemonchiffon', 'lightblue', 'lightcoral', 'lightcyan', 'lightgoldenrodyellow', 'lightgray', - 'lightgreen', 'lightgrey', 'lightpink', 'lightsalmon', 'lightseagreen', 'lightskyblue', - 'lightslategray', 'lightslategrey', 'lightsteelblue', 'lightyellow', 'limegreen', 'linen', - 'magenta', 'mediumaquamarine', 'mediumblue', 'mediumorchid', 'mediumpurple', - 'mediumseagreen', 
'mediumslateblue', 'mediumspringgreen', 'mediumturquoise', - 'mediumvioletred', 'midnightblue', 'mintcream', 'mistyrose', 'moccasin', 'navajowhite', - 'oldlace', 'olivedrab', 'orangered', 'orchid', 'palegoldenrod', 'palegreen', - 'paleturquoise', 'palevioletred', 'papayawhip', 'peachpuff', 'peru', 'pink', 'plum', - 'powderblue', 'rosybrown', 'royalblue', 'saddlebrown', 'salmon', 'sandybrown', 'seagreen', - 'seashell', 'sienna', 'skyblue', 'slateblue', 'slategray', 'slategrey', 'snow', - 'springgreen', 'steelblue', 'tan', 'thistle', 'tomato', 'turquoise', 'violet', 'wheat', - 'whitesmoke', 'yellowgreen', 'rebeccapurple') - """ - _schema = {'$ref': '#/definitions/ColorName'} - - def __init__(self, *args): - super(ColorName, self).__init__(*args) - - -class CompositeMark(AnyMark): - """CompositeMark schema wrapper - - anyOf(:class:`BoxPlot`, :class:`ErrorBar`, :class:`ErrorBand`) - """ - _schema = {'$ref': '#/definitions/CompositeMark'} - - def __init__(self, *args, **kwds): - super(CompositeMark, self).__init__(*args, **kwds) - - -class BoxPlot(CompositeMark): - """BoxPlot schema wrapper - - enum('boxplot') - """ - _schema = {'$ref': '#/definitions/BoxPlot'} - - def __init__(self, *args): - super(BoxPlot, self).__init__(*args) - - -class CompositeMarkDef(AnyMark): - """CompositeMarkDef schema wrapper - - anyOf(:class:`BoxPlotDef`, :class:`ErrorBarDef`, :class:`ErrorBandDef`) - """ - _schema = {'$ref': '#/definitions/CompositeMarkDef'} - - def __init__(self, *args, **kwds): - super(CompositeMarkDef, self).__init__(*args, **kwds) - - -class BoxPlotDef(CompositeMarkDef): - """BoxPlotDef schema wrapper - - Mapping(required=[type]) - - Attributes - ---------- - - type : :class:`BoxPlot` - The mark type. This could be a primitive mark type - (one of ``"bar"``, ``"circle"``, ``"square"``, ``"tick"``, ``"line"``, - ``"area"``, ``"point"``, ``"geoshape"``, ``"rule"``, and ``"text"`` ) - or a composite mark type ( ``"boxplot"``, ``"errorband"``, ``"errorbar"`` ). - box : anyOf(boolean, :class:`MarkConfig`) - - clip : boolean - Whether a composite mark should be clipped to the enclosing group’s width and height. - color : :class:`Color` - Default color. Note that ``fill`` and ``stroke`` have higher precedence than - ``color`` and will override ``color``. - - **Default value:** ``"#4682b4"`` - - **Note:** This property cannot be used in a `style config - `__. - extent : anyOf(enum('min-max'), float) - The extent of the whiskers. Available options include: - - - * ``"min-max"`` : min and max are the lower and upper whiskers respectively. - * A number representing multiple of the interquartile range. This number will be - multiplied by the IQR to determine whisker boundary, which spans from the smallest - data to the largest data within the range *[Q1 - k * IQR, Q3 + k * IQR]* where - *Q1* and *Q3* are the first and third quartiles while *IQR* is the interquartile - range ( *Q3-Q1* ). - - **Default value:** ``1.5``. - median : anyOf(boolean, :class:`MarkConfig`) - - opacity : float - The opacity (value between [0,1]) of the mark. - orient : :class:`Orientation` - Orientation of the box plot. This is normally automatically determined based on - types of fields on x and y channels. However, an explicit ``orient`` can be specified - when the orientation is ambiguous. - - **Default value:** ``"vertical"``. 
- outliers : anyOf(boolean, :class:`MarkConfig`) - - rule : anyOf(boolean, :class:`MarkConfig`) - - size : float - Size of the box and median tick of a box plot - ticks : anyOf(boolean, :class:`MarkConfig`) - - """ - _schema = {'$ref': '#/definitions/BoxPlotDef'} - - def __init__(self, type=Undefined, box=Undefined, clip=Undefined, color=Undefined, extent=Undefined, - median=Undefined, opacity=Undefined, orient=Undefined, outliers=Undefined, - rule=Undefined, size=Undefined, ticks=Undefined, **kwds): - super(BoxPlotDef, self).__init__(type=type, box=box, clip=clip, color=color, extent=extent, - median=median, opacity=opacity, orient=orient, - outliers=outliers, rule=rule, size=size, ticks=ticks, **kwds) - - -class CompositionConfig(VegaLiteSchema): - """CompositionConfig schema wrapper - - Mapping(required=[]) - - Attributes - ---------- - - columns : float - The number of columns to include in the view composition layout. - - **Default value** : ``undefined`` -- An infinite number of columns (a single row) - will be assumed. This is equivalent to - ``hconcat`` (for ``concat`` ) and to using the ``column`` channel (for ``facet`` and - ``repeat`` ). - - **Note** : - - 1) This property is only for: - - - * the general (wrappable) ``concat`` operator (not ``hconcat`` / ``vconcat`` ) - * the ``facet`` and ``repeat`` operator with one field/repetition definition - (without row/column nesting) - - 2) Setting the ``columns`` to ``1`` is equivalent to ``vconcat`` (for ``concat`` ) - and to using the ``row`` channel (for ``facet`` and ``repeat`` ). - spacing : float - The default spacing in pixels between composed sub-views. - - **Default value** : ``20`` - """ - _schema = {'$ref': '#/definitions/CompositionConfig'} - - def __init__(self, columns=Undefined, spacing=Undefined, **kwds): - super(CompositionConfig, self).__init__(columns=columns, spacing=spacing, **kwds) - - -class ConditionalMarkPropFieldDef(VegaLiteSchema): - """ConditionalMarkPropFieldDef schema wrapper - - anyOf(:class:`ConditionalPredicateMarkPropFieldDef`, - :class:`ConditionalSelectionMarkPropFieldDef`) - """ - _schema = {'$ref': '#/definitions/ConditionalMarkPropFieldDef'} - - def __init__(self, *args, **kwds): - super(ConditionalMarkPropFieldDef, self).__init__(*args, **kwds) - - -class ConditionalMarkPropFieldDefTypeForShape(VegaLiteSchema): - """ConditionalMarkPropFieldDefTypeForShape schema wrapper - - anyOf(:class:`ConditionalPredicateMarkPropFieldDefTypeForShape`, - :class:`ConditionalSelectionMarkPropFieldDefTypeForShape`) - """ - _schema = {'$ref': '#/definitions/ConditionalMarkPropFieldDef'} - - def __init__(self, *args, **kwds): - super(ConditionalMarkPropFieldDefTypeForShape, self).__init__(*args, **kwds) - - -class ConditionalNumberValueDef(VegaLiteSchema): - """ConditionalNumberValueDef schema wrapper - - anyOf(:class:`ConditionalPredicateNumberValueDef`, - :class:`ConditionalSelectionNumberValueDef`) - """ - _schema = {'$ref': '#/definitions/ConditionalNumberValueDef'} - - def __init__(self, *args, **kwds): - super(ConditionalNumberValueDef, self).__init__(*args, **kwds) - - -class ConditionalPredicateMarkPropFieldDef(ConditionalMarkPropFieldDef): - """ConditionalPredicateMarkPropFieldDef schema wrapper - - Mapping(required=[test, type]) - - Attributes - ---------- - - test : :class:`LogicalOperandPredicate` - Predicate for triggering the condition - type : :class:`StandardType` - The encoded field's type of measurement ( ``"quantitative"``, ``"temporal"``, - ``"ordinal"``, or ``"nominal"`` ). 
- It can also be a ``"geojson"`` type for encoding `'geoshape' - `__. - - **Note:** - - - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * Data ``type`` describes the semantics of the data rather than the primitive data - types ( ``number``, ``string``, etc.). The same primitive data type can have - different types of measurement. For example, numeric data can represent - quantitative, ordinal, or nominal data. - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (for using a temporal scale) or `"ordinal" (for using - an ordinal scale) `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat", "type": "quantitative"}``. The ``"type"`` of the aggregate output - is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - aggregate : :class:`Aggregate` - Aggregation function for the field - (e.g., ``mean``, ``sum``, ``median``, ``min``, ``max``, ``count`` ). - - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - bin : anyOf(boolean, :class:`BinParams`, None) - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating that the - data for ``x`` or ``y`` channel are binned before they are imported into Vega-Lite ( - ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value - or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. - - **Notes:** - 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access nested - objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). - If field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). - See more details about escaping in the `field documentation - `__. - 2) ``field`` is not required if ``aggregate`` is ``count``. - legend : anyOf(:class:`Legend`, None) - An object defining properties of the legend. - If ``null``, the legend for the encoding channel will be removed. - - **Default value:** If undefined, default `legend properties - `__ are applied. - - **See also:** `legend `__ - documentation. 
- scale : anyOf(:class:`Scale`, None) - An object defining properties of the channel's scale, which is the function that - transforms values in the data domain (numbers, dates, strings, etc) to visual values - (pixels, colors, sizes) of the encoding channels. - - If ``null``, the scale will be `disabled and the data value will be directly encoded - `__. - - **Default value:** If undefined, default `scale properties - `__ are applied. - - **See also:** `scale `__ - documentation. - sort : :class:`Sort` - Sort order for the encoded field. - - For continuous fields (quantitative or temporal), ``sort`` can be either - ``"ascending"`` or ``"descending"``. - - For discrete fields, ``sort`` can be one of the following: - - - * ``"ascending"`` or ``"descending"`` -- for sorting by the values' natural order in - Javascript. - * `A sort-by-encoding definition - `__ for sorting - by another encoding channel. (This type of sort definition is not available for - ``row`` and ``column`` channels.) - * `A sort field definition - `__ for sorting by - another field. - * `An array specifying the field values in preferred order - `__. In this case, the - sort order will obey the values in the array, followed by any unspecified values - in their original order. For discrete time field, values in the sort array can be - `date-time definition objects `__. In addition, for time units - ``"month"`` and ``"day"``, the values can be the month or day names (case - insensitive) or their 3-letter initials (e.g., ``"Mon"``, ``"Tue"`` ). - * ``null`` indicating no sort. - - **Default value:** ``"ascending"`` - - **Note:** ``null`` is not supported for ``row`` and ``column``. - - **See also:** `sort `__ - documentation. - timeUnit : :class:`TimeUnit` - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field. - or `a temporal field that gets casted as ordinal - `__. - - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(string, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. 
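-
- A hedged usage sketch (assuming an Altair build that vendors this schema,
- plus pandas; the names and the predicate are illustrative): objects of
- this shape are typically produced by ``alt.condition`` when the test is a
- predicate expression rather than a selection::
-
-     import altair as alt
-     import pandas as pd
-
-     df = pd.DataFrame({'x': [1, 2, 3, 4], 'y': [3, 1, 4, 2]})
-
-     chart = alt.Chart(df).mark_point().encode(
-         x='x:Q',
-         y='y:Q',
-         # Encode color from the 'y' field where the predicate holds;
-         # otherwise fall back to a constant gray.
-         color=alt.condition('datum.y > 2', 'y:Q', alt.value('lightgray')),
-     )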
- """ - _schema = {'$ref': '#/definitions/ConditionalPredicate'} - - def __init__(self, test=Undefined, type=Undefined, aggregate=Undefined, bin=Undefined, - field=Undefined, legend=Undefined, scale=Undefined, sort=Undefined, timeUnit=Undefined, - title=Undefined, **kwds): - super(ConditionalPredicateMarkPropFieldDef, self).__init__(test=test, type=type, - aggregate=aggregate, bin=bin, - field=field, legend=legend, - scale=scale, sort=sort, - timeUnit=timeUnit, title=title, - **kwds) - - -class ConditionalPredicateMarkPropFieldDefTypeForShape(ConditionalMarkPropFieldDefTypeForShape): - """ConditionalPredicateMarkPropFieldDefTypeForShape schema wrapper - - Mapping(required=[test, type]) - - Attributes - ---------- - - test : :class:`LogicalOperandPredicate` - Predicate for triggering the condition - type : :class:`TypeForShape` - The encoded field's type of measurement ( ``"quantitative"``, ``"temporal"``, - ``"ordinal"``, or ``"nominal"`` ). - It can also be a ``"geojson"`` type for encoding `'geoshape' - `__. - - **Note:** - - - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * Data ``type`` describes the semantics of the data rather than the primitive data - types ( ``number``, ``string``, etc.). The same primitive data type can have - different types of measurement. For example, numeric data can represent - quantitative, ordinal, or nominal data. - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (for using a temporal scale) or `"ordinal" (for using - an ordinal scale) `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat", "type": "quantitative"}``. The ``"type"`` of the aggregate output - is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - aggregate : :class:`Aggregate` - Aggregation function for the field - (e.g., ``mean``, ``sum``, ``median``, ``min``, ``max``, ``count`` ). - - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - bin : anyOf(boolean, :class:`BinParams`, None) - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating that the - data for ``x`` or ``y`` channel are binned before they are imported into Vega-Lite ( - ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. 
- field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value - or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. - - **Notes:** - 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access nested - objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). - If field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). - See more details about escaping in the `field documentation - `__. - 2) ``field`` is not required if ``aggregate`` is ``count``. - legend : anyOf(:class:`Legend`, None) - An object defining properties of the legend. - If ``null``, the legend for the encoding channel will be removed. - - **Default value:** If undefined, default `legend properties - `__ are applied. - - **See also:** `legend `__ - documentation. - scale : anyOf(:class:`Scale`, None) - An object defining properties of the channel's scale, which is the function that - transforms values in the data domain (numbers, dates, strings, etc) to visual values - (pixels, colors, sizes) of the encoding channels. - - If ``null``, the scale will be `disabled and the data value will be directly encoded - `__. - - **Default value:** If undefined, default `scale properties - `__ are applied. - - **See also:** `scale `__ - documentation. - sort : :class:`Sort` - Sort order for the encoded field. - - For continuous fields (quantitative or temporal), ``sort`` can be either - ``"ascending"`` or ``"descending"``. - - For discrete fields, ``sort`` can be one of the following: - - - * ``"ascending"`` or ``"descending"`` -- for sorting by the values' natural order in - Javascript. - * `A sort-by-encoding definition - `__ for sorting - by another encoding channel. (This type of sort definition is not available for - ``row`` and ``column`` channels.) - * `A sort field definition - `__ for sorting by - another field. - * `An array specifying the field values in preferred order - `__. In this case, the - sort order will obey the values in the array, followed by any unspecified values - in their original order. For discrete time field, values in the sort array can be - `date-time definition objects `__. In addition, for time units - ``"month"`` and ``"day"``, the values can be the month or day names (case - insensitive) or their 3-letter initials (e.g., ``"Mon"``, ``"Tue"`` ). - * ``null`` indicating no sort. - - **Default value:** ``"ascending"`` - - **Note:** ``null`` is not supported for ``row`` and ``column``. - - **See also:** `sort `__ - documentation. - timeUnit : :class:`TimeUnit` - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field. - or `a temporal field that gets casted as ordinal - `__. - - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(string, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. 
- - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - """ - _schema = {'$ref': '#/definitions/ConditionalPredicate>'} - - def __init__(self, test=Undefined, type=Undefined, aggregate=Undefined, bin=Undefined, - field=Undefined, legend=Undefined, scale=Undefined, sort=Undefined, timeUnit=Undefined, - title=Undefined, **kwds): - super(ConditionalPredicateMarkPropFieldDefTypeForShape, self).__init__(test=test, type=type, - aggregate=aggregate, - bin=bin, field=field, - legend=legend, - scale=scale, sort=sort, - timeUnit=timeUnit, - title=title, **kwds) - - -class ConditionalPredicateNumberValueDef(ConditionalNumberValueDef): - """ConditionalPredicateNumberValueDef schema wrapper - - Mapping(required=[test, value]) - - Attributes - ---------- - - test : :class:`LogicalOperandPredicate` - Predicate for triggering the condition - value : float - A constant value in visual domain (e.g., ``"red"`` / "#0099ff" for color, values - between ``0`` to ``1`` for opacity). - """ - _schema = {'$ref': '#/definitions/ConditionalPredicate'} - - def __init__(self, test=Undefined, value=Undefined, **kwds): - super(ConditionalPredicateNumberValueDef, self).__init__(test=test, value=value, **kwds) - - -class ConditionalSelectionMarkPropFieldDef(ConditionalMarkPropFieldDef): - """ConditionalSelectionMarkPropFieldDef schema wrapper - - Mapping(required=[selection, type]) - - Attributes - ---------- - - selection : :class:`SelectionOperand` - A `selection name `__, or a - series of `composed selections - `__. - type : :class:`StandardType` - The encoded field's type of measurement ( ``"quantitative"``, ``"temporal"``, - ``"ordinal"``, or ``"nominal"`` ). - It can also be a ``"geojson"`` type for encoding `'geoshape' - `__. - - **Note:** - - - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * Data ``type`` describes the semantics of the data rather than the primitive data - types ( ``number``, ``string``, etc.). The same primitive data type can have - different types of measurement. For example, numeric data can represent - quantitative, ordinal, or nominal data. - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (for using a temporal scale) or `"ordinal" (for using - an ordinal scale) `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat", "type": "quantitative"}``. The ``"type"`` of the aggregate output - is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - aggregate : :class:`Aggregate` - Aggregation function for the field - (e.g., ``mean``, ``sum``, ``median``, ``min``, ``max``, ``count`` ). 
- - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - bin : anyOf(boolean, :class:`BinParams`, None) - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating that the - data for ``x`` or ``y`` channel are binned before they are imported into Vega-Lite ( - ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value - or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. - - **Notes:** - 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access nested - objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). - If field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). - See more details about escaping in the `field documentation - `__. - 2) ``field`` is not required if ``aggregate`` is ``count``. - legend : anyOf(:class:`Legend`, None) - An object defining properties of the legend. - If ``null``, the legend for the encoding channel will be removed. - - **Default value:** If undefined, default `legend properties - `__ are applied. - - **See also:** `legend `__ - documentation. - scale : anyOf(:class:`Scale`, None) - An object defining properties of the channel's scale, which is the function that - transforms values in the data domain (numbers, dates, strings, etc) to visual values - (pixels, colors, sizes) of the encoding channels. - - If ``null``, the scale will be `disabled and the data value will be directly encoded - `__. - - **Default value:** If undefined, default `scale properties - `__ are applied. - - **See also:** `scale `__ - documentation. - sort : :class:`Sort` - Sort order for the encoded field. - - For continuous fields (quantitative or temporal), ``sort`` can be either - ``"ascending"`` or ``"descending"``. - - For discrete fields, ``sort`` can be one of the following: - - - * ``"ascending"`` or ``"descending"`` -- for sorting by the values' natural order in - Javascript. - * `A sort-by-encoding definition - `__ for sorting - by another encoding channel. (This type of sort definition is not available for - ``row`` and ``column`` channels.) - * `A sort field definition - `__ for sorting by - another field. - * `An array specifying the field values in preferred order - `__. In this case, the - sort order will obey the values in the array, followed by any unspecified values - in their original order. For discrete time field, values in the sort array can be - `date-time definition objects `__. In addition, for time units - ``"month"`` and ``"day"``, the values can be the month or day names (case - insensitive) or their 3-letter initials (e.g., ``"Mon"``, ``"Tue"`` ). - * ``null`` indicating no sort. - - **Default value:** ``"ascending"`` - - **Note:** ``null`` is not supported for ``row`` and ``column``. 
- - **See also:** `sort `__ - documentation. - timeUnit : :class:`TimeUnit` - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field. - or `a temporal field that gets casted as ordinal - `__. - - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(string, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - """ - _schema = {'$ref': '#/definitions/ConditionalSelection'} - - def __init__(self, selection=Undefined, type=Undefined, aggregate=Undefined, bin=Undefined, - field=Undefined, legend=Undefined, scale=Undefined, sort=Undefined, timeUnit=Undefined, - title=Undefined, **kwds): - super(ConditionalSelectionMarkPropFieldDef, self).__init__(selection=selection, type=type, - aggregate=aggregate, bin=bin, - field=field, legend=legend, - scale=scale, sort=sort, - timeUnit=timeUnit, title=title, - **kwds) - - -class ConditionalSelectionMarkPropFieldDefTypeForShape(ConditionalMarkPropFieldDefTypeForShape): - """ConditionalSelectionMarkPropFieldDefTypeForShape schema wrapper - - Mapping(required=[selection, type]) - - Attributes - ---------- - - selection : :class:`SelectionOperand` - A `selection name `__, or a - series of `composed selections - `__. - type : :class:`TypeForShape` - The encoded field's type of measurement ( ``"quantitative"``, ``"temporal"``, - ``"ordinal"``, or ``"nominal"`` ). - It can also be a ``"geojson"`` type for encoding `'geoshape' - `__. - - **Note:** - - - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * Data ``type`` describes the semantics of the data rather than the primitive data - types ( ``number``, ``string``, etc.). The same primitive data type can have - different types of measurement. For example, numeric data can represent - quantitative, ordinal, or nominal data. - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (for using a temporal scale) or `"ordinal" (for using - an ordinal scale) `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat", "type": "quantitative"}``. The ``"type"`` of the aggregate output - is ``"quantitative"``. 
- * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - aggregate : :class:`Aggregate` - Aggregation function for the field - (e.g., ``mean``, ``sum``, ``median``, ``min``, ``max``, ``count`` ). - - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - bin : anyOf(boolean, :class:`BinParams`, None) - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating that the - data for ``x`` or ``y`` channel are binned before they are imported into Vega-Lite ( - ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value - or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. - - **Notes:** - 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access nested - objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). - If field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). - See more details about escaping in the `field documentation - `__. - 2) ``field`` is not required if ``aggregate`` is ``count``. - legend : anyOf(:class:`Legend`, None) - An object defining properties of the legend. - If ``null``, the legend for the encoding channel will be removed. - - **Default value:** If undefined, default `legend properties - `__ are applied. - - **See also:** `legend `__ - documentation. - scale : anyOf(:class:`Scale`, None) - An object defining properties of the channel's scale, which is the function that - transforms values in the data domain (numbers, dates, strings, etc) to visual values - (pixels, colors, sizes) of the encoding channels. - - If ``null``, the scale will be `disabled and the data value will be directly encoded - `__. - - **Default value:** If undefined, default `scale properties - `__ are applied. - - **See also:** `scale `__ - documentation. - sort : :class:`Sort` - Sort order for the encoded field. - - For continuous fields (quantitative or temporal), ``sort`` can be either - ``"ascending"`` or ``"descending"``. - - For discrete fields, ``sort`` can be one of the following: - - - * ``"ascending"`` or ``"descending"`` -- for sorting by the values' natural order in - Javascript. - * `A sort-by-encoding definition - `__ for sorting - by another encoding channel. (This type of sort definition is not available for - ``row`` and ``column`` channels.) - * `A sort field definition - `__ for sorting by - another field. - * `An array specifying the field values in preferred order - `__. In this case, the - sort order will obey the values in the array, followed by any unspecified values - in their original order. 
For discrete time fields, values in the sort array can be
- `date-time definition objects `__. In addition, for time units
- ``"month"`` and ``"day"``, the values can be the month or day names (case
- insensitive) or their 3-letter initials (e.g., ``"Mon"``, ``"Tue"`` ).
- * ``null`` indicating no sort.
-
- **Default value:** ``"ascending"``
-
- **Note:** ``null`` is not supported for ``row`` and ``column``.
-
- **See also:** `sort `__
- documentation.
- timeUnit : :class:`TimeUnit`
- Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal
- field.
- or `a temporal field that gets cast as ordinal
- `__.
-
- **Default value:** ``undefined`` (None)
-
- **See also:** `timeUnit `__
- documentation.
- title : anyOf(string, None)
- A title for the field. If ``null``, the title will be removed.
-
- **Default value:** derived from the field's name and transformation function (
- ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function,
- the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the
- field is binned or has a time unit applied, the applied function is shown in
- parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ).
- Otherwise, the title is simply the field name.
-
- **Notes** :
-
- 1) You can customize the default field title format by providing the `fieldTitle
- `__ property in
- the `config `__ or `fieldTitle
- function via the compile function's options
- `__.
-
- 2) If both field definition's ``title`` and axis, header, or legend ``title`` are
- defined, axis/header/legend title will be used.
- """
- _schema = {'$ref': '#/definitions/ConditionalSelection<MarkPropFieldDef<TypeForShape>>'}
-
- def __init__(self, selection=Undefined, type=Undefined, aggregate=Undefined, bin=Undefined,
- field=Undefined, legend=Undefined, scale=Undefined, sort=Undefined, timeUnit=Undefined,
- title=Undefined, **kwds):
- super(ConditionalSelectionMarkPropFieldDefTypeForShape, self).__init__(selection=selection,
- type=type,
- aggregate=aggregate,
- bin=bin, field=field,
- legend=legend,
- scale=scale, sort=sort,
- timeUnit=timeUnit,
- title=title, **kwds)
-
-
- class ConditionalSelectionNumberValueDef(ConditionalNumberValueDef):
- """ConditionalSelectionNumberValueDef schema wrapper
-
- Mapping(required=[selection, value])
-
- Attributes
- ----------
-
- selection : :class:`SelectionOperand`
- A `selection name `__, or a
- series of `composed selections
- `__.
- value : float
- A constant value in visual domain (e.g., ``"red"`` / "#0099ff" for color, values
- between ``0`` and ``1`` for opacity).
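-
- **Example** : a minimal, illustrative sketch (not part of the generated schema);
- it assumes the public Altair API, and the ``brush`` selection and the DataFrame
- columns ``"x"`` / ``"y"`` are hypothetical. ``alt.condition`` builds these
- Conditional*ValueDef wrappers under the hood::
-
-     import altair as alt
-     import pandas as pd
-
-     df = pd.DataFrame({"x": range(10), "y": range(10)})
-     brush = alt.selection_interval()
-     # Opacity is a number-valued channel: 1.0 inside the selection, 0.3 outside.
-     chart = alt.Chart(df).mark_point().encode(
-         x="x:Q",
-         y="y:Q",
-         opacity=alt.condition(brush, alt.value(1.0), alt.value(0.3)),
-     ).add_selection(brush)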
- """
- _schema = {'$ref': '#/definitions/ConditionalSelection<NumberValueDef>'}
-
- def __init__(self, selection=Undefined, value=Undefined, **kwds):
- super(ConditionalSelectionNumberValueDef, self).__init__(selection=selection, value=value,
- **kwds)
-
-
- class ConditionalStringValueDef(VegaLiteSchema):
- """ConditionalStringValueDef schema wrapper
-
- anyOf(:class:`ConditionalPredicateStringValueDef`,
- :class:`ConditionalSelectionStringValueDef`)
- """
- _schema = {'$ref': '#/definitions/ConditionalStringValueDef'}
-
- def __init__(self, *args, **kwds):
- super(ConditionalStringValueDef, self).__init__(*args, **kwds)
-
-
- class ConditionalPredicateStringValueDef(ConditionalStringValueDef):
- """ConditionalPredicateStringValueDef schema wrapper
-
- Mapping(required=[test, value])
-
- Attributes
- ----------
-
- test : :class:`LogicalOperandPredicate`
- Predicate for triggering the condition
- value : anyOf(string, None)
- A constant value in visual domain (e.g., ``"red"`` / "#0099ff" for color, values
- between ``0`` and ``1`` for opacity).
- """
- _schema = {'$ref': '#/definitions/ConditionalPredicate<StringValueDef>'}
-
- def __init__(self, test=Undefined, value=Undefined, **kwds):
- super(ConditionalPredicateStringValueDef, self).__init__(test=test, value=value, **kwds)
-
-
- class ConditionalSelectionStringValueDef(ConditionalStringValueDef):
- """ConditionalSelectionStringValueDef schema wrapper
-
- Mapping(required=[selection, value])
-
- Attributes
- ----------
-
- selection : :class:`SelectionOperand`
- A `selection name `__, or a
- series of `composed selections
- `__.
- value : anyOf(string, None)
- A constant value in visual domain (e.g., ``"red"`` / "#0099ff" for color, values
- between ``0`` and ``1`` for opacity).
- """
- _schema = {'$ref': '#/definitions/ConditionalSelection<StringValueDef>'}
-
- def __init__(self, selection=Undefined, value=Undefined, **kwds):
- super(ConditionalSelectionStringValueDef, self).__init__(selection=selection, value=value,
- **kwds)
-
-
- class ConditionalTextFieldDef(VegaLiteSchema):
- """ConditionalTextFieldDef schema wrapper
-
- anyOf(:class:`ConditionalPredicateTextFieldDef`, :class:`ConditionalSelectionTextFieldDef`)
- """
- _schema = {'$ref': '#/definitions/ConditionalTextFieldDef'}
-
- def __init__(self, *args, **kwds):
- super(ConditionalTextFieldDef, self).__init__(*args, **kwds)
-
-
- class ConditionalPredicateTextFieldDef(ConditionalTextFieldDef):
- """ConditionalPredicateTextFieldDef schema wrapper
-
- Mapping(required=[test, type])
-
- Attributes
- ----------
-
- test : :class:`LogicalOperandPredicate`
- Predicate for triggering the condition
- type : :class:`StandardType`
- The encoded field's type of measurement ( ``"quantitative"``, ``"temporal"``,
- ``"ordinal"``, or ``"nominal"`` ).
- It can also be a ``"geojson"`` type for encoding `'geoshape'
- `__.
-
- **Note:**
-
-
- * Data values for a temporal field can be either a date-time string (e.g.,
- ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``, ``"2015"`` ) or a
- timestamp number (e.g., ``1552199579097`` ).
- * Data ``type`` describes the semantics of the data rather than the primitive data
- types ( ``number``, ``string``, etc.). The same primitive data type can have
- different types of measurement. For example, numeric data can represent
- quantitative, ordinal, or nominal data.
- * When using with `bin `__, the
- ``type`` property can be either ``"quantitative"`` (for using a linear bin scale)
- or `"ordinal" (for using an ordinal bin scale)
- `__.
- * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (for using a temporal scale) or `"ordinal" (for using - an ordinal scale) `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat", "type": "quantitative"}``. The ``"type"`` of the aggregate output - is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - aggregate : :class:`Aggregate` - Aggregation function for the field - (e.g., ``mean``, ``sum``, ``median``, ``min``, ``max``, ``count`` ). - - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - bin : anyOf(boolean, :class:`BinParams`, enum('binned'), None) - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating that the - data for ``x`` or ``y`` channel are binned before they are imported into Vega-Lite ( - ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value - or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. - - **Notes:** - 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access nested - objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). - If field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). - See more details about escaping in the `field documentation - `__. - 2) ``field`` is not required if ``aggregate`` is ``count``. - format : string - The text formatting pattern for labels of guides (axes, legends, headers) and text - marks. - - - * If the format type is ``"number"`` (e.g., for quantitative fields), this is D3's - `number format pattern `__. - * If the format type is ``"time"`` (e.g., for temporal fields), this is D3's `time - format pattern `__. - - See the `format documentation `__ - for more examples. - - **Default value:** Derived from `numberFormat - `__ config for number - format and from `timeFormat - `__ config for time - format. - formatType : enum('number', 'time') - The format type for labels ( ``"number"`` or ``"time"`` ). - - **Default value:** - - - * ``"time"`` for temporal fields and ordinal and nomimal fields with ``timeUnit``. - * ``"number"`` for quantitative fields as well as ordinal and nomimal fields without - ``timeUnit``. - timeUnit : :class:`TimeUnit` - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field. - or `a temporal field that gets casted as ordinal - `__. 
-
- **Default value:** ``undefined`` (None)
-
- **See also:** `timeUnit `__
- documentation.
- title : anyOf(string, None)
- A title for the field. If ``null``, the title will be removed.
-
- **Default value:** derived from the field's name and transformation function (
- ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function,
- the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the
- field is binned or has a time unit applied, the applied function is shown in
- parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ).
- Otherwise, the title is simply the field name.
-
- **Notes** :
-
- 1) You can customize the default field title format by providing the `fieldTitle
- `__ property in
- the `config `__ or `fieldTitle
- function via the compile function's options
- `__.
-
- 2) If both field definition's ``title`` and axis, header, or legend ``title`` are
- defined, axis/header/legend title will be used.
- """
- _schema = {'$ref': '#/definitions/ConditionalPredicate<TextFieldDef>'}
-
- def __init__(self, test=Undefined, type=Undefined, aggregate=Undefined, bin=Undefined,
- field=Undefined, format=Undefined, formatType=Undefined, timeUnit=Undefined,
- title=Undefined, **kwds):
- super(ConditionalPredicateTextFieldDef, self).__init__(test=test, type=type,
- aggregate=aggregate, bin=bin,
- field=field, format=format,
- formatType=formatType, timeUnit=timeUnit,
- title=title, **kwds)
-
-
- class ConditionalSelectionTextFieldDef(ConditionalTextFieldDef):
- """ConditionalSelectionTextFieldDef schema wrapper
-
- Mapping(required=[selection, type])
-
- Attributes
- ----------
-
- selection : :class:`SelectionOperand`
- A `selection name `__, or a
- series of `composed selections
- `__.
- type : :class:`StandardType`
- The encoded field's type of measurement ( ``"quantitative"``, ``"temporal"``,
- ``"ordinal"``, or ``"nominal"`` ).
- It can also be a ``"geojson"`` type for encoding `'geoshape'
- `__.
-
- **Note:**
-
-
- * Data values for a temporal field can be either a date-time string (e.g.,
- ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``, ``"2015"`` ) or a
- timestamp number (e.g., ``1552199579097`` ).
- * Data ``type`` describes the semantics of the data rather than the primitive data
- types ( ``number``, ``string``, etc.). The same primitive data type can have
- different types of measurement. For example, numeric data can represent
- quantitative, ordinal, or nominal data.
- * When using with `bin `__, the
- ``type`` property can be either ``"quantitative"`` (for using a linear bin scale)
- or `"ordinal" (for using an ordinal bin scale)
- `__.
- * When using with `timeUnit
- `__, the ``type`` property
- can be either ``"temporal"`` (for using a temporal scale) or `"ordinal" (for using
- an ordinal scale) `__.
- * When using with `aggregate
- `__, the ``type`` property
- refers to the post-aggregation data type. For example, we can calculate count
- ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct",
- "field": "cat", "type": "quantitative"}``. The ``"type"`` of the aggregate output
- is ``"quantitative"``.
- * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have
- ``type`` as they have exactly the same type as their primary channels (e.g.,
- ``x``, ``y`` ).
-
- **See also:** `type `__
- documentation.
- aggregate : :class:`Aggregate`
- Aggregation function for the field
- (e.g., ``mean``, ``sum``, ``median``, ``min``, ``max``, ``count`` ).
- - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - bin : anyOf(boolean, :class:`BinParams`, enum('binned'), None) - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating that the - data for ``x`` or ``y`` channel are binned before they are imported into Vega-Lite ( - ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value - or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. - - **Notes:** - 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access nested - objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). - If field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). - See more details about escaping in the `field documentation - `__. - 2) ``field`` is not required if ``aggregate`` is ``count``. - format : string - The text formatting pattern for labels of guides (axes, legends, headers) and text - marks. - - - * If the format type is ``"number"`` (e.g., for quantitative fields), this is D3's - `number format pattern `__. - * If the format type is ``"time"`` (e.g., for temporal fields), this is D3's `time - format pattern `__. - - See the `format documentation `__ - for more examples. - - **Default value:** Derived from `numberFormat - `__ config for number - format and from `timeFormat - `__ config for time - format. - formatType : enum('number', 'time') - The format type for labels ( ``"number"`` or ``"time"`` ). - - **Default value:** - - - * ``"time"`` for temporal fields and ordinal and nomimal fields with ``timeUnit``. - * ``"number"`` for quantitative fields as well as ordinal and nomimal fields without - ``timeUnit``. - timeUnit : :class:`TimeUnit` - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field. - or `a temporal field that gets casted as ordinal - `__. - - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(string, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. 
-
- 2) If both field definition's ``title`` and axis, header, or legend ``title`` are
- defined, axis/header/legend title will be used.
- """
- _schema = {'$ref': '#/definitions/ConditionalSelection<TextFieldDef>'}
-
- def __init__(self, selection=Undefined, type=Undefined, aggregate=Undefined, bin=Undefined,
- field=Undefined, format=Undefined, formatType=Undefined, timeUnit=Undefined,
- title=Undefined, **kwds):
- super(ConditionalSelectionTextFieldDef, self).__init__(selection=selection, type=type,
- aggregate=aggregate, bin=bin,
- field=field, format=format,
- formatType=formatType, timeUnit=timeUnit,
- title=title, **kwds)
-
-
- class ConditionalValueDef(VegaLiteSchema):
- """ConditionalValueDef schema wrapper
-
- anyOf(:class:`ConditionalPredicateValueDef`, :class:`ConditionalSelectionValueDef`)
- """
- _schema = {'$ref': '#/definitions/ConditionalValueDef'}
-
- def __init__(self, *args, **kwds):
- super(ConditionalValueDef, self).__init__(*args, **kwds)
-
-
- class ConditionalPredicateValueDef(ConditionalValueDef):
- """ConditionalPredicateValueDef schema wrapper
-
- Mapping(required=[test, value])
-
- Attributes
- ----------
-
- test : :class:`LogicalOperandPredicate`
- Predicate for triggering the condition
- value : :class:`Value`
- A constant value in visual domain (e.g., ``"red"`` / "#0099ff" for color, values
- between ``0`` and ``1`` for opacity).
- """
- _schema = {'$ref': '#/definitions/ConditionalPredicate<ValueDef>'}
-
- def __init__(self, test=Undefined, value=Undefined, **kwds):
- super(ConditionalPredicateValueDef, self).__init__(test=test, value=value, **kwds)
-
-
- class ConditionalSelectionValueDef(ConditionalValueDef):
- """ConditionalSelectionValueDef schema wrapper
-
- Mapping(required=[selection, value])
-
- Attributes
- ----------
-
- selection : :class:`SelectionOperand`
- A `selection name `__, or a
- series of `composed selections
- `__.
- value : :class:`Value`
- A constant value in visual domain (e.g., ``"red"`` / "#0099ff" for color, values
- between ``0`` and ``1`` for opacity).
- """
- _schema = {'$ref': '#/definitions/ConditionalSelection<ValueDef>'}
-
- def __init__(self, selection=Undefined, value=Undefined, **kwds):
- super(ConditionalSelectionValueDef, self).__init__(selection=selection, value=value, **kwds)
-
-
- class Config(VegaLiteSchema):
- """Config schema wrapper
-
- Mapping(required=[])
-
- Attributes
- ----------
-
- area : :class:`AreaConfig`
- Area-Specific Config
- autosize : anyOf(:class:`AutosizeType`, :class:`AutoSizeParams`)
- Sets how the visualization size should be determined. If a string, should be one of
- ``"pad"``, ``"fit"`` or ``"none"``.
- Object values can additionally specify parameters for content sizing and automatic
- resizing.
- ``"fit"`` is only supported for single and layered views that don't use
- ``rangeStep``.
-
- **Default value** : ``pad``
- axis : :class:`AxisConfig`
- Axis configuration, which determines default properties for all ``x`` and ``y``
- `axes `__. For a full list of axis
- configuration options, please see the `corresponding section of the axis
- documentation `__.
- axisBand : :class:`AxisConfig`
- Specific axis config for axes with "band" scales.
- axisBottom : :class:`AxisConfig`
- Specific axis config for x-axis along the bottom edge of the chart.
- axisLeft : :class:`AxisConfig`
- Specific axis config for y-axis along the left edge of the chart.
- axisRight : :class:`AxisConfig`
- Specific axis config for y-axis along the right edge of the chart.
- axisTop : :class:`AxisConfig` - Specific axis config for x-axis along the top edge of the chart. - axisX : :class:`AxisConfig` - X-axis specific config. - axisY : :class:`AxisConfig` - Y-axis specific config. - background : string - CSS color property to use as the background of the whole Vega-Lite view - - **Default value:** none (transparent) - bar : :class:`RectConfig` - Bar-Specific Config - boxplot : :class:`BoxPlotConfig` - Box Config - circle : :class:`MarkConfig` - Circle-Specific Config - concat : :class:`CompositionConfig` - Default configuration for all concatenation view composition operators ( ``concat``, - ``hconcat``, and ``vconcat`` ) - countTitle : string - Default axis and legend title for count fields. - - **Default value:** ``'Count of Records``. - errorband : :class:`ErrorBandConfig` - ErrorBand Config - errorbar : :class:`ErrorBarConfig` - ErrorBar Config - facet : :class:`CompositionConfig` - Default configuration for the ``facet`` view composition operator - fieldTitle : enum('verbal', 'functional', 'plain') - Defines how Vega-Lite generates title for fields. There are three possible styles: - - - * ``"verbal"`` (Default) - displays function in a verbal style (e.g., "Sum of - field", "Year-month of date", "field (binned)"). - * ``"function"`` - displays function using parentheses and capitalized texts (e.g., - "SUM(field)", "YEARMONTH(date)", "BIN(field)"). - * ``"plain"`` - displays only the field name without functions (e.g., "field", - "date", "field"). - geoshape : :class:`MarkConfig` - Geoshape-Specific Config - header : :class:`HeaderConfig` - Header configuration, which determines default properties for all `headers - `__. - - For a full list of header configuration options, please see the `corresponding - section of in the header documentation - `__. - headerColumn : :class:`HeaderConfig` - Header configuration, which determines default properties for column `headers - `__. - - For a full list of header configuration options, please see the `corresponding - section of in the header documentation - `__. - headerFacet : :class:`HeaderConfig` - Header configuration, which determines default properties for non-row/column facet - `headers `__. - - For a full list of header configuration options, please see the `corresponding - section of in the header documentation - `__. - headerRow : :class:`HeaderConfig` - Header configuration, which determines default properties for row `headers - `__. - - For a full list of header configuration options, please see the `corresponding - section of in the header documentation - `__. - invalidValues : enum('filter', None) - Defines how Vega-Lite should handle invalid values ( ``null`` and ``NaN`` ). - - - * If set to ``"filter"`` (default), all data items with null values will be skipped - (for line, trail, and area marks) or filtered (for other marks). - * If ``null``, all data items are included. In this case, invalid values will be - interpreted as zeroes. - legend : :class:`LegendConfig` - Legend configuration, which determines default properties for all `legends - `__. For a full list of legend - configuration options, please see the `corresponding section of in the legend - documentation `__. - line : :class:`LineConfig` - Line-Specific Config - mark : :class:`MarkConfig` - Mark Config - numberFormat : string - D3 Number format for guide labels and text marks. For example "s" for SI units. Use - `D3's number format pattern `__. 
- padding : :class:`Padding` - The default visualization padding, in pixels, from the edge of the visualization - canvas to the data rectangle. If a number, specifies padding for all sides. - If an object, the value should have the format ``{"left": 5, "top": 5, "right": 5, - "bottom": 5}`` to specify padding for each side of the visualization. - - **Default value** : ``5`` - point : :class:`MarkConfig` - Point-Specific Config - projection : :class:`ProjectionConfig` - Projection configuration, which determines default properties for all `projections - `__. For a full list of - projection configuration options, please see the `corresponding section of the - projection documentation - `__. - range : :class:`RangeConfig` - An object hash that defines default range arrays or schemes for using with scales. - For a full list of scale range configuration options, please see the `corresponding - section of the scale documentation - `__. - rect : :class:`RectConfig` - Rect-Specific Config - repeat : :class:`CompositionConfig` - Default configuration for the ``repeat`` view composition operator - rule : :class:`MarkConfig` - Rule-Specific Config - scale : :class:`ScaleConfig` - Scale configuration determines default properties for all `scales - `__. For a full list of scale - configuration options, please see the `corresponding section of the scale - documentation `__. - selection : :class:`SelectionConfig` - An object hash for defining default properties for each type of selections. - square : :class:`MarkConfig` - Square-Specific Config - stack : :class:`StackOffset` - Default stack offset for stackable mark. - style : :class:`StyleConfigIndex` - An object hash that defines key-value mappings to determine default properties for - marks with a given `style - `__. The keys represent - styles names; the values have to be valid `mark configuration objects - `__. - text : :class:`TextConfig` - Text-Specific Config - tick : :class:`TickConfig` - Tick-Specific Config - timeFormat : string - Default time format for raw time values (without time units) in text marks, legend - labels and header labels. - - **Default value:** ``"%b %d, %Y"`` - **Note:** Axes automatically determine format each label automatically so this - config would not affect axes. - title : :class:`TitleConfig` - Title configuration, which determines default properties for all `titles - `__. For a full list of title - configuration options, please see the `corresponding section of the title - documentation `__. - trail : :class:`LineConfig` - Trail-Specific Config - view : :class:`ViewConfig` - Default properties for `single view plots - `__. 
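-
- **Example** : a minimal, illustrative sketch (not part of the generated schema),
- assuming the public Altair API; the DataFrame and its columns are hypothetical.
- ``Chart.configure(...)`` and the ``configure_*`` methods populate this Config
- mapping at the top level of the specification::
-
-     import altair as alt
-     import pandas as pd
-
-     df = pd.DataFrame({"a": ["A", "B", "C"], "b": [1, 2, 3]})
-     chart = (
-         alt.Chart(df)
-         .mark_bar()
-         .encode(x="a:N", y="b:Q")
-         .configure(background="white", numberFormat=".2f")
-         .configure_axis(labelFontSize=12)
-     )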
- """ - _schema = {'$ref': '#/definitions/Config'} - - def __init__(self, area=Undefined, autosize=Undefined, axis=Undefined, axisBand=Undefined, - axisBottom=Undefined, axisLeft=Undefined, axisRight=Undefined, axisTop=Undefined, - axisX=Undefined, axisY=Undefined, background=Undefined, bar=Undefined, - boxplot=Undefined, circle=Undefined, concat=Undefined, countTitle=Undefined, - errorband=Undefined, errorbar=Undefined, facet=Undefined, fieldTitle=Undefined, - geoshape=Undefined, header=Undefined, headerColumn=Undefined, headerFacet=Undefined, - headerRow=Undefined, invalidValues=Undefined, legend=Undefined, line=Undefined, - mark=Undefined, numberFormat=Undefined, padding=Undefined, point=Undefined, - projection=Undefined, range=Undefined, rect=Undefined, repeat=Undefined, - rule=Undefined, scale=Undefined, selection=Undefined, square=Undefined, - stack=Undefined, style=Undefined, text=Undefined, tick=Undefined, timeFormat=Undefined, - title=Undefined, trail=Undefined, view=Undefined, **kwds): - super(Config, self).__init__(area=area, autosize=autosize, axis=axis, axisBand=axisBand, - axisBottom=axisBottom, axisLeft=axisLeft, axisRight=axisRight, - axisTop=axisTop, axisX=axisX, axisY=axisY, background=background, - bar=bar, boxplot=boxplot, circle=circle, concat=concat, - countTitle=countTitle, errorband=errorband, errorbar=errorbar, - facet=facet, fieldTitle=fieldTitle, geoshape=geoshape, - header=header, headerColumn=headerColumn, headerFacet=headerFacet, - headerRow=headerRow, invalidValues=invalidValues, legend=legend, - line=line, mark=mark, numberFormat=numberFormat, padding=padding, - point=point, projection=projection, range=range, rect=rect, - repeat=repeat, rule=rule, scale=scale, selection=selection, - square=square, stack=stack, style=style, text=text, tick=tick, - timeFormat=timeFormat, title=title, trail=trail, view=view, **kwds) - - -class Cursor(VegaLiteSchema): - """Cursor schema wrapper - - enum('auto', 'default', 'none', 'context-menu', 'help', 'pointer', 'progress', 'wait', - 'cell', 'crosshair', 'text', 'vertical-text', 'alias', 'copy', 'move', 'no-drop', - 'not-allowed', 'e-resize', 'n-resize', 'ne-resize', 'nw-resize', 's-resize', 'se-resize', - 'sw-resize', 'w-resize', 'ew-resize', 'ns-resize', 'nesw-resize', 'nwse-resize', - 'col-resize', 'row-resize', 'all-scroll', 'zoom-in', 'zoom-out', 'grab', 'grabbing') - """ - _schema = {'$ref': '#/definitions/Cursor'} - - def __init__(self, *args): - super(Cursor, self).__init__(*args) - - -class Data(VegaLiteSchema): - """Data schema wrapper - - anyOf(:class:`DataSource`, :class:`Generator`) - """ - _schema = {'$ref': '#/definitions/Data'} - - def __init__(self, *args, **kwds): - super(Data, self).__init__(*args, **kwds) - - -class DataFormat(VegaLiteSchema): - """DataFormat schema wrapper - - anyOf(:class:`CsvDataFormat`, :class:`DsvDataFormat`, :class:`JsonDataFormat`, - :class:`TopoDataFormat`) - """ - _schema = {'$ref': '#/definitions/DataFormat'} - - def __init__(self, *args, **kwds): - super(DataFormat, self).__init__(*args, **kwds) - - -class CsvDataFormat(DataFormat): - """CsvDataFormat schema wrapper - - Mapping(required=[]) - - Attributes - ---------- - - parse : anyOf(:class:`Parse`, None) - If set to ``null``, disable type inference based on the spec and only use type - inference based on the data. - Alternatively, a parsing directive object can be provided for explicit data types. 
- Each property of the object corresponds to a field name, and the value to the
- desired data type (one of ``"number"``, ``"boolean"``, ``"date"``, or null (do not
- parse the field)).
- For example, ``"parse": {"modified_on": "date"}`` parses the ``modified_on`` field
- in each input record as a Date value.
-
- For ``"date"``, we parse data using JavaScript's `Date.parse()
- `__.
- Specific date formats can be provided (e.g., ``{foo: "date:'%m%d%Y'"}`` ), using
- the `d3-time-format syntax `__.
- UTC date format parsing is supported similarly (e.g., ``{foo: "utc:'%m%d%Y'"}`` ).
- See more about `UTC time
- `__
- type : enum('csv', 'tsv')
- Type of input data: ``"json"``, ``"csv"``, ``"tsv"``, ``"dsv"``.
-
- **Default value:** The default format type is determined by the extension of the
- file URL.
- If no extension is detected, ``"json"`` will be used by default.
- """
- _schema = {'$ref': '#/definitions/CsvDataFormat'}
-
- def __init__(self, parse=Undefined, type=Undefined, **kwds):
- super(CsvDataFormat, self).__init__(parse=parse, type=type, **kwds)
-
-
- class DataSource(Data):
- """DataSource schema wrapper
-
- anyOf(:class:`UrlData`, :class:`InlineData`, :class:`NamedData`)
- """
- _schema = {'$ref': '#/definitions/DataSource'}
-
- def __init__(self, *args, **kwds):
- super(DataSource, self).__init__(*args, **kwds)
-
-
- class Datasets(VegaLiteSchema):
- """Datasets schema wrapper
-
- Mapping(required=[])
- """
- _schema = {'$ref': '#/definitions/Datasets'}
-
- def __init__(self, **kwds):
- super(Datasets, self).__init__(**kwds)
-
-
- class Day(VegaLiteSchema):
- """Day schema wrapper
-
- float
- """
- _schema = {'$ref': '#/definitions/Day'}
-
- def __init__(self, *args):
- super(Day, self).__init__(*args)
-
-
- class DictInlineDataset(VegaLiteSchema):
- """DictInlineDataset schema wrapper
-
- Mapping(required=[])
- """
- _schema = {'$ref': '#/definitions/Dict<InlineDataset>'}
-
- def __init__(self, **kwds):
- super(DictInlineDataset, self).__init__(**kwds)
-
-
- class Dir(VegaLiteSchema):
- """Dir schema wrapper
-
- enum('ltr', 'rtl')
- """
- _schema = {'$ref': '#/definitions/Dir'}
-
- def __init__(self, *args):
- super(Dir, self).__init__(*args)
-
-
- class DsvDataFormat(DataFormat):
- """DsvDataFormat schema wrapper
-
- Mapping(required=[delimiter])
-
- Attributes
- ----------
-
- delimiter : string
- The delimiter between records. The delimiter must be a single character (i.e., a
- single 16-bit code unit); so, ASCII delimiters are fine, but emoji delimiters are
- not.
- parse : anyOf(:class:`Parse`, None)
- If set to ``null``, disable type inference based on the spec and only use type
- inference based on the data.
- Alternatively, a parsing directive object can be provided for explicit data types.
- Each property of the object corresponds to a field name, and the value to the
- desired data type (one of ``"number"``, ``"boolean"``, ``"date"``, or null (do not
- parse the field)).
- For example, ``"parse": {"modified_on": "date"}`` parses the ``modified_on`` field
- in each input record as a Date value.
-
- For ``"date"``, we parse data using JavaScript's `Date.parse()
- `__.
- Specific date formats can be provided (e.g., ``{foo: "date:'%m%d%Y'"}`` ), using
- the `d3-time-format syntax `__.
- UTC date format parsing is supported similarly (e.g., ``{foo: "utc:'%m%d%Y'"}`` ).
- See more about `UTC time
- `__
- type : enum('dsv')
- Type of input data: ``"json"``, ``"csv"``, ``"tsv"``, ``"dsv"``.
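-
- **Example** : a minimal, illustrative sketch (not part of the generated schema),
- assuming the public Altair API; the URL and field names are hypothetical. An
- explicit ``parse`` directive reads ``"modified_on"`` as a date and ``"count"``
- as a number, regardless of what type inference would guess::
-
-     import altair as alt
-
-     data = alt.UrlData(
-         url="https://example.com/records.csv",
-         format=alt.CsvDataFormat(
-             type="csv",
-             parse={"modified_on": "date", "count": "number"},
-         ),
-     )
-     chart = alt.Chart(data).mark_tick().encode(x="modified_on:T")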
- - **Default value:** The default format type is determined by the extension of the - file URL. - If no extension is detected, ``"json"`` will be used by default. - """ - _schema = {'$ref': '#/definitions/DsvDataFormat'} - - def __init__(self, delimiter=Undefined, parse=Undefined, type=Undefined, **kwds): - super(DsvDataFormat, self).__init__(delimiter=delimiter, parse=parse, type=type, **kwds) - - -class Element(VegaLiteSchema): - """Element schema wrapper - - string - """ - _schema = {'$ref': '#/definitions/Element'} - - def __init__(self, *args): - super(Element, self).__init__(*args) - - -class Encoding(VegaLiteSchema): - """Encoding schema wrapper - - Mapping(required=[]) - - Attributes - ---------- - - color : anyOf(:class:`StringFieldDefWithCondition`, :class:`StringValueDefWithCondition`) - Color of the marks – either fill or stroke color based on the ``filled`` property - of mark definition. - By default, ``color`` represents fill color for ``"area"``, ``"bar"``, ``"tick"``, - ``"text"``, ``"trail"``, ``"circle"``, and ``"square"`` / stroke color for - ``"line"`` and ``"point"``. - - **Default value:** If undefined, the default color depends on `mark config - `__ 's ``color`` property. - - *Note:* - 1) For fine-grained control over both fill and stroke colors of the marks, please - use the ``fill`` and ``stroke`` channels. If either ``fill`` or ``stroke`` channel - is specified, ``color`` channel will be ignored. - 2) See the scale documentation for more information about customizing `color scheme - `__. - detail : anyOf(:class:`FieldDefWithoutScale`, List(:class:`FieldDefWithoutScale`)) - Additional levels of detail for grouping data in aggregate views and - in line, trail, and area marks without mapping data to a specific visual channel. - fill : anyOf(:class:`StringFieldDefWithCondition`, :class:`StringValueDefWithCondition`) - Fill color of the marks. - **Default value:** If undefined, the default color depends on `mark config - `__ 's ``color`` property. - - *Note:* When using ``fill`` channel, ``color`` channel will be ignored. To customize - both fill and stroke, please use ``fill`` and ``stroke`` channels (not ``fill`` and - ``color`` ). - fillOpacity : anyOf(:class:`NumericFieldDefWithCondition`, - :class:`NumericValueDefWithCondition`) - Fill opacity of the marks. - - **Default value:** If undefined, the default opacity depends on `mark config - `__ 's ``fillOpacity`` - property. - href : anyOf(:class:`TextFieldDefWithCondition`, :class:`TextValueDefWithCondition`) - A URL to load upon mouse click. - key : :class:`FieldDefWithoutScale` - A data field to use as a unique key for data binding. When a visualization’s data is - updated, the key value will be used to match data elements to existing mark - instances. Use a key channel to enable object constancy for transitions over dynamic - data. - latitude : anyOf(:class:`LatLongFieldDef`, :class:`NumberValueDef`) - Latitude position of geographically projected marks. - latitude2 : anyOf(:class:`SecondaryFieldDef`, :class:`NumberValueDef`) - Latitude-2 position for geographically projected ranged ``"area"``, ``"bar"``, - ``"rect"``, and ``"rule"``. - longitude : anyOf(:class:`LatLongFieldDef`, :class:`NumberValueDef`) - Longitude position of geographically projected marks. - longitude2 : anyOf(:class:`SecondaryFieldDef`, :class:`NumberValueDef`) - Longitude-2 position for geographically projected ranged ``"area"``, ``"bar"``, - ``"rect"``, and ``"rule"``. 
- opacity : anyOf(:class:`NumericFieldDefWithCondition`, - :class:`NumericValueDefWithCondition`) - Opacity of the marks. - - **Default value:** If undefined, the default opacity depends on `mark config - `__ 's ``opacity`` property. - order : anyOf(:class:`OrderFieldDef`, List(:class:`OrderFieldDef`), :class:`NumberValueDef`) - Order of the marks. - - - * For stacked marks, this ``order`` channel encodes `stack order - `__. - * For line and trail marks, this ``order`` channel encodes order of data points in - the lines. This can be useful for creating `a connected scatterplot - `__. - Setting ``order`` to ``{"value": null}`` makes the line marks use the original - order in the data sources. - * Otherwise, this ``order`` channel encodes layer order of the marks. - - **Note** : In aggregate plots, ``order`` field should be ``aggregate`` d to avoid - creating additional aggregation grouping. - shape : anyOf(:class:`ShapeFieldDefWithCondition`, :class:`ShapeValueDefWithCondition`) - Shape of the mark. - - - #. - For ``point`` marks the supported values include: - - - * plotting shapes: ``"circle"``, ``"square"``, ``"cross"``, ``"diamond"``, - ``"triangle-up"``, ``"triangle-down"``, ``"triangle-right"``, or - ``"triangle-left"``. - * the line symbol ``"stroke"`` - * centered directional shapes ``"arrow"``, ``"wedge"``, or ``"triangle"`` - * a custom `SVG path string - `__ (For correct - sizing, custom shape paths should be defined within a square bounding box with - coordinates ranging from -1 to 1 along both the x and y dimensions.) - - #. - For ``geoshape`` marks it should be a field definition of the geojson data - - **Default value:** If undefined, the default shape depends on `mark config - `__ 's ``shape`` - property. ( ``"circle"`` if unset.) - size : anyOf(:class:`NumericFieldDefWithCondition`, :class:`NumericValueDefWithCondition`) - Size of the mark. - - - * For ``"point"``, ``"square"`` and ``"circle"``, – the symbol size, or pixel area - of the mark. - * For ``"bar"`` and ``"tick"`` – the bar and tick's size. - * For ``"text"`` – the text's font size. - * Size is unsupported for ``"line"``, ``"area"``, and ``"rect"``. (Use ``"trail"`` - instead of line with varying size) - stroke : anyOf(:class:`StringFieldDefWithCondition`, :class:`StringValueDefWithCondition`) - Stroke color of the marks. - **Default value:** If undefined, the default color depends on `mark config - `__ 's ``color`` property. - - *Note:* When using ``stroke`` channel, ``color`` channel will be ignored. To - customize both stroke and fill, please use ``stroke`` and ``fill`` channels (not - ``stroke`` and ``color`` ). - strokeOpacity : anyOf(:class:`NumericFieldDefWithCondition`, - :class:`NumericValueDefWithCondition`) - Stroke opacity of the marks. - - **Default value:** If undefined, the default opacity depends on `mark config - `__ 's ``strokeOpacity`` - property. - strokeWidth : anyOf(:class:`NumericFieldDefWithCondition`, - :class:`NumericValueDefWithCondition`) - Stroke width of the marks. - - **Default value:** If undefined, the default stroke width depends on `mark config - `__ 's ``strokeWidth`` - property. - text : anyOf(:class:`TextFieldDefWithCondition`, :class:`TextValueDefWithCondition`) - Text of the ``text`` mark. - tooltip : anyOf(:class:`TextFieldDefWithCondition`, :class:`TextValueDefWithCondition`, - List(:class:`TextFieldDef`), None) - The tooltip text to show upon mouse hover. 
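-
- **Example** : a minimal, illustrative sketch (not part of the generated schema),
- assuming the public Altair API; the DataFrame and its columns are hypothetical.
- ``encode()`` fills in this Encoding mapping, one channel per argument::
-
-     import altair as alt
-     import pandas as pd
-
-     df = pd.DataFrame({"year": [2018, 2019, 2020], "sales": [10.0, 12.5, 9.0]})
-     chart = alt.Chart(df).mark_line(point=True).encode(
-         x="year:O",
-         y="sales:Q",
-         tooltip=["year:O", "sales:Q"],
-     )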
- x : anyOf(:class:`PositionFieldDef`, :class:`XValueDef`) - X coordinates of the marks, or width of horizontal ``"bar"`` and ``"area"`` without - specified ``x2`` or ``width``. - - The ``value`` of this channel can be a number or a string ``"width"`` for the width - of the plot. - x2 : anyOf(:class:`SecondaryFieldDef`, :class:`XValueDef`) - X2 coordinates for ranged ``"area"``, ``"bar"``, ``"rect"``, and ``"rule"``. - - The ``value`` of this channel can be a number or a string ``"width"`` for the width - of the plot. - xError : anyOf(:class:`SecondaryFieldDef`, :class:`NumberValueDef`) - Error value of x coordinates for error specified ``"errorbar"`` and ``"errorband"``. - xError2 : anyOf(:class:`SecondaryFieldDef`, :class:`NumberValueDef`) - Secondary error value of x coordinates for error specified ``"errorbar"`` and - ``"errorband"``. - y : anyOf(:class:`PositionFieldDef`, :class:`YValueDef`) - Y coordinates of the marks, or height of vertical ``"bar"`` and ``"area"`` without - specified ``y2`` or ``height``. - - The ``value`` of this channel can be a number or a string ``"height"`` for the - height of the plot. - y2 : anyOf(:class:`SecondaryFieldDef`, :class:`YValueDef`) - Y2 coordinates for ranged ``"area"``, ``"bar"``, ``"rect"``, and ``"rule"``. - - The ``value`` of this channel can be a number or a string ``"height"`` for the - height of the plot. - yError : anyOf(:class:`SecondaryFieldDef`, :class:`NumberValueDef`) - Error value of y coordinates for error specified ``"errorbar"`` and ``"errorband"``. - yError2 : anyOf(:class:`SecondaryFieldDef`, :class:`NumberValueDef`) - Secondary error value of y coordinates for error specified ``"errorbar"`` and - ``"errorband"``. - """ - _schema = {'$ref': '#/definitions/Encoding'} - - def __init__(self, color=Undefined, detail=Undefined, fill=Undefined, fillOpacity=Undefined, - href=Undefined, key=Undefined, latitude=Undefined, latitude2=Undefined, - longitude=Undefined, longitude2=Undefined, opacity=Undefined, order=Undefined, - shape=Undefined, size=Undefined, stroke=Undefined, strokeOpacity=Undefined, - strokeWidth=Undefined, text=Undefined, tooltip=Undefined, x=Undefined, x2=Undefined, - xError=Undefined, xError2=Undefined, y=Undefined, y2=Undefined, yError=Undefined, - yError2=Undefined, **kwds): - super(Encoding, self).__init__(color=color, detail=detail, fill=fill, fillOpacity=fillOpacity, - href=href, key=key, latitude=latitude, latitude2=latitude2, - longitude=longitude, longitude2=longitude2, opacity=opacity, - order=order, shape=shape, size=size, stroke=stroke, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, text=text, - tooltip=tooltip, x=x, x2=x2, xError=xError, xError2=xError2, y=y, - y2=y2, yError=yError, yError2=yError2, **kwds) - - -class ErrorBand(CompositeMark): - """ErrorBand schema wrapper - - enum('errorband') - """ - _schema = {'$ref': '#/definitions/ErrorBand'} - - def __init__(self, *args): - super(ErrorBand, self).__init__(*args) - - -class ErrorBandConfig(VegaLiteSchema): - """ErrorBandConfig schema wrapper - - Mapping(required=[]) - - Attributes - ---------- - - band : anyOf(boolean, :class:`MarkConfig`) - - borders : anyOf(boolean, :class:`MarkConfig`) - - extent : :class:`ErrorBarExtent` - The extent of the band. Available options include: - - - * ``"ci"`` : Extend the band to the confidence interval of the mean. - * ``"stderr"`` : The size of band are set to the value of standard error, extending - from the mean. 
- * ``"stdev"`` : The size of band are set to the value of standard deviation, - extending from the mean. - * ``"iqr"`` : Extend the band to the q1 and q3. - - **Default value:** ``"stderr"``. - interpolate : :class:`Interpolate` - The line interpolation method for the error band. One of the following: - - - * ``"linear"`` : piecewise linear segments, as in a polyline. - * ``"linear-closed"`` : close the linear segments to form a polygon. - * ``"step"`` : a piecewise constant function (a step function) consisting of - alternating horizontal and vertical lines. The y-value changes at the midpoint of - each pair of adjacent x-values. - * ``"step-before"`` : a piecewise constant function (a step function) consisting of - alternating horizontal and vertical lines. The y-value changes before the x-value. - * ``"step-after"`` : a piecewise constant function (a step function) consisting of - alternating horizontal and vertical lines. The y-value changes after the x-value. - * ``"basis"`` : a B-spline, with control point duplication on the ends. - * ``"basis-open"`` : an open B-spline; may not intersect the start or end. - * ``"basis-closed"`` : a closed B-spline, as in a loop. - * ``"cardinal"`` : a Cardinal spline, with control point duplication on the ends. - * ``"cardinal-open"`` : an open Cardinal spline; may not intersect the start or end, - but will intersect other control points. - * ``"cardinal-closed"`` : a closed Cardinal spline, as in a loop. - * ``"bundle"`` : equivalent to basis, except the tension parameter is used to - straighten the spline. - * ``"monotone"`` : cubic interpolation that preserves monotonicity in y. - tension : float - The tension parameter for the interpolation type of the error band. - """ - _schema = {'$ref': '#/definitions/ErrorBandConfig'} - - def __init__(self, band=Undefined, borders=Undefined, extent=Undefined, interpolate=Undefined, - tension=Undefined, **kwds): - super(ErrorBandConfig, self).__init__(band=band, borders=borders, extent=extent, - interpolate=interpolate, tension=tension, **kwds) - - -class ErrorBandDef(CompositeMarkDef): - """ErrorBandDef schema wrapper - - Mapping(required=[type]) - - Attributes - ---------- - - type : :class:`ErrorBand` - The mark type. This could a primitive mark type - (one of ``"bar"``, ``"circle"``, ``"square"``, ``"tick"``, ``"line"``, - ``"area"``, ``"point"``, ``"geoshape"``, ``"rule"``, and ``"text"`` ) - or a composite mark type ( ``"boxplot"``, ``"errorband"``, ``"errorbar"`` ). - band : anyOf(boolean, :class:`MarkConfig`) - - borders : anyOf(boolean, :class:`MarkConfig`) - - clip : boolean - Whether a composite mark be clipped to the enclosing group’s width and height. - color : :class:`Color` - Default color. Note that ``fill`` and ``stroke`` have higher precedence than - ``color`` and will override ``color``. - - **Default value:** :raw-html:`` - ``"#4682b4"`` - - **Note:** This property cannot be used in a `style config - `__. - extent : :class:`ErrorBarExtent` - The extent of the band. Available options include: - - - * ``"ci"`` : Extend the band to the confidence interval of the mean. - * ``"stderr"`` : The size of band are set to the value of standard error, extending - from the mean. - * ``"stdev"`` : The size of band are set to the value of standard deviation, - extending from the mean. - * ``"iqr"`` : Extend the band to the q1 and q3. - - **Default value:** ``"stderr"``. - interpolate : :class:`Interpolate` - The line interpolation method for the error band. 
One of the following:
-
-
- * ``"linear"`` : piecewise linear segments, as in a polyline.
- * ``"linear-closed"`` : close the linear segments to form a polygon.
- * ``"step"`` : a piecewise constant function (a step function) consisting of
- alternating horizontal and vertical lines. The y-value changes at the midpoint of
- each pair of adjacent x-values.
- * ``"step-before"`` : a piecewise constant function (a step function) consisting of
- alternating horizontal and vertical lines. The y-value changes before the x-value.
- * ``"step-after"`` : a piecewise constant function (a step function) consisting of
- alternating horizontal and vertical lines. The y-value changes after the x-value.
- * ``"basis"`` : a B-spline, with control point duplication on the ends.
- * ``"basis-open"`` : an open B-spline; may not intersect the start or end.
- * ``"basis-closed"`` : a closed B-spline, as in a loop.
- * ``"cardinal"`` : a Cardinal spline, with control point duplication on the ends.
- * ``"cardinal-open"`` : an open Cardinal spline; may not intersect the start or end,
- but will intersect other control points.
- * ``"cardinal-closed"`` : a closed Cardinal spline, as in a loop.
- * ``"bundle"`` : equivalent to basis, except the tension parameter is used to
- straighten the spline.
- * ``"monotone"`` : cubic interpolation that preserves monotonicity in y.
- opacity : float
- The opacity (value between [0,1]) of the mark.
- orient : :class:`Orientation`
- Orientation of the error band. This is normally automatically determined, but can be
- specified when the orientation is ambiguous and cannot be automatically determined.
- tension : float
- The tension parameter for the interpolation type of the error band.
- """
- _schema = {'$ref': '#/definitions/ErrorBandDef'}
-
- def __init__(self, type=Undefined, band=Undefined, borders=Undefined, clip=Undefined,
- color=Undefined, extent=Undefined, interpolate=Undefined, opacity=Undefined,
- orient=Undefined, tension=Undefined, **kwds):
- super(ErrorBandDef, self).__init__(type=type, band=band, borders=borders, clip=clip,
- color=color, extent=extent, interpolate=interpolate,
- opacity=opacity, orient=orient, tension=tension, **kwds)
-
-
- class ErrorBar(CompositeMark):
- """ErrorBar schema wrapper
-
- enum('errorbar')
- """
- _schema = {'$ref': '#/definitions/ErrorBar'}
-
- def __init__(self, *args):
- super(ErrorBar, self).__init__(*args)
-
-
- class ErrorBarConfig(VegaLiteSchema):
- """ErrorBarConfig schema wrapper
-
- Mapping(required=[])
-
- Attributes
- ----------
-
- extent : :class:`ErrorBarExtent`
- The extent of the rule. Available options include:
-
-
- * ``"ci"`` : Extend the rule to the confidence interval of the mean.
- * ``"stderr"`` : The size of the rule is set to the value of the standard error, extending
- from the mean.
- * ``"stdev"`` : The size of the rule is set to the value of the standard deviation,
- extending from the mean.
- * ``"iqr"`` : Extend the rule to q1 and q3.
-
- **Default value:** ``"stderr"``.
- rule : anyOf(boolean, :class:`MarkConfig`)
-
- ticks : anyOf(boolean, :class:`MarkConfig`)
-
- """
- _schema = {'$ref': '#/definitions/ErrorBarConfig'}
-
- def __init__(self, extent=Undefined, rule=Undefined, ticks=Undefined, **kwds):
- super(ErrorBarConfig, self).__init__(extent=extent, rule=rule, ticks=ticks, **kwds)
-
-
- class ErrorBarDef(CompositeMarkDef):
- """ErrorBarDef schema wrapper
-
- Mapping(required=[type])
-
- Attributes
- ----------
-
- type : :class:`ErrorBar`
- The mark type. This can be a primitive mark type
- (one of ``"bar"``, ``"circle"``, ``"square"``, ``"tick"``, ``"line"``,
- ``"area"``, ``"point"``, ``"geoshape"``, ``"rule"``, and ``"text"`` )
- or a composite mark type ( ``"boxplot"``, ``"errorband"``, ``"errorbar"`` ).
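-
- **Example** : a minimal, illustrative sketch (not part of the generated schema),
- assuming the public Altair API; the DataFrame and its columns are hypothetical.
- ``mark_errorbar()`` creates this composite mark; ``extent`` picks the rule length::
-
-     import altair as alt
-     import pandas as pd
-
-     df = pd.DataFrame({"group": ["A", "A", "A", "B", "B", "B"],
-                        "value": [1.0, 2.0, 3.0, 2.0, 4.0, 6.0]})
-     bars = alt.Chart(df).mark_errorbar(extent="stdev").encode(
-         x="group:N",
-         y=alt.Y("value:Q"),
-     )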
- clip : boolean
- Whether a composite mark should be clipped to the enclosing group’s width and height.
- color : :class:`Color`
- Default color. Note that ``fill`` and ``stroke`` have higher precedence than
- ``color`` and will override ``color``.
-
- **Default value:** :raw-html:``
- ``"#4682b4"``
-
- **Note:** This property cannot be used in a `style config
- `__.
- extent : :class:`ErrorBarExtent`
- The extent of the rule. Available options include:
-
-
- * ``"ci"`` : Extend the rule to the confidence interval of the mean.
- * ``"stderr"`` : The size of the rule is set to the value of the standard error, extending
- from the mean.
- * ``"stdev"`` : The size of the rule is set to the value of the standard deviation,
- extending from the mean.
- * ``"iqr"`` : Extend the rule to q1 and q3.
-
- **Default value:** ``"stderr"``.
- opacity : float
- The opacity (value between [0,1]) of the mark.
- orient : :class:`Orientation`
- Orientation of the error bar. This is normally automatically determined, but can be
- specified when the orientation is ambiguous and cannot be automatically determined.
- rule : anyOf(boolean, :class:`MarkConfig`)
-
- ticks : anyOf(boolean, :class:`MarkConfig`)
-
- """
- _schema = {'$ref': '#/definitions/ErrorBarDef'}
-
- def __init__(self, type=Undefined, clip=Undefined, color=Undefined, extent=Undefined,
- opacity=Undefined, orient=Undefined, rule=Undefined, ticks=Undefined, **kwds):
- super(ErrorBarDef, self).__init__(type=type, clip=clip, color=color, extent=extent,
- opacity=opacity, orient=orient, rule=rule, ticks=ticks, **kwds)
-
-
- class ErrorBarExtent(VegaLiteSchema):
- """ErrorBarExtent schema wrapper
-
- enum('ci', 'iqr', 'stderr', 'stdev')
- """
- _schema = {'$ref': '#/definitions/ErrorBarExtent'}
-
- def __init__(self, *args):
- super(ErrorBarExtent, self).__init__(*args)
-
-
- class EventStream(VegaLiteSchema):
- """EventStream schema wrapper
-
- Any
- """
- _schema = {'$ref': '#/definitions/EventStream'}
-
- def __init__(self, *args, **kwds):
- super(EventStream, self).__init__(*args, **kwds)
-
-
- class FacetFieldDef(VegaLiteSchema):
- """FacetFieldDef schema wrapper
-
- Mapping(required=[type])
-
- Attributes
- ----------
-
- type : :class:`StandardType`
- The encoded field's type of measurement ( ``"quantitative"``, ``"temporal"``,
- ``"ordinal"``, or ``"nominal"`` ).
- It can also be a ``"geojson"`` type for encoding `'geoshape'
- `__.
-
- **Note:**
-
-
- * Data values for a temporal field can be either a date-time string (e.g.,
- ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``, ``"2015"`` ) or a
- timestamp number (e.g., ``1552199579097`` ).
- * Data ``type`` describes the semantics of the data rather than the primitive data
- types ( ``number``, ``string``, etc.). The same primitive data type can have
- different types of measurement. For example, numeric data can represent
- quantitative, ordinal, or nominal data.
- * When using with `bin `__, the
- ``type`` property can be either ``"quantitative"`` (for using a linear bin scale)
- or `"ordinal" (for using an ordinal bin scale)
- `__.
- * When using with `timeUnit
- `__, the ``type`` property
- can be either ``"temporal"`` (for using a temporal scale) or `"ordinal" (for using
- an ordinal scale) `__.
- * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat", "type": "quantitative"}``. The ``"type"`` of the aggregate output - is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - aggregate : :class:`Aggregate` - Aggregation function for the field - (e.g., ``mean``, ``sum``, ``median``, ``min``, ``max``, ``count`` ). - - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - bin : anyOf(boolean, :class:`BinParams`, None) - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating that the - data for ``x`` or ``y`` channel are binned before they are imported into Vega-Lite ( - ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value - or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. - - **Notes:** - 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access nested - objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). - If field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). - See more details about escaping in the `field documentation - `__. - 2) ``field`` is not required if ``aggregate`` is ``count``. - header : :class:`Header` - An object defining properties of a facet's header. - sort : anyOf(:class:`SortArray`, :class:`SortOrder`, :class:`EncodingSortField`, None) - Sort order for the encoded field. - - For continuous fields (quantitative or temporal), ``sort`` can be either - ``"ascending"`` or ``"descending"``. - - For discrete fields, ``sort`` can be one of the following: - - - * ``"ascending"`` or ``"descending"`` -- for sorting by the values' natural order in - Javascript. - * `A sort field definition - `__ for sorting by - another field. - * `An array specifying the field values in preferred order - `__. In this case, the - sort order will obey the values in the array, followed by any unspecified values - in their original order. For discrete time field, values in the sort array can be - `date-time definition objects `__. In addition, for time units - ``"month"`` and ``"day"``, the values can be the month or day names (case - insensitive) or their 3-letter initials (e.g., ``"Mon"``, ``"Tue"`` ). - * ``null`` indicating no sort. - - **Default value:** ``"ascending"`` - - **Note:** ``null`` is not supported for ``row`` and ``column``. 
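-
- **Example** : a minimal, illustrative sketch (not part of the generated schema),
- assuming the public Altair API; the DataFrame and its columns are hypothetical.
- A row facet compiles to a FacetFieldDef, and ``sort`` orders the facet panels::
-
-     import altair as alt
-     import pandas as pd
-
-     df = pd.DataFrame({"site": ["X", "X", "Y", "Y"], "yield": [1, 2, 3, 4]})
-     chart = alt.Chart(df).mark_bar().encode(
-         x="yield:Q",
-         row=alt.Row("site:N", sort="descending"),
-     )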
- timeUnit : :class:`TimeUnit` - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field. - or `a temporal field that gets casted as ordinal - `__. - - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(string, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - """ - _schema = {'$ref': '#/definitions/FacetFieldDef'} - - def __init__(self, type=Undefined, aggregate=Undefined, bin=Undefined, field=Undefined, - header=Undefined, sort=Undefined, timeUnit=Undefined, title=Undefined, **kwds): - super(FacetFieldDef, self).__init__(type=type, aggregate=aggregate, bin=bin, field=field, - header=header, sort=sort, timeUnit=timeUnit, title=title, - **kwds) - - -class FacetMapping(VegaLiteSchema): - """FacetMapping schema wrapper - - Mapping(required=[]) - - Attributes - ---------- - - column : :class:`FacetFieldDef` - A field definition for the horizontal facet of trellis plots. - row : :class:`FacetFieldDef` - A field definition for the vertical facet of trellis plots. - """ - _schema = {'$ref': '#/definitions/FacetMapping'} - - def __init__(self, column=Undefined, row=Undefined, **kwds): - super(FacetMapping, self).__init__(column=column, row=row, **kwds) - - -class FacetedEncoding(VegaLiteSchema): - """FacetedEncoding schema wrapper - - Mapping(required=[]) - - Attributes - ---------- - - color : anyOf(:class:`StringFieldDefWithCondition`, :class:`StringValueDefWithCondition`) - Color of the marks – either fill or stroke color based on the ``filled`` property - of mark definition. - By default, ``color`` represents fill color for ``"area"``, ``"bar"``, ``"tick"``, - ``"text"``, ``"trail"``, ``"circle"``, and ``"square"`` / stroke color for - ``"line"`` and ``"point"``. - - **Default value:** If undefined, the default color depends on `mark config - `__ 's ``color`` property. - - *Note:* - 1) For fine-grained control over both fill and stroke colors of the marks, please - use the ``fill`` and ``stroke`` channels. If either ``fill`` or ``stroke`` channel - is specified, ``color`` channel will be ignored. - 2) See the scale documentation for more information about customizing `color scheme - `__. - column : :class:`FacetFieldDef` - A field definition for the horizontal facet of trellis plots. - detail : anyOf(:class:`FieldDefWithoutScale`, List(:class:`FieldDefWithoutScale`)) - Additional levels of detail for grouping data in aggregate views and - in line, trail, and area marks without mapping data to a specific visual channel. - facet : :class:`FacetFieldDef` - A field definition for the (flexible) facet of trellis plots. - - If either ``row`` or ``column`` is specified, this channel will be ignored. 
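-
-        For example, a minimal sketch (illustrative; assumes a pandas DataFrame
-        ``df`` with hypothetical columns ``x``, ``y``, and ``site``)::
-
-            import altair as alt
-            import pandas as pd
-
-            df = pd.DataFrame({"x": [1, 2], "y": [3, 4], "site": ["a", "b"]})
-
-            # The `facet` channel produces one sub-view per distinct `site` value.
-            chart = alt.Chart(df).mark_point().encode(
-                x="x:Q", y="y:Q", facet="site:N"
-            )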
- fill : anyOf(:class:`StringFieldDefWithCondition`, :class:`StringValueDefWithCondition`) - Fill color of the marks. - **Default value:** If undefined, the default color depends on `mark config - `__ 's ``color`` property. - - *Note:* When using ``fill`` channel, ``color`` channel will be ignored. To customize - both fill and stroke, please use ``fill`` and ``stroke`` channels (not ``fill`` and - ``color`` ). - fillOpacity : anyOf(:class:`NumericFieldDefWithCondition`, - :class:`NumericValueDefWithCondition`) - Fill opacity of the marks. - - **Default value:** If undefined, the default opacity depends on `mark config - `__ 's ``fillOpacity`` - property. - href : anyOf(:class:`TextFieldDefWithCondition`, :class:`TextValueDefWithCondition`) - A URL to load upon mouse click. - key : :class:`FieldDefWithoutScale` - A data field to use as a unique key for data binding. When a visualization’s data is - updated, the key value will be used to match data elements to existing mark - instances. Use a key channel to enable object constancy for transitions over dynamic - data. - latitude : anyOf(:class:`LatLongFieldDef`, :class:`NumberValueDef`) - Latitude position of geographically projected marks. - latitude2 : anyOf(:class:`SecondaryFieldDef`, :class:`NumberValueDef`) - Latitude-2 position for geographically projected ranged ``"area"``, ``"bar"``, - ``"rect"``, and ``"rule"``. - longitude : anyOf(:class:`LatLongFieldDef`, :class:`NumberValueDef`) - Longitude position of geographically projected marks. - longitude2 : anyOf(:class:`SecondaryFieldDef`, :class:`NumberValueDef`) - Longitude-2 position for geographically projected ranged ``"area"``, ``"bar"``, - ``"rect"``, and ``"rule"``. - opacity : anyOf(:class:`NumericFieldDefWithCondition`, - :class:`NumericValueDefWithCondition`) - Opacity of the marks. - - **Default value:** If undefined, the default opacity depends on `mark config - `__ 's ``opacity`` property. - order : anyOf(:class:`OrderFieldDef`, List(:class:`OrderFieldDef`), :class:`NumberValueDef`) - Order of the marks. - - - * For stacked marks, this ``order`` channel encodes `stack order - `__. - * For line and trail marks, this ``order`` channel encodes order of data points in - the lines. This can be useful for creating `a connected scatterplot - `__. - Setting ``order`` to ``{"value": null}`` makes the line marks use the original - order in the data sources. - * Otherwise, this ``order`` channel encodes layer order of the marks. - - **Note** : In aggregate plots, ``order`` field should be ``aggregate`` d to avoid - creating additional aggregation grouping. - row : :class:`FacetFieldDef` - A field definition for the vertical facet of trellis plots. - shape : anyOf(:class:`ShapeFieldDefWithCondition`, :class:`ShapeValueDefWithCondition`) - Shape of the mark. - - - #. - For ``point`` marks the supported values include: - - - * plotting shapes: ``"circle"``, ``"square"``, ``"cross"``, ``"diamond"``, - ``"triangle-up"``, ``"triangle-down"``, ``"triangle-right"``, or - ``"triangle-left"``. - * the line symbol ``"stroke"`` - * centered directional shapes ``"arrow"``, ``"wedge"``, or ``"triangle"`` - * a custom `SVG path string - `__ (For correct - sizing, custom shape paths should be defined within a square bounding box with - coordinates ranging from -1 to 1 along both the x and y dimensions.) - - #. - For ``geoshape`` marks it should be a field definition of the geojson data - - **Default value:** If undefined, the default shape depends on `mark config - `__ 's ``shape`` - property. 
( ``"circle"`` if unset.) - size : anyOf(:class:`NumericFieldDefWithCondition`, :class:`NumericValueDefWithCondition`) - Size of the mark. - - - * For ``"point"``, ``"square"`` and ``"circle"``, – the symbol size, or pixel area - of the mark. - * For ``"bar"`` and ``"tick"`` – the bar and tick's size. - * For ``"text"`` – the text's font size. - * Size is unsupported for ``"line"``, ``"area"``, and ``"rect"``. (Use ``"trail"`` - instead of line with varying size) - stroke : anyOf(:class:`StringFieldDefWithCondition`, :class:`StringValueDefWithCondition`) - Stroke color of the marks. - **Default value:** If undefined, the default color depends on `mark config - `__ 's ``color`` property. - - *Note:* When using ``stroke`` channel, ``color`` channel will be ignored. To - customize both stroke and fill, please use ``stroke`` and ``fill`` channels (not - ``stroke`` and ``color`` ). - strokeOpacity : anyOf(:class:`NumericFieldDefWithCondition`, - :class:`NumericValueDefWithCondition`) - Stroke opacity of the marks. - - **Default value:** If undefined, the default opacity depends on `mark config - `__ 's ``strokeOpacity`` - property. - strokeWidth : anyOf(:class:`NumericFieldDefWithCondition`, - :class:`NumericValueDefWithCondition`) - Stroke width of the marks. - - **Default value:** If undefined, the default stroke width depends on `mark config - `__ 's ``strokeWidth`` - property. - text : anyOf(:class:`TextFieldDefWithCondition`, :class:`TextValueDefWithCondition`) - Text of the ``text`` mark. - tooltip : anyOf(:class:`TextFieldDefWithCondition`, :class:`TextValueDefWithCondition`, - List(:class:`TextFieldDef`), None) - The tooltip text to show upon mouse hover. - x : anyOf(:class:`PositionFieldDef`, :class:`XValueDef`) - X coordinates of the marks, or width of horizontal ``"bar"`` and ``"area"`` without - specified ``x2`` or ``width``. - - The ``value`` of this channel can be a number or a string ``"width"`` for the width - of the plot. - x2 : anyOf(:class:`SecondaryFieldDef`, :class:`XValueDef`) - X2 coordinates for ranged ``"area"``, ``"bar"``, ``"rect"``, and ``"rule"``. - - The ``value`` of this channel can be a number or a string ``"width"`` for the width - of the plot. - xError : anyOf(:class:`SecondaryFieldDef`, :class:`NumberValueDef`) - Error value of x coordinates for error specified ``"errorbar"`` and ``"errorband"``. - xError2 : anyOf(:class:`SecondaryFieldDef`, :class:`NumberValueDef`) - Secondary error value of x coordinates for error specified ``"errorbar"`` and - ``"errorband"``. - y : anyOf(:class:`PositionFieldDef`, :class:`YValueDef`) - Y coordinates of the marks, or height of vertical ``"bar"`` and ``"area"`` without - specified ``y2`` or ``height``. - - The ``value`` of this channel can be a number or a string ``"height"`` for the - height of the plot. - y2 : anyOf(:class:`SecondaryFieldDef`, :class:`YValueDef`) - Y2 coordinates for ranged ``"area"``, ``"bar"``, ``"rect"``, and ``"rule"``. - - The ``value`` of this channel can be a number or a string ``"height"`` for the - height of the plot. - yError : anyOf(:class:`SecondaryFieldDef`, :class:`NumberValueDef`) - Error value of y coordinates for error specified ``"errorbar"`` and ``"errorband"``. - yError2 : anyOf(:class:`SecondaryFieldDef`, :class:`NumberValueDef`) - Secondary error value of y coordinates for error specified ``"errorbar"`` and - ``"errorband"``. 
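-
-    Example
-    -------
-    A minimal sketch of a faceted encoding (illustrative; assumes a pandas
-    DataFrame ``df`` with the named columns)::
-
-        import altair as alt
-        import pandas as pd
-
-        df = pd.DataFrame({
-            "x": [1, 2, 3, 1, 2, 3],
-            "y": [4, 5, 6, 7, 8, 9],
-            "group": ["a", "a", "a", "b", "b", "b"],
-        })
-
-        chart = alt.Chart(df).mark_line().encode(
-            x="x:Q",
-            y="y:Q",
-            color="group:N",   # mark-property channel
-            column="group:N",  # horizontal facet channel
-        )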
- """ - _schema = {'$ref': '#/definitions/FacetedEncoding'} - - def __init__(self, color=Undefined, column=Undefined, detail=Undefined, facet=Undefined, - fill=Undefined, fillOpacity=Undefined, href=Undefined, key=Undefined, - latitude=Undefined, latitude2=Undefined, longitude=Undefined, longitude2=Undefined, - opacity=Undefined, order=Undefined, row=Undefined, shape=Undefined, size=Undefined, - stroke=Undefined, strokeOpacity=Undefined, strokeWidth=Undefined, text=Undefined, - tooltip=Undefined, x=Undefined, x2=Undefined, xError=Undefined, xError2=Undefined, - y=Undefined, y2=Undefined, yError=Undefined, yError2=Undefined, **kwds): - super(FacetedEncoding, self).__init__(color=color, column=column, detail=detail, facet=facet, - fill=fill, fillOpacity=fillOpacity, href=href, key=key, - latitude=latitude, latitude2=latitude2, - longitude=longitude, longitude2=longitude2, - opacity=opacity, order=order, row=row, shape=shape, - size=size, stroke=stroke, strokeOpacity=strokeOpacity, - strokeWidth=strokeWidth, text=text, tooltip=tooltip, x=x, - x2=x2, xError=xError, xError2=xError2, y=y, y2=y2, - yError=yError, yError2=yError2, **kwds) - - -class Field(VegaLiteSchema): - """Field schema wrapper - - anyOf(:class:`FieldName`, :class:`RepeatRef`) - """ - _schema = {'$ref': '#/definitions/Field'} - - def __init__(self, *args, **kwds): - super(Field, self).__init__(*args, **kwds) - - -class FieldDefWithConditionMarkPropFieldDefTypeForShapestringnull(VegaLiteSchema): - """FieldDefWithConditionMarkPropFieldDefTypeForShapestringnull schema wrapper - - Mapping(required=[type]) - A FieldDef with Condition :raw-html:`` - - Attributes - ---------- - - type : :class:`TypeForShape` - The encoded field's type of measurement ( ``"quantitative"``, ``"temporal"``, - ``"ordinal"``, or ``"nominal"`` ). - It can also be a ``"geojson"`` type for encoding `'geoshape' - `__. - - **Note:** - - - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * Data ``type`` describes the semantics of the data rather than the primitive data - types ( ``number``, ``string``, etc.). The same primitive data type can have - different types of measurement. For example, numeric data can represent - quantitative, ordinal, or nominal data. - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (for using a temporal scale) or `"ordinal" (for using - an ordinal scale) `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat", "type": "quantitative"}``. The ``"type"`` of the aggregate output - is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - aggregate : :class:`Aggregate` - Aggregation function for the field - (e.g., ``mean``, ``sum``, ``median``, ``min``, ``max``, ``count`` ). - - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. 
- bin : anyOf(boolean, :class:`BinParams`, None) - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating that the - data for ``x`` or ``y`` channel are binned before they are imported into Vega-Lite ( - ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. - condition : anyOf(:class:`ConditionalStringValueDef`, - List(:class:`ConditionalStringValueDef`)) - One or more value definition(s) with `a selection or a test predicate - `__. - - **Note:** A field definition's ``condition`` property can only contain `conditional - value definitions `__ - since Vega-Lite only allows at most one encoded field per encoding channel. - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value - or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. - - **Notes:** - 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access nested - objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). - If field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). - See more details about escaping in the `field documentation - `__. - 2) ``field`` is not required if ``aggregate`` is ``count``. - legend : anyOf(:class:`Legend`, None) - An object defining properties of the legend. - If ``null``, the legend for the encoding channel will be removed. - - **Default value:** If undefined, default `legend properties - `__ are applied. - - **See also:** `legend `__ - documentation. - scale : anyOf(:class:`Scale`, None) - An object defining properties of the channel's scale, which is the function that - transforms values in the data domain (numbers, dates, strings, etc) to visual values - (pixels, colors, sizes) of the encoding channels. - - If ``null``, the scale will be `disabled and the data value will be directly encoded - `__. - - **Default value:** If undefined, default `scale properties - `__ are applied. - - **See also:** `scale `__ - documentation. - sort : :class:`Sort` - Sort order for the encoded field. - - For continuous fields (quantitative or temporal), ``sort`` can be either - ``"ascending"`` or ``"descending"``. - - For discrete fields, ``sort`` can be one of the following: - - - * ``"ascending"`` or ``"descending"`` -- for sorting by the values' natural order in - Javascript. - * `A sort-by-encoding definition - `__ for sorting - by another encoding channel. (This type of sort definition is not available for - ``row`` and ``column`` channels.) - * `A sort field definition - `__ for sorting by - another field. - * `An array specifying the field values in preferred order - `__. In this case, the - sort order will obey the values in the array, followed by any unspecified values - in their original order. For discrete time field, values in the sort array can be - `date-time definition objects `__. 
In addition, for time units - ``"month"`` and ``"day"``, the values can be the month or day names (case - insensitive) or their 3-letter initials (e.g., ``"Mon"``, ``"Tue"`` ). - * ``null`` indicating no sort. - - **Default value:** ``"ascending"`` - - **Note:** ``null`` is not supported for ``row`` and ``column``. - - **See also:** `sort `__ - documentation. - timeUnit : :class:`TimeUnit` - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field. - or `a temporal field that gets casted as ordinal - `__. - - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(string, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - """ - _schema = {'$ref': '#/definitions/FieldDefWithCondition,(string|null)>'} - - def __init__(self, type=Undefined, aggregate=Undefined, bin=Undefined, condition=Undefined, - field=Undefined, legend=Undefined, scale=Undefined, sort=Undefined, timeUnit=Undefined, - title=Undefined, **kwds): - super(FieldDefWithConditionMarkPropFieldDefTypeForShapestringnull, self).__init__(type=type, - aggregate=aggregate, - bin=bin, - condition=condition, - field=field, - legend=legend, - scale=scale, - sort=sort, - timeUnit=timeUnit, - title=title, - **kwds) - - -class FieldDefWithConditionMarkPropFieldDefnumber(VegaLiteSchema): - """FieldDefWithConditionMarkPropFieldDefnumber schema wrapper - - Mapping(required=[type]) - A FieldDef with Condition :raw-html:`` - - Attributes - ---------- - - type : :class:`StandardType` - The encoded field's type of measurement ( ``"quantitative"``, ``"temporal"``, - ``"ordinal"``, or ``"nominal"`` ). - It can also be a ``"geojson"`` type for encoding `'geoshape' - `__. - - **Note:** - - - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * Data ``type`` describes the semantics of the data rather than the primitive data - types ( ``number``, ``string``, etc.). The same primitive data type can have - different types of measurement. For example, numeric data can represent - quantitative, ordinal, or nominal data. - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (for using a temporal scale) or `"ordinal" (for using - an ordinal scale) `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. 
For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat", "type": "quantitative"}``. The ``"type"`` of the aggregate output - is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - aggregate : :class:`Aggregate` - Aggregation function for the field - (e.g., ``mean``, ``sum``, ``median``, ``min``, ``max``, ``count`` ). - - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - bin : anyOf(boolean, :class:`BinParams`, None) - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating that the - data for ``x`` or ``y`` channel are binned before they are imported into Vega-Lite ( - ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. - condition : anyOf(:class:`ConditionalNumberValueDef`, - List(:class:`ConditionalNumberValueDef`)) - One or more value definition(s) with `a selection or a test predicate - `__. - - **Note:** A field definition's ``condition`` property can only contain `conditional - value definitions `__ - since Vega-Lite only allows at most one encoded field per encoding channel. - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value - or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. - - **Notes:** - 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access nested - objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). - If field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). - See more details about escaping in the `field documentation - `__. - 2) ``field`` is not required if ``aggregate`` is ``count``. - legend : anyOf(:class:`Legend`, None) - An object defining properties of the legend. - If ``null``, the legend for the encoding channel will be removed. - - **Default value:** If undefined, default `legend properties - `__ are applied. - - **See also:** `legend `__ - documentation. - scale : anyOf(:class:`Scale`, None) - An object defining properties of the channel's scale, which is the function that - transforms values in the data domain (numbers, dates, strings, etc) to visual values - (pixels, colors, sizes) of the encoding channels. - - If ``null``, the scale will be `disabled and the data value will be directly encoded - `__. - - **Default value:** If undefined, default `scale properties - `__ are applied. - - **See also:** `scale `__ - documentation. - sort : :class:`Sort` - Sort order for the encoded field. - - For continuous fields (quantitative or temporal), ``sort`` can be either - ``"ascending"`` or ``"descending"``. 
- - For discrete fields, ``sort`` can be one of the following: - - - * ``"ascending"`` or ``"descending"`` -- for sorting by the values' natural order in - Javascript. - * `A sort-by-encoding definition - `__ for sorting - by another encoding channel. (This type of sort definition is not available for - ``row`` and ``column`` channels.) - * `A sort field definition - `__ for sorting by - another field. - * `An array specifying the field values in preferred order - `__. In this case, the - sort order will obey the values in the array, followed by any unspecified values - in their original order. For discrete time field, values in the sort array can be - `date-time definition objects `__. In addition, for time units - ``"month"`` and ``"day"``, the values can be the month or day names (case - insensitive) or their 3-letter initials (e.g., ``"Mon"``, ``"Tue"`` ). - * ``null`` indicating no sort. - - **Default value:** ``"ascending"`` - - **Note:** ``null`` is not supported for ``row`` and ``column``. - - **See also:** `sort `__ - documentation. - timeUnit : :class:`TimeUnit` - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field. - or `a temporal field that gets casted as ordinal - `__. - - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(string, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - """ - _schema = {'$ref': '#/definitions/FieldDefWithCondition'} - - def __init__(self, type=Undefined, aggregate=Undefined, bin=Undefined, condition=Undefined, - field=Undefined, legend=Undefined, scale=Undefined, sort=Undefined, timeUnit=Undefined, - title=Undefined, **kwds): - super(FieldDefWithConditionMarkPropFieldDefnumber, self).__init__(type=type, - aggregate=aggregate, bin=bin, - condition=condition, - field=field, legend=legend, - scale=scale, sort=sort, - timeUnit=timeUnit, - title=title, **kwds) - - -class FieldDefWithConditionMarkPropFieldDefstringnull(VegaLiteSchema): - """FieldDefWithConditionMarkPropFieldDefstringnull schema wrapper - - Mapping(required=[type]) - A FieldDef with Condition :raw-html:`` - - Attributes - ---------- - - type : :class:`StandardType` - The encoded field's type of measurement ( ``"quantitative"``, ``"temporal"``, - ``"ordinal"``, or ``"nominal"`` ). - It can also be a ``"geojson"`` type for encoding `'geoshape' - `__. - - **Note:** - - - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * Data ``type`` describes the semantics of the data rather than the primitive data - types ( ``number``, ``string``, etc.). 
The same primitive data type can have - different types of measurement. For example, numeric data can represent - quantitative, ordinal, or nominal data. - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (for using a temporal scale) or `"ordinal" (for using - an ordinal scale) `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat", "type": "quantitative"}``. The ``"type"`` of the aggregate output - is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - aggregate : :class:`Aggregate` - Aggregation function for the field - (e.g., ``mean``, ``sum``, ``median``, ``min``, ``max``, ``count`` ). - - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - bin : anyOf(boolean, :class:`BinParams`, None) - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating that the - data for ``x`` or ``y`` channel are binned before they are imported into Vega-Lite ( - ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. - condition : anyOf(:class:`ConditionalStringValueDef`, - List(:class:`ConditionalStringValueDef`)) - One or more value definition(s) with `a selection or a test predicate - `__. - - **Note:** A field definition's ``condition`` property can only contain `conditional - value definitions `__ - since Vega-Lite only allows at most one encoded field per encoding channel. - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value - or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. - - **Notes:** - 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access nested - objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). - If field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). - See more details about escaping in the `field documentation - `__. - 2) ``field`` is not required if ``aggregate`` is ``count``. - legend : anyOf(:class:`Legend`, None) - An object defining properties of the legend. - If ``null``, the legend for the encoding channel will be removed. - - **Default value:** If undefined, default `legend properties - `__ are applied. - - **See also:** `legend `__ - documentation. 
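-
-        For example (illustrative), ``alt.Color("group:N", legend=None)`` removes
-        the legend for the color channel.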
- scale : anyOf(:class:`Scale`, None) - An object defining properties of the channel's scale, which is the function that - transforms values in the data domain (numbers, dates, strings, etc) to visual values - (pixels, colors, sizes) of the encoding channels. - - If ``null``, the scale will be `disabled and the data value will be directly encoded - `__. - - **Default value:** If undefined, default `scale properties - `__ are applied. - - **See also:** `scale `__ - documentation. - sort : :class:`Sort` - Sort order for the encoded field. - - For continuous fields (quantitative or temporal), ``sort`` can be either - ``"ascending"`` or ``"descending"``. - - For discrete fields, ``sort`` can be one of the following: - - - * ``"ascending"`` or ``"descending"`` -- for sorting by the values' natural order in - Javascript. - * `A sort-by-encoding definition - `__ for sorting - by another encoding channel. (This type of sort definition is not available for - ``row`` and ``column`` channels.) - * `A sort field definition - `__ for sorting by - another field. - * `An array specifying the field values in preferred order - `__. In this case, the - sort order will obey the values in the array, followed by any unspecified values - in their original order. For discrete time field, values in the sort array can be - `date-time definition objects `__. In addition, for time units - ``"month"`` and ``"day"``, the values can be the month or day names (case - insensitive) or their 3-letter initials (e.g., ``"Mon"``, ``"Tue"`` ). - * ``null`` indicating no sort. - - **Default value:** ``"ascending"`` - - **Note:** ``null`` is not supported for ``row`` and ``column``. - - **See also:** `sort `__ - documentation. - timeUnit : :class:`TimeUnit` - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field. - or `a temporal field that gets casted as ordinal - `__. - - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(string, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. 
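-
-    Example
-    -------
-    A minimal sketch of a conditional color encoding driven by an interval
-    selection (illustrative; Altair 4 API assumed)::
-
-        import altair as alt
-        import pandas as pd
-
-        df = pd.DataFrame({"x": [1, 2, 3], "y": [3, 1, 2], "group": list("aab")})
-
-        brush = alt.selection_interval()
-        chart = alt.Chart(df).mark_point().encode(
-            x="x:Q",
-            y="y:Q",
-            # Points inside the brush keep the field-based color; the rest
-            # fall back to a constant gray value.
-            color=alt.condition(brush, "group:N", alt.value("lightgray")),
-        ).add_selection(brush)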
- """ - _schema = {'$ref': '#/definitions/FieldDefWithCondition'} - - def __init__(self, type=Undefined, aggregate=Undefined, bin=Undefined, condition=Undefined, - field=Undefined, legend=Undefined, scale=Undefined, sort=Undefined, timeUnit=Undefined, - title=Undefined, **kwds): - super(FieldDefWithConditionMarkPropFieldDefstringnull, self).__init__(type=type, - aggregate=aggregate, - bin=bin, - condition=condition, - field=field, - legend=legend, - scale=scale, sort=sort, - timeUnit=timeUnit, - title=title, **kwds) - - -class FieldDefWithConditionTextFieldDefValue(VegaLiteSchema): - """FieldDefWithConditionTextFieldDefValue schema wrapper - - Mapping(required=[type]) - A FieldDef with Condition :raw-html:`` - - Attributes - ---------- - - type : :class:`StandardType` - The encoded field's type of measurement ( ``"quantitative"``, ``"temporal"``, - ``"ordinal"``, or ``"nominal"`` ). - It can also be a ``"geojson"`` type for encoding `'geoshape' - `__. - - **Note:** - - - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * Data ``type`` describes the semantics of the data rather than the primitive data - types ( ``number``, ``string``, etc.). The same primitive data type can have - different types of measurement. For example, numeric data can represent - quantitative, ordinal, or nominal data. - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (for using a temporal scale) or `"ordinal" (for using - an ordinal scale) `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat", "type": "quantitative"}``. The ``"type"`` of the aggregate output - is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - aggregate : :class:`Aggregate` - Aggregation function for the field - (e.g., ``mean``, ``sum``, ``median``, ``min``, ``max``, ``count`` ). - - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - bin : anyOf(boolean, :class:`BinParams`, enum('binned'), None) - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating that the - data for ``x`` or ``y`` channel are binned before they are imported into Vega-Lite ( - ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. 
- condition : anyOf(:class:`ConditionalValueDef`, List(:class:`ConditionalValueDef`)) - One or more value definition(s) with `a selection or a test predicate - `__. - - **Note:** A field definition's ``condition`` property can only contain `conditional - value definitions `__ - since Vega-Lite only allows at most one encoded field per encoding channel. - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value - or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. - - **Notes:** - 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access nested - objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). - If field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). - See more details about escaping in the `field documentation - `__. - 2) ``field`` is not required if ``aggregate`` is ``count``. - format : string - The text formatting pattern for labels of guides (axes, legends, headers) and text - marks. - - - * If the format type is ``"number"`` (e.g., for quantitative fields), this is D3's - `number format pattern `__. - * If the format type is ``"time"`` (e.g., for temporal fields), this is D3's `time - format pattern `__. - - See the `format documentation `__ - for more examples. - - **Default value:** Derived from `numberFormat - `__ config for number - format and from `timeFormat - `__ config for time - format. - formatType : enum('number', 'time') - The format type for labels ( ``"number"`` or ``"time"`` ). - - **Default value:** - - - * ``"time"`` for temporal fields and ordinal and nomimal fields with ``timeUnit``. - * ``"number"`` for quantitative fields as well as ordinal and nomimal fields without - ``timeUnit``. - timeUnit : :class:`TimeUnit` - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field. - or `a temporal field that gets casted as ordinal - `__. - - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(string, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. 
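-
-    Example
-    -------
-    A minimal sketch of a text encoding using a D3 number-format pattern, as
-    described by ``format`` / ``formatType`` above (illustrative)::
-
-        import altair as alt
-        import pandas as pd
-
-        df = pd.DataFrame({"label": ["a", "b"], "share": [0.123, 0.877]})
-
-        chart = alt.Chart(df).mark_text().encode(
-            x="label:N",
-            y="share:Q",
-            text=alt.Text("share:Q", format=".1%"),  # renders e.g. "12.3%"
-        )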
- """ - _schema = {'$ref': '#/definitions/FieldDefWithCondition'} - - def __init__(self, type=Undefined, aggregate=Undefined, bin=Undefined, condition=Undefined, - field=Undefined, format=Undefined, formatType=Undefined, timeUnit=Undefined, - title=Undefined, **kwds): - super(FieldDefWithConditionTextFieldDefValue, self).__init__(type=type, aggregate=aggregate, - bin=bin, condition=condition, - field=field, format=format, - formatType=formatType, - timeUnit=timeUnit, title=title, - **kwds) - - -class FieldDefWithoutScale(VegaLiteSchema): - """FieldDefWithoutScale schema wrapper - - Mapping(required=[type]) - Definition object for a data field, its type and transformation of an encoding channel. - - Attributes - ---------- - - type : :class:`StandardType` - The encoded field's type of measurement ( ``"quantitative"``, ``"temporal"``, - ``"ordinal"``, or ``"nominal"`` ). - It can also be a ``"geojson"`` type for encoding `'geoshape' - `__. - - **Note:** - - - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * Data ``type`` describes the semantics of the data rather than the primitive data - types ( ``number``, ``string``, etc.). The same primitive data type can have - different types of measurement. For example, numeric data can represent - quantitative, ordinal, or nominal data. - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (for using a temporal scale) or `"ordinal" (for using - an ordinal scale) `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat", "type": "quantitative"}``. The ``"type"`` of the aggregate output - is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - aggregate : :class:`Aggregate` - Aggregation function for the field - (e.g., ``mean``, ``sum``, ``median``, ``min``, ``max``, ``count`` ). - - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - bin : anyOf(boolean, :class:`BinParams`, enum('binned'), None) - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating that the - data for ``x`` or ``y`` channel are binned before they are imported into Vega-Lite ( - ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. 
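-
-        For example (illustrative), ``alt.Detail("value:Q", bin=True)`` applies the
-        default binning parameters, while ``bin=alt.BinParams(maxbins=20)`` sets
-        them explicitly.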
- field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value - or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. - - **Notes:** - 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access nested - objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). - If field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). - See more details about escaping in the `field documentation - `__. - 2) ``field`` is not required if ``aggregate`` is ``count``. - timeUnit : :class:`TimeUnit` - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field. - or `a temporal field that gets casted as ordinal - `__. - - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(string, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - """ - _schema = {'$ref': '#/definitions/FieldDefWithoutScale'} - - def __init__(self, type=Undefined, aggregate=Undefined, bin=Undefined, field=Undefined, - timeUnit=Undefined, title=Undefined, **kwds): - super(FieldDefWithoutScale, self).__init__(type=type, aggregate=aggregate, bin=bin, field=field, - timeUnit=timeUnit, title=title, **kwds) - - -class FieldName(Field): - """FieldName schema wrapper - - string - """ - _schema = {'$ref': '#/definitions/FieldName'} - - def __init__(self, *args): - super(FieldName, self).__init__(*args) - - -class FontStyle(VegaLiteSchema): - """FontStyle schema wrapper - - string - """ - _schema = {'$ref': '#/definitions/FontStyle'} - - def __init__(self, *args): - super(FontStyle, self).__init__(*args) - - -class FontWeight(VegaLiteSchema): - """FontWeight schema wrapper - - enum('normal', 'bold', 'lighter', 'bolder', 100, 200, 300, 400, 500, 600, 700, 800, 900) - """ - _schema = {'$ref': '#/definitions/FontWeight'} - - def __init__(self, *args): - super(FontWeight, self).__init__(*args) - - -class Generator(Data): - """Generator schema wrapper - - anyOf(:class:`SequenceGenerator`, :class:`SphereGenerator`, :class:`GraticuleGenerator`) - """ - _schema = {'$ref': '#/definitions/Generator'} - - def __init__(self, *args, **kwds): - super(Generator, self).__init__(*args, **kwds) - - -class GenericUnitSpecEncodingAnyMark(VegaLiteSchema): - """GenericUnitSpecEncodingAnyMark schema wrapper - - Mapping(required=[mark]) - Base interface for a unit (single-view) specification. 
- - Attributes - ---------- - - mark : :class:`AnyMark` - A string describing the mark type (one of ``"bar"``, ``"circle"``, ``"square"``, - ``"tick"``, ``"line"``, - ``"area"``, ``"point"``, ``"rule"``, ``"geoshape"``, and ``"text"`` ) or a `mark - definition object `__. - data : anyOf(:class:`Data`, None) - An object describing the data source. Set to ``null`` to ignore the parent's data - source. If no data is set, it is derived from the parent. - description : string - Description of this mark for commenting purpose. - encoding : :class:`Encoding` - A key-value mapping between encoding channels and definition of fields. - height : float - The height of a visualization. - - **Default value:** - - - * If a view's `autosize - `__ type is ``"fit"`` or - its y-channel has a `continuous scale - `__, the height will - be the value of `config.view.height - `__. - * For y-axis with a band or point scale: if `rangeStep - `__ is a numeric value or - unspecified, the height is `determined by the range step, paddings, and the - cardinality of the field mapped to y-channel - `__. Otherwise, if the - ``rangeStep`` is ``null``, the height will be the value of `config.view.height - `__. - * If no field is mapped to ``y`` channel, the ``height`` will be the value of - ``rangeStep``. - - **Note** : For plots with `row and column channels - `__, this represents the - height of a single view. - - **See also:** The documentation for `width and height - `__ contains more examples. - name : string - Name of the visualization for later reference. - projection : :class:`Projection` - An object defining properties of geographic projection, which will be applied to - ``shape`` path for ``"geoshape"`` marks - and to ``latitude`` and ``"longitude"`` channels for other marks. - selection : Mapping(required=[]) - A key-value mapping between selection names and definitions. - title : anyOf(string, :class:`TitleParams`) - Title for the plot. - transform : List(:class:`Transform`) - An array of data transformations such as filter and new field calculation. - view : :class:`ViewBackground` - An object defining the view background's fill and stroke. - - **Default value:** none (transparent) - width : float - The width of a visualization. - - **Default value:** This will be determined by the following rules: - - - * If a view's `autosize - `__ type is ``"fit"`` or - its x-channel has a `continuous scale - `__, the width will - be the value of `config.view.width - `__. - * For x-axis with a band or point scale: if `rangeStep - `__ is a numeric value or - unspecified, the width is `determined by the range step, paddings, and the - cardinality of the field mapped to x-channel - `__. Otherwise, if the - ``rangeStep`` is ``null``, the width will be the value of `config.view.width - `__. - * If no field is mapped to ``x`` channel, the ``width`` will be the value of - `config.scale.textXRangeStep - `__ for - ``text`` mark and the value of ``rangeStep`` for other marks. - - **Note:** For plots with `row and column channels - `__, this represents the - width of a single view. - - **See also:** The documentation for `width and height - `__ contains more examples. 
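-
-    Example
-    -------
-    A minimal single-view (unit) specification with an explicit width and
-    height (illustrative)::
-
-        import altair as alt
-        import pandas as pd
-
-        df = pd.DataFrame({"x": [1, 2, 3], "y": [3, 1, 2]})
-
-        chart = alt.Chart(df).mark_line().encode(
-            x="x:Q",
-            y="y:Q",
-        ).properties(width=300, height=200)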
- """ - _schema = {'$ref': '#/definitions/GenericUnitSpec'} - - def __init__(self, mark=Undefined, data=Undefined, description=Undefined, encoding=Undefined, - height=Undefined, name=Undefined, projection=Undefined, selection=Undefined, - title=Undefined, transform=Undefined, view=Undefined, width=Undefined, **kwds): - super(GenericUnitSpecEncodingAnyMark, self).__init__(mark=mark, data=data, - description=description, encoding=encoding, - height=height, name=name, - projection=projection, selection=selection, - title=title, transform=transform, - view=view, width=width, **kwds) - - -class GraticuleGenerator(Generator): - """GraticuleGenerator schema wrapper - - Mapping(required=[graticule]) - - Attributes - ---------- - - graticule : anyOf(enum(True), :class:`GraticuleParams`) - Generate graticule GeoJSON data for geographic reference lines. - name : string - Provide a placeholder name and bind data at runtime. - """ - _schema = {'$ref': '#/definitions/GraticuleGenerator'} - - def __init__(self, graticule=Undefined, name=Undefined, **kwds): - super(GraticuleGenerator, self).__init__(graticule=graticule, name=name, **kwds) - - -class GraticuleParams(VegaLiteSchema): - """GraticuleParams schema wrapper - - Mapping(required=[]) - - Attributes - ---------- - - extent : List(List(float)) - Sets both the major and minor extents to the same values. - extentMajor : List(List(float)) - The major extent of the graticule as a two-element array of coordinates. - extentMinor : List(List(float)) - The minor extent of the graticule as a two-element array of coordinates. - precision : float - The precision of the graticule in degrees. - - **Default value:** ``2.5`` - step : List(float) - Sets both the major and minor step angles to the same values. - stepMajor : List(float) - The major step angles of the graticule. - - **Default value:** ``[90, 360]`` - stepMinor : List(float) - The minor step angles of the graticule. - - **Default value:** ``[10, 10]`` - """ - _schema = {'$ref': '#/definitions/GraticuleParams'} - - def __init__(self, extent=Undefined, extentMajor=Undefined, extentMinor=Undefined, - precision=Undefined, step=Undefined, stepMajor=Undefined, stepMinor=Undefined, **kwds): - super(GraticuleParams, self).__init__(extent=extent, extentMajor=extentMajor, - extentMinor=extentMinor, precision=precision, step=step, - stepMajor=stepMajor, stepMinor=stepMinor, **kwds) - - -class Header(VegaLiteSchema): - """Header schema wrapper - - Mapping(required=[]) - Headers of row / column channels for faceted plots. - - Attributes - ---------- - - format : string - The text formatting pattern for labels of guides (axes, legends, headers) and text - marks. - - - * If the format type is ``"number"`` (e.g., for quantitative fields), this is D3's - `number format pattern `__. - * If the format type is ``"time"`` (e.g., for temporal fields), this is D3's `time - format pattern `__. - - See the `format documentation `__ - for more examples. - - **Default value:** Derived from `numberFormat - `__ config for number - format and from `timeFormat - `__ config for time - format. - formatType : enum('number', 'time') - The format type for labels ( ``"number"`` or ``"time"`` ). - - **Default value:** - - - * ``"time"`` for temporal fields and ordinal and nomimal fields with ``timeUnit``. - * ``"number"`` for quantitative fields as well as ordinal and nomimal fields without - ``timeUnit``. - labelAlign : :class:`Align` - Horizontal text alignment of header labels. 
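-
-        For example (illustrative), a header object can be attached to a facet
-        channel as ``alt.Row("site:N", header=alt.Header(labelAlign="left"))``.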
- labelAnchor : :class:`TitleAnchor` - The anchor position for placing the labels. One of ``"start"``, ``"middle"``, or - ``"end"``. For example, with a label orientation of top these anchor positions map - to a left-, center-, or right-aligned label. - labelAngle : float - The rotation angle of the header labels. - - **Default value:** ``0`` for column header, ``-90`` for row header. - labelColor : :class:`Color` - The color of the header label, can be in hex color code or regular color name. - labelFont : string - The font of the header label. - labelFontSize : float - The font size of the header label, in pixels. - labelFontStyle : :class:`FontStyle` - The font style of the header label. - labelLimit : float - The maximum length of the header label in pixels. The text value will be - automatically truncated if the rendered size exceeds the limit. - - **Default value:** ``0``, indicating no limit - labelOrient : :class:`Orient` - The orientation of the header label. One of ``"top"``, ``"bottom"``, ``"left"`` or - ``"right"``. - labelPadding : float - The padding, in pixel, between facet header's label and the plot. - - **Default value:** ``10`` - labels : boolean - A boolean flag indicating if labels should be included as part of the header. - - **Default value:** ``true``. - title : anyOf(string, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - titleAlign : :class:`Align` - Horizontal text alignment (to the anchor) of header titles. - titleAnchor : :class:`TitleAnchor` - The anchor position for placing the title. One of ``"start"``, ``"middle"``, or - ``"end"``. For example, with an orientation of top these anchor positions map to a - left-, center-, or right-aligned title. - titleAngle : float - The rotation angle of the header title. - - **Default value:** ``0``. - titleBaseline : :class:`TextBaseline` - Vertical text baseline for the header title. One of ``"top"``, ``"bottom"``, - ``"middle"``. - - **Default value:** ``"middle"`` - titleColor : :class:`Color` - Color of the header title, can be in hex color code or regular color name. - titleFont : string - Font of the header title. (e.g., ``"Helvetica Neue"`` ). - titleFontSize : float - Font size of the header title. - titleFontStyle : :class:`FontStyle` - The font style of the header title. - titleFontWeight : :class:`FontWeight` - Font weight of the header title. - This can be either a string (e.g ``"bold"``, ``"normal"`` ) or a number ( ``100``, - ``200``, ``300``, ..., ``900`` where ``"normal"`` = ``400`` and ``"bold"`` = ``700`` - ). - titleLimit : float - The maximum length of the header title in pixels. The text value will be - automatically truncated if the rendered size exceeds the limit. 
- - **Default value:** ``0``, indicating no limit - titleOrient : :class:`Orient` - The orientation of the header title. One of ``"top"``, ``"bottom"``, ``"left"`` or - ``"right"``. - titlePadding : float - The padding, in pixel, between facet header's title and the label. - - **Default value:** ``10`` - """ - _schema = {'$ref': '#/definitions/Header'} - - def __init__(self, format=Undefined, formatType=Undefined, labelAlign=Undefined, - labelAnchor=Undefined, labelAngle=Undefined, labelColor=Undefined, labelFont=Undefined, - labelFontSize=Undefined, labelFontStyle=Undefined, labelLimit=Undefined, - labelOrient=Undefined, labelPadding=Undefined, labels=Undefined, title=Undefined, - titleAlign=Undefined, titleAnchor=Undefined, titleAngle=Undefined, - titleBaseline=Undefined, titleColor=Undefined, titleFont=Undefined, - titleFontSize=Undefined, titleFontStyle=Undefined, titleFontWeight=Undefined, - titleLimit=Undefined, titleOrient=Undefined, titlePadding=Undefined, **kwds): - super(Header, self).__init__(format=format, formatType=formatType, labelAlign=labelAlign, - labelAnchor=labelAnchor, labelAngle=labelAngle, - labelColor=labelColor, labelFont=labelFont, - labelFontSize=labelFontSize, labelFontStyle=labelFontStyle, - labelLimit=labelLimit, labelOrient=labelOrient, - labelPadding=labelPadding, labels=labels, title=title, - titleAlign=titleAlign, titleAnchor=titleAnchor, - titleAngle=titleAngle, titleBaseline=titleBaseline, - titleColor=titleColor, titleFont=titleFont, - titleFontSize=titleFontSize, titleFontStyle=titleFontStyle, - titleFontWeight=titleFontWeight, titleLimit=titleLimit, - titleOrient=titleOrient, titlePadding=titlePadding, **kwds) - - -class HeaderConfig(VegaLiteSchema): - """HeaderConfig schema wrapper - - Mapping(required=[]) - - Attributes - ---------- - - format : string - The text formatting pattern for labels of guides (axes, legends, headers) and text - marks. - - - * If the format type is ``"number"`` (e.g., for quantitative fields), this is D3's - `number format pattern `__. - * If the format type is ``"time"`` (e.g., for temporal fields), this is D3's `time - format pattern `__. - - See the `format documentation `__ - for more examples. - - **Default value:** Derived from `numberFormat - `__ config for number - format and from `timeFormat - `__ config for time - format. - formatType : enum('number', 'time') - The format type for labels ( ``"number"`` or ``"time"`` ). - - **Default value:** - - - * ``"time"`` for temporal fields and ordinal and nomimal fields with ``timeUnit``. - * ``"number"`` for quantitative fields as well as ordinal and nomimal fields without - ``timeUnit``. - labelAlign : :class:`Align` - Horizontal text alignment of header labels. - labelAnchor : :class:`TitleAnchor` - The anchor position for placing the labels. One of ``"start"``, ``"middle"``, or - ``"end"``. For example, with a label orientation of top these anchor positions map - to a left-, center-, or right-aligned label. - labelAngle : float - The rotation angle of the header labels. - - **Default value:** ``0`` for column header, ``-90`` for row header. - labelColor : :class:`Color` - The color of the header label, can be in hex color code or regular color name. - labelFont : string - The font of the header label. - labelFontSize : float - The font size of the header label, in pixels. - labelFontStyle : :class:`FontStyle` - The font style of the header label. - labelLimit : float - The maximum length of the header label in pixels. 
The text value will be - automatically truncated if the rendered size exceeds the limit. - - **Default value:** ``0``, indicating no limit - labelOrient : :class:`Orient` - The orientation of the header label. One of ``"top"``, ``"bottom"``, ``"left"`` or - ``"right"``. - labelPadding : float - The padding, in pixel, between facet header's label and the plot. - - **Default value:** ``10`` - labels : boolean - A boolean flag indicating if labels should be included as part of the header. - - **Default value:** ``true``. - shortTimeLabels : boolean - Whether month names and weekday names should be abbreviated. - - **Default value:** ``false`` - title : None - Set to null to disable title for the axis, legend, or header. - titleAlign : :class:`Align` - Horizontal text alignment (to the anchor) of header titles. - titleAnchor : :class:`TitleAnchor` - The anchor position for placing the title. One of ``"start"``, ``"middle"``, or - ``"end"``. For example, with an orientation of top these anchor positions map to a - left-, center-, or right-aligned title. - titleAngle : float - The rotation angle of the header title. - - **Default value:** ``0``. - titleBaseline : :class:`TextBaseline` - Vertical text baseline for the header title. One of ``"top"``, ``"bottom"``, - ``"middle"``. - - **Default value:** ``"middle"`` - titleColor : :class:`Color` - Color of the header title, can be in hex color code or regular color name. - titleFont : string - Font of the header title. (e.g., ``"Helvetica Neue"`` ). - titleFontSize : float - Font size of the header title. - titleFontStyle : :class:`FontStyle` - The font style of the header title. - titleFontWeight : :class:`FontWeight` - Font weight of the header title. - This can be either a string (e.g ``"bold"``, ``"normal"`` ) or a number ( ``100``, - ``200``, ``300``, ..., ``900`` where ``"normal"`` = ``400`` and ``"bold"`` = ``700`` - ). - titleLimit : float - The maximum length of the header title in pixels. The text value will be - automatically truncated if the rendered size exceeds the limit. - - **Default value:** ``0``, indicating no limit - titleOrient : :class:`Orient` - The orientation of the header title. One of ``"top"``, ``"bottom"``, ``"left"`` or - ``"right"``. - titlePadding : float - The padding, in pixel, between facet header's title and the label. 
- - **Default value:** ``10`` - """ - _schema = {'$ref': '#/definitions/HeaderConfig'} - - def __init__(self, format=Undefined, formatType=Undefined, labelAlign=Undefined, - labelAnchor=Undefined, labelAngle=Undefined, labelColor=Undefined, labelFont=Undefined, - labelFontSize=Undefined, labelFontStyle=Undefined, labelLimit=Undefined, - labelOrient=Undefined, labelPadding=Undefined, labels=Undefined, - shortTimeLabels=Undefined, title=Undefined, titleAlign=Undefined, - titleAnchor=Undefined, titleAngle=Undefined, titleBaseline=Undefined, - titleColor=Undefined, titleFont=Undefined, titleFontSize=Undefined, - titleFontStyle=Undefined, titleFontWeight=Undefined, titleLimit=Undefined, - titleOrient=Undefined, titlePadding=Undefined, **kwds): - super(HeaderConfig, self).__init__(format=format, formatType=formatType, labelAlign=labelAlign, - labelAnchor=labelAnchor, labelAngle=labelAngle, - labelColor=labelColor, labelFont=labelFont, - labelFontSize=labelFontSize, labelFontStyle=labelFontStyle, - labelLimit=labelLimit, labelOrient=labelOrient, - labelPadding=labelPadding, labels=labels, - shortTimeLabels=shortTimeLabels, title=title, - titleAlign=titleAlign, titleAnchor=titleAnchor, - titleAngle=titleAngle, titleBaseline=titleBaseline, - titleColor=titleColor, titleFont=titleFont, - titleFontSize=titleFontSize, titleFontStyle=titleFontStyle, - titleFontWeight=titleFontWeight, titleLimit=titleLimit, - titleOrient=titleOrient, titlePadding=titlePadding, **kwds) - - -class HexColor(Color): - """HexColor schema wrapper - - string - """ - _schema = {'$ref': '#/definitions/HexColor'} - - def __init__(self, *args): - super(HexColor, self).__init__(*args) - - -class ImputeMethod(VegaLiteSchema): - """ImputeMethod schema wrapper - - enum('value', 'median', 'max', 'min', 'mean') - """ - _schema = {'$ref': '#/definitions/ImputeMethod'} - - def __init__(self, *args): - super(ImputeMethod, self).__init__(*args) - - -class ImputeParams(VegaLiteSchema): - """ImputeParams schema wrapper - - Mapping(required=[]) - - Attributes - ---------- - - frame : List(anyOf(None, float)) - A frame specification as a two-element array used to control the window over which - the specified method is applied. The array entries should either be a number - indicating the offset from the current data object, or null to indicate unbounded - rows preceding or following the current data object. For example, the value ``[-5, - 5]`` indicates that the window should include five objects preceding and five - objects following the current object. - - **Default value:** : ``[null, null]`` indicating that the window includes all - objects. - keyvals : anyOf(List(Any), :class:`ImputeSequence`) - Defines the key values that should be considered for imputation. - An array of key values or an object defining a `number sequence - `__. - - If provided, this will be used in addition to the key values observed within the - input data. If not provided, the values will be derived from all unique values of - the ``key`` field. For ``impute`` in ``encoding``, the key field is the x-field if - the y-field is imputed, or vice versa. - - If there is no impute grouping, this property *must* be specified. - method : :class:`ImputeMethod` - The imputation method to use for the field value of imputed data objects. - One of ``value``, ``mean``, ``median``, ``max`` or ``min``. - - **Default value:** ``"value"`` - value : Any - The field value to use when the imputation ``method`` is ``"value"``. 
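A small sketch of the ``Header`` wrapper defined above, used to style facet headers (assumptions: ``altair`` imported as ``alt``, a hypothetical pandas DataFrame with made-up column names):

import altair as alt
import pandas as pd

df = pd.DataFrame({'x': [1, 2, 3, 4],
                   'y': [3, 1, 4, 2],
                   'group': ['a', 'a', 'b', 'b']})

# Facet by 'group' and customize the facet headers via alt.Header.
chart = alt.Chart(df).mark_point().encode(
    x='x:Q',
    y='y:Q',
    column=alt.Column('group:N',
                      header=alt.Header(labelAngle=0, titleColor='gray')),
)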
- """ - _schema = {'$ref': '#/definitions/ImputeParams'} - - def __init__(self, frame=Undefined, keyvals=Undefined, method=Undefined, value=Undefined, **kwds): - super(ImputeParams, self).__init__(frame=frame, keyvals=keyvals, method=method, value=value, - **kwds) - - -class ImputeSequence(VegaLiteSchema): - """ImputeSequence schema wrapper - - Mapping(required=[stop]) - - Attributes - ---------- - - stop : float - The ending value(exclusive) of the sequence. - start : float - The starting value of the sequence. - **Default value:** ``0`` - step : float - The step value between sequence entries. - **Default value:** ``1`` or ``-1`` if ``stop < start`` - """ - _schema = {'$ref': '#/definitions/ImputeSequence'} - - def __init__(self, stop=Undefined, start=Undefined, step=Undefined, **kwds): - super(ImputeSequence, self).__init__(stop=stop, start=start, step=step, **kwds) - - -class InlineData(DataSource): - """InlineData schema wrapper - - Mapping(required=[values]) - - Attributes - ---------- - - values : :class:`InlineDataset` - The full data set, included inline. This can be an array of objects or primitive - values, an object, or a string. - Arrays of primitive values are ingested as objects with a ``data`` property. Strings - are parsed according to the specified format type. - format : :class:`DataFormat` - An object that specifies the format for parsing the data. - name : string - Provide a placeholder name and bind data at runtime. - """ - _schema = {'$ref': '#/definitions/InlineData'} - - def __init__(self, values=Undefined, format=Undefined, name=Undefined, **kwds): - super(InlineData, self).__init__(values=values, format=format, name=name, **kwds) - - -class InlineDataset(VegaLiteSchema): - """InlineDataset schema wrapper - - anyOf(List(float), List(string), List(boolean), List(Mapping(required=[])), string, - Mapping(required=[])) - """ - _schema = {'$ref': '#/definitions/InlineDataset'} - - def __init__(self, *args, **kwds): - super(InlineDataset, self).__init__(*args, **kwds) - - -class InputBinding(Binding): - """InputBinding schema wrapper - - Mapping(required=[]) - - Attributes - ---------- - - autocomplete : string - - debounce : float - - element : :class:`Element` - - input : string - - name : string - - placeholder : string - - type : string - - """ - _schema = {'$ref': '#/definitions/InputBinding'} - - def __init__(self, autocomplete=Undefined, debounce=Undefined, element=Undefined, input=Undefined, - name=Undefined, placeholder=Undefined, type=Undefined, **kwds): - super(InputBinding, self).__init__(autocomplete=autocomplete, debounce=debounce, - element=element, input=input, name=name, - placeholder=placeholder, type=type, **kwds) - - -class Interpolate(VegaLiteSchema): - """Interpolate schema wrapper - - enum('linear', 'linear-closed', 'step', 'step-before', 'step-after', 'basis', 'basis-open', - 'basis-closed', 'cardinal', 'cardinal-open', 'cardinal-closed', 'bundle', 'monotone') - """ - _schema = {'$ref': '#/definitions/Interpolate'} - - def __init__(self, *args): - super(Interpolate, self).__init__(*args) - - -class IntervalSelectionConfig(VegaLiteSchema): - """IntervalSelectionConfig schema wrapper - - Mapping(required=[]) - - Attributes - ---------- - - bind : enum('scales') - Establishes a two-way binding between the interval selection and the scales - used within the same view. This allows a user to interactively pan and - zoom the view. - - **See also:** `bind `__ - documentation. 
- clear : anyOf(:class:`EventStream`, boolean) - Clears the selection, emptying it of all values. Can be an - `EventStream `__ or ``false`` to - disable. - - **Default value:** ``dblclick``. - - **See also:** `clear `__ - documentation. - empty : enum('all', 'none') - By default, ``all`` data values are considered to lie within an empty selection. - When set to ``none``, empty selections contain no data values. - encodings : List(:class:`SingleDefUnitChannel`) - An array of encoding channels. The corresponding data field values - must match for a data tuple to fall within the selection. - - **See also:** `encodings `__ - documentation. - fields : List(:class:`FieldName`) - An array of field names whose values must match for a data tuple to - fall within the selection. - - **See also:** `fields `__ - documentation. - init : :class:`SelectionInitIntervalMapping` - Initialize the selection with a mapping between `projected channels or field names - `__ and arrays of - initial values. - - **See also:** `init `__ - documentation. - mark : :class:`BrushConfig` - An interval selection also adds a rectangle mark to depict the - extents of the interval. The ``mark`` property can be used to customize the - appearance of the mark. - - **See also:** `mark `__ - documentation. - on : :class:`EventStream` - A `Vega event stream `__ (object or - selector) that triggers the selection. - For interval selections, the event stream must specify a `start and end - `__. - resolve : :class:`SelectionResolution` - With layered and multi-view displays, a strategy that determines how - selections' data queries are resolved when applied in a filter transform, - conditional encoding rule, or scale domain. - - **See also:** `resolve - `__ documentation. - translate : anyOf(string, boolean) - When truthy, allows a user to interactively move an interval selection - back-and-forth. Can be ``true``, ``false`` (to disable panning), or a - `Vega event stream definition `__ - which must include a start and end event to trigger continuous panning. - - **Default value:** ``true``, which corresponds to - ``[mousedown, window:mouseup] > window:mousemove!`` which corresponds to - clicks and dragging within an interval selection to reposition it. - - **See also:** `translate `__ - documentation. - zoom : anyOf(string, boolean) - When truthy, allows a user to interactively resize an interval selection. - Can be ``true``, ``false`` (to disable zooming), or a `Vega event stream - definition `__. Currently, - only ``wheel`` events are supported. - - **Default value:** ``true``, which corresponds to ``wheel!``. - - **See also:** `zoom `__ - documentation. - """ - _schema = {'$ref': '#/definitions/IntervalSelectionConfig'} - - def __init__(self, bind=Undefined, clear=Undefined, empty=Undefined, encodings=Undefined, - fields=Undefined, init=Undefined, mark=Undefined, on=Undefined, resolve=Undefined, - translate=Undefined, zoom=Undefined, **kwds): - super(IntervalSelectionConfig, self).__init__(bind=bind, clear=clear, empty=empty, - encodings=encodings, fields=fields, init=init, - mark=mark, on=on, resolve=resolve, - translate=translate, zoom=zoom, **kwds) - - -class JoinAggregateFieldDef(VegaLiteSchema): - """JoinAggregateFieldDef schema wrapper - - Mapping(required=[op, as]) - - Attributes - ---------- - - op : :class:`AggregateOp` - The aggregation operation to apply (e.g., sum, average or count). See the list of - all supported operations `here - `__. 
- field : :class:`FieldName` - The data field for which to compute the aggregate function. This can be omitted for - functions that do not operate over a field such as ``count``. - as : :class:`FieldName` - The output name for the join aggregate operation. - """ - _schema = {'$ref': '#/definitions/JoinAggregateFieldDef'} - - def __init__(self, op=Undefined, field=Undefined, **kwds): - super(JoinAggregateFieldDef, self).__init__(op=op, field=field, **kwds) - - -class JsonDataFormat(DataFormat): - """JsonDataFormat schema wrapper - - Mapping(required=[]) - - Attributes - ---------- - - parse : anyOf(:class:`Parse`, None) - If set to ``null``, disable type inference based on the spec and only use type - inference based on the data. - Alternatively, a parsing directive object can be provided for explicit data types. - Each property of the object corresponds to a field name, and the value to the - desired data type (one of ``"number"``, ``"boolean"``, ``"date"``, or null (do not - parse the field)). - For example, ``"parse": {"modified_on": "date"}`` parses the ``modified_on`` field - in each input record a Date value. - - For ``"date"``, we parse data based using Javascript's `Date.parse() - `__. - For Specific date formats can be provided (e.g., ``{foo: "date:'%m%d%Y'"}`` ), using - the `d3-time-format syntax `__. - UTC date format parsing is supported similarly (e.g., ``{foo: "utc:'%m%d%Y'"}`` ). - See more about `UTC time - `__ - property : string - The JSON property containing the desired data. - This parameter can be used when the loaded JSON file may have surrounding structure - or meta-data. - For example ``"property": "values.features"`` is equivalent to retrieving - ``json.values.features`` - from the loaded JSON object. - type : enum('json') - Type of input data: ``"json"``, ``"csv"``, ``"tsv"``, ``"dsv"``. - - **Default value:** The default format type is determined by the extension of the - file URL. - If no extension is detected, ``"json"`` will be used by default. - """ - _schema = {'$ref': '#/definitions/JsonDataFormat'} - - def __init__(self, parse=Undefined, property=Undefined, type=Undefined, **kwds): - super(JsonDataFormat, self).__init__(parse=parse, property=property, type=type, **kwds) - - -class LabelOverlap(VegaLiteSchema): - """LabelOverlap schema wrapper - - anyOf(boolean, enum('parity'), enum('greedy')) - """ - _schema = {'$ref': '#/definitions/LabelOverlap'} - - def __init__(self, *args, **kwds): - super(LabelOverlap, self).__init__(*args, **kwds) - - -class LatLongFieldDef(VegaLiteSchema): - """LatLongFieldDef schema wrapper - - Mapping(required=[]) - - Attributes - ---------- - - aggregate : :class:`Aggregate` - Aggregation function for the field - (e.g., ``mean``, ``sum``, ``median``, ``min``, ``max``, ``count`` ). - - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - bin : None - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating that the - data for ``x`` or ``y`` channel are binned before they are imported into Vega-Lite ( - ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-lite. 
To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value - or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. - - **Notes:** - 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access nested - objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). - If field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). - See more details about escaping in the `field documentation - `__. - 2) ``field`` is not required if ``aggregate`` is ``count``. - timeUnit : :class:`TimeUnit` - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field. - or `a temporal field that gets casted as ordinal - `__. - - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(string, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - type : enum('quantitative') - The encoded field's type of measurement ( ``"quantitative"``, ``"temporal"``, - ``"ordinal"``, or ``"nominal"`` ). - It can also be a ``"geojson"`` type for encoding `'geoshape' - `__. - - **Note:** - - - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * Data ``type`` describes the semantics of the data rather than the primitive data - types ( ``number``, ``string``, etc.). The same primitive data type can have - different types of measurement. For example, numeric data can represent - quantitative, ordinal, or nominal data. - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (for using a temporal scale) or `"ordinal" (for using - an ordinal scale) `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat", "type": "quantitative"}``. The ``"type"`` of the aggregate output - is ``"quantitative"``. 
- * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - """ - _schema = {'$ref': '#/definitions/LatLongFieldDef'} - - def __init__(self, aggregate=Undefined, bin=Undefined, field=Undefined, timeUnit=Undefined, - title=Undefined, type=Undefined, **kwds): - super(LatLongFieldDef, self).__init__(aggregate=aggregate, bin=bin, field=field, - timeUnit=timeUnit, title=title, type=type, **kwds) - - -class LayoutAlign(VegaLiteSchema): - """LayoutAlign schema wrapper - - enum('all', 'each', 'none') - """ - _schema = {'$ref': '#/definitions/LayoutAlign'} - - def __init__(self, *args): - super(LayoutAlign, self).__init__(*args) - - -class LayoutBounds(VegaLiteSchema): - """LayoutBounds schema wrapper - - anyOf(enum('full'), enum('flush'), :class:`SignalRef`) - """ - _schema = {'$ref': '#/definitions/LayoutBounds'} - - def __init__(self, *args, **kwds): - super(LayoutBounds, self).__init__(*args, **kwds) - - -class Legend(VegaLiteSchema): - """Legend schema wrapper - - Mapping(required=[]) - Properties of a legend or boolean flag for determining whether to show it. - - Attributes - ---------- - - clipHeight : float - The height in pixels to clip symbol legend entries and limit their size. - columnPadding : float - The horizontal padding in pixels between symbol legend entries. - - **Default value:** ``10``. - columns : float - The number of columns in which to arrange symbol legend entries. A value of ``0`` or - lower indicates a single row with one column per entry. - cornerRadius : float - Corner radius for the full legend. - direction : :class:`Orientation` - The direction of the legend, one of ``"vertical"`` or ``"horizontal"``. - - **Default value:** - - - * For top-/bottom- ``orient`` ed legends, ``"horizontal"`` - * For left-/right- ``orient`` ed legends, ``"vertical"`` - * For top/bottom-left/right- ``orient`` ed legends, ``"horizontal"`` for gradient - legends and ``"vertical"`` for symbol legends. - fillColor : :class:`Color` - Background fill color for the full legend. - format : string - The text formatting pattern for labels of guides (axes, legends, headers) and text - marks. - - - * If the format type is ``"number"`` (e.g., for quantitative fields), this is D3's - `number format pattern `__. - * If the format type is ``"time"`` (e.g., for temporal fields), this is D3's `time - format pattern `__. - - See the `format documentation `__ - for more examples. - - **Default value:** Derived from `numberFormat - `__ config for number - format and from `timeFormat - `__ config for time - format. - formatType : enum('number', 'time') - The format type for labels ( ``"number"`` or ``"time"`` ). - - **Default value:** - - - * ``"time"`` for temporal fields and ordinal and nomimal fields with ``timeUnit``. - * ``"number"`` for quantitative fields as well as ordinal and nomimal fields without - ``timeUnit``. - gradientLength : float - The length in pixels of the primary axis of a color gradient. This value corresponds - to the height of a vertical gradient or the width of a horizontal gradient. - - **Default value:** ``200``. - gradientOpacity : float - Opacity of the color gradient. - gradientStrokeColor : :class:`Color` - The color of the gradient stroke, can be in hex color code or regular color name. - - **Default value:** ``"lightGray"``. - gradientStrokeWidth : float - The width of the gradient stroke, in pixels. 
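The ``longitude``/``latitude`` channels that take a ``LatLongFieldDef`` can be sketched as follows (the URL is a placeholder for any dataset with coordinate columns, not a real endpoint):

import altair as alt

points = alt.Chart('https://example.com/airports.csv').mark_circle(
    size=10
).encode(
    longitude='longitude:Q',
    latitude='latitude:Q',
).project(type='albersUsa')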
- - **Default value:** ``0``. - gradientThickness : float - The thickness in pixels of the color gradient. This value corresponds to the width - of a vertical gradient or the height of a horizontal gradient. - - **Default value:** ``16``. - gridAlign : :class:`LayoutAlign` - The alignment to apply to symbol legends rows and columns. The supported string - values are ``"all"``, ``"each"`` (the default), and ``none``. For more information, - see the `grid layout documentation `__. - - **Default value:** ``"each"``. - labelAlign : :class:`Align` - The alignment of the legend label, can be left, center, or right. - labelBaseline : :class:`TextBaseline` - The position of the baseline of legend label, can be ``"top"``, ``"middle"``, - ``"bottom"``, or ``"alphabetic"``. - - **Default value:** ``"middle"``. - labelColor : :class:`Color` - The color of the legend label, can be in hex color code or regular color name. - labelFont : string - The font of the legend label. - labelFontSize : float - The font size of legend label. - - **Default value:** ``10``. - labelFontStyle : :class:`FontStyle` - The font style of legend label. - labelFontWeight : :class:`FontWeight` - The font weight of legend label. - labelLimit : float - Maximum allowed pixel width of legend tick labels. - - **Default value:** ``160``. - labelOffset : float - The offset of the legend label. - labelOpacity : float - Opacity of labels. - labelOverlap : :class:`LabelOverlap` - The strategy to use for resolving overlap of labels in gradient legends. If - ``false``, no overlap reduction is attempted. If set to ``true`` (default) or - ``"parity"``, a strategy of removing every other label is used. If set to - ``"greedy"``, a linear scan of the labels is performed, removing any label that - overlaps with the last visible label (this often works better for log-scaled axes). - - **Default value:** ``true``. - labelPadding : float - Padding in pixels between the legend and legend labels. - labelSeparation : float - The minimum separation that must be between label bounding boxes for them to be - considered non-overlapping (default ``0`` ). This property is ignored if - *labelOverlap* resolution is not enabled. - legendX : float - Custom x-position for legend with orient "none". - legendY : float - Custom y-position for legend with orient "none". - offset : float - The offset in pixels by which to displace the legend from the data rectangle and - axes. - - **Default value:** ``18``. - orient : :class:`LegendOrient` - The orientation of the legend, which determines how the legend is positioned within - the scene. One of ``"left"``, ``"right"``, ``"top"``, ``"bottom"``, ``"top-left"``, - ``"top-right"``, ``"bottom-left"``, ``"bottom-right"``, ``"none"``. - - **Default value:** ``"right"`` - padding : float - The padding between the border and content of the legend group. - - **Default value:** ``0``. - rowPadding : float - The vertical padding in pixels between symbol legend entries. - - **Default value:** ``2``. - strokeColor : :class:`Color` - Border stroke color for the full legend. - symbolDash : List(float) - An array of alternating [stroke, space] lengths for dashed symbol strokes. - symbolDashOffset : float - The pixel offset at which to start drawing with the symbol stroke dash array. - symbolFillColor : :class:`Color` - The color of the legend symbol, - symbolOffset : float - Horizontal pixel offset for legend symbols. - - **Default value:** ``0``. - symbolOpacity : float - Opacity of the legend symbols. 
- symbolSize : float - The size of the legend symbol, in pixels. - - **Default value:** ``100``. - symbolStrokeColor : :class:`Color` - Stroke color for legend symbols. - symbolStrokeWidth : float - The width of the symbol's stroke. - - **Default value:** ``1.5``. - symbolType : :class:`SymbolShape` - The symbol shape. One of the plotting shapes ``circle`` (default), ``square``, - ``cross``, ``diamond``, ``triangle-up``, ``triangle-down``, ``triangle-right``, or - ``triangle-left``, the line symbol ``stroke``, or one of the centered directional - shapes ``arrow``, ``wedge``, or ``triangle``. Alternatively, a custom `SVG path - string `__ can be - provided. For correct sizing, custom shape paths should be defined within a square - bounding box with coordinates ranging from -1 to 1 along both the x and y - dimensions. - - **Default value:** ``"circle"``. - tickCount : float - The desired number of tick values for quantitative legends. - tickMinStep : float - The minimum desired step between legend ticks, in terms of scale domain values. For - example, a value of ``1`` indicates that ticks should not be less than 1 unit apart. - If ``tickMinStep`` is specified, the ``tickCount`` value will be adjusted, if - necessary, to enforce the minimum step value. - - **Default value** : ``undefined`` - title : anyOf(string, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - titleAlign : :class:`Align` - Horizontal text alignment for legend titles. - - **Default value:** ``"left"``. - titleAnchor : :class:`TitleAnchor` - Text anchor position for placing legend titles. - titleBaseline : :class:`TextBaseline` - Vertical text baseline for legend titles. - - **Default value:** ``"top"``. - titleColor : :class:`Color` - The color of the legend title, can be in hex color code or regular color name. - titleFont : string - The font of the legend title. - titleFontSize : float - The font size of the legend title. - titleFontStyle : :class:`FontStyle` - The font style of the legend title. - titleFontWeight : :class:`FontWeight` - The font weight of the legend title. - This can be either a string (e.g ``"bold"``, ``"normal"`` ) or a number ( ``100``, - ``200``, ``300``, ..., ``900`` where ``"normal"`` = ``400`` and ``"bold"`` = ``700`` - ). - titleLimit : float - Maximum allowed pixel width of legend titles. - - **Default value:** ``180``. - titleOpacity : float - Opacity of the legend title. - titleOrient : :class:`Orient` - Orientation of the legend title. - titlePadding : float - The padding, in pixels, between title and legend. - - **Default value:** ``5``. - type : enum('symbol', 'gradient') - The type of the legend. 
Use ``"symbol"`` to create a discrete legend and - ``"gradient"`` for a continuous color gradient. - - **Default value:** ``"gradient"`` for non-binned quantitative fields and temporal - fields; ``"symbol"`` otherwise. - values : List(anyOf(float, string, boolean, :class:`DateTime`)) - Explicitly set the visible legend values. - zindex : float - A non-negative integer indicating the z-index of the legend. - If zindex is 0, legend should be drawn behind all chart elements. - To put them in front, use zindex = 1. - """ - _schema = {'$ref': '#/definitions/Legend'} - - def __init__(self, clipHeight=Undefined, columnPadding=Undefined, columns=Undefined, - cornerRadius=Undefined, direction=Undefined, fillColor=Undefined, format=Undefined, - formatType=Undefined, gradientLength=Undefined, gradientOpacity=Undefined, - gradientStrokeColor=Undefined, gradientStrokeWidth=Undefined, - gradientThickness=Undefined, gridAlign=Undefined, labelAlign=Undefined, - labelBaseline=Undefined, labelColor=Undefined, labelFont=Undefined, - labelFontSize=Undefined, labelFontStyle=Undefined, labelFontWeight=Undefined, - labelLimit=Undefined, labelOffset=Undefined, labelOpacity=Undefined, - labelOverlap=Undefined, labelPadding=Undefined, labelSeparation=Undefined, - legendX=Undefined, legendY=Undefined, offset=Undefined, orient=Undefined, - padding=Undefined, rowPadding=Undefined, strokeColor=Undefined, symbolDash=Undefined, - symbolDashOffset=Undefined, symbolFillColor=Undefined, symbolOffset=Undefined, - symbolOpacity=Undefined, symbolSize=Undefined, symbolStrokeColor=Undefined, - symbolStrokeWidth=Undefined, symbolType=Undefined, tickCount=Undefined, - tickMinStep=Undefined, title=Undefined, titleAlign=Undefined, titleAnchor=Undefined, - titleBaseline=Undefined, titleColor=Undefined, titleFont=Undefined, - titleFontSize=Undefined, titleFontStyle=Undefined, titleFontWeight=Undefined, - titleLimit=Undefined, titleOpacity=Undefined, titleOrient=Undefined, - titlePadding=Undefined, type=Undefined, values=Undefined, zindex=Undefined, **kwds): - super(Legend, self).__init__(clipHeight=clipHeight, columnPadding=columnPadding, - columns=columns, cornerRadius=cornerRadius, direction=direction, - fillColor=fillColor, format=format, formatType=formatType, - gradientLength=gradientLength, gradientOpacity=gradientOpacity, - gradientStrokeColor=gradientStrokeColor, - gradientStrokeWidth=gradientStrokeWidth, - gradientThickness=gradientThickness, gridAlign=gridAlign, - labelAlign=labelAlign, labelBaseline=labelBaseline, - labelColor=labelColor, labelFont=labelFont, - labelFontSize=labelFontSize, labelFontStyle=labelFontStyle, - labelFontWeight=labelFontWeight, labelLimit=labelLimit, - labelOffset=labelOffset, labelOpacity=labelOpacity, - labelOverlap=labelOverlap, labelPadding=labelPadding, - labelSeparation=labelSeparation, legendX=legendX, legendY=legendY, - offset=offset, orient=orient, padding=padding, - rowPadding=rowPadding, strokeColor=strokeColor, - symbolDash=symbolDash, symbolDashOffset=symbolDashOffset, - symbolFillColor=symbolFillColor, symbolOffset=symbolOffset, - symbolOpacity=symbolOpacity, symbolSize=symbolSize, - symbolStrokeColor=symbolStrokeColor, - symbolStrokeWidth=symbolStrokeWidth, symbolType=symbolType, - tickCount=tickCount, tickMinStep=tickMinStep, title=title, - titleAlign=titleAlign, titleAnchor=titleAnchor, - titleBaseline=titleBaseline, titleColor=titleColor, - titleFont=titleFont, titleFontSize=titleFontSize, - titleFontStyle=titleFontStyle, titleFontWeight=titleFontWeight, - titleLimit=titleLimit, 
titleOpacity=titleOpacity, - titleOrient=titleOrient, titlePadding=titlePadding, type=type, - values=values, zindex=zindex, **kwds) - - -class LegendConfig(VegaLiteSchema): - """LegendConfig schema wrapper - - Mapping(required=[]) - - Attributes - ---------- - - clipHeight : float - The height in pixels to clip symbol legend entries and limit their size. - columnPadding : float - The horizontal padding in pixels between symbol legend entries. - - **Default value:** ``10``. - columns : float - The number of columns in which to arrange symbol legend entries. A value of ``0`` or - lower indicates a single row with one column per entry. - cornerRadius : float - Corner radius for the full legend. - fillColor : :class:`Color` - Background fill color for the full legend. - gradientDirection : :class:`Orientation` - The default direction ( ``"horizontal"`` or ``"vertical"`` ) for gradient legends. - - **Default value:** ``"vertical"``. - gradientHorizontalMaxLength : float - Max legend length for a horizontal gradient when ``config.legend.gradientLength`` is - undefined. - - **Default value:** ``200`` - gradientHorizontalMinLength : float - Min legend length for a horizontal gradient when ``config.legend.gradientLength`` is - undefined. - - **Default value:** ``100`` - gradientLabelLimit : float - The maximum allowed length in pixels of color ramp gradient labels. - gradientLabelOffset : float - Vertical offset in pixels for color ramp gradient labels. - - **Default value:** ``2``. - gradientLength : float - The length in pixels of the primary axis of a color gradient. This value corresponds - to the height of a vertical gradient or the width of a horizontal gradient. - - **Default value:** ``200``. - gradientOpacity : float - Opacity of the color gradient. - gradientStrokeColor : :class:`Color` - The color of the gradient stroke, can be in hex color code or regular color name. - - **Default value:** ``"lightGray"``. - gradientStrokeWidth : float - The width of the gradient stroke, in pixels. - - **Default value:** ``0``. - gradientThickness : float - The thickness in pixels of the color gradient. This value corresponds to the width - of a vertical gradient or the height of a horizontal gradient. - - **Default value:** ``16``. - gradientVerticalMaxLength : float - Max legend length for a vertical gradient when ``config.legend.gradientLength`` is - undefined. - - **Default value:** ``200`` - gradientVerticalMinLength : float - Min legend length for a vertical gradient when ``config.legend.gradientLength`` is - undefined. - - **Default value:** ``100`` - gridAlign : :class:`LayoutAlign` - The alignment to apply to symbol legends rows and columns. The supported string - values are ``"all"``, ``"each"`` (the default), and ``none``. For more information, - see the `grid layout documentation `__. - - **Default value:** ``"each"``. - labelAlign : :class:`Align` - The alignment of the legend label, can be left, center, or right. - labelBaseline : :class:`TextBaseline` - The position of the baseline of legend label, can be ``"top"``, ``"middle"``, - ``"bottom"``, or ``"alphabetic"``. - - **Default value:** ``"middle"``. - labelColor : :class:`Color` - The color of the legend label, can be in hex color code or regular color name. - labelFont : string - The font of the legend label. - labelFontSize : float - The font size of legend label. - - **Default value:** ``10``. - labelFontStyle : :class:`FontStyle` - The font style of legend label. 
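A per-encoding sketch for the ``Legend`` wrapper above (``LegendConfig``, by contrast, is typically reached chart-wide through ``chart.configure_legend(...)``; data and field names hypothetical):

import altair as alt

chart = alt.Chart(df).mark_point().encode(
    x='x:Q',
    y='y:Q',
    color=alt.Color('group:N',
                    legend=alt.Legend(orient='bottom',
                                      titleFontSize=14,
                                      symbolType='square')),
)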
- labelFontWeight : :class:`FontWeight` - The font weight of legend label. - labelLimit : float - Maximum allowed pixel width of legend tick labels. - - **Default value:** ``160``. - labelOffset : float - The offset of the legend label. - labelOpacity : float - Opacity of labels. - labelOverlap : :class:`LabelOverlap` - The strategy to use for resolving overlap of labels in gradient legends. If - ``false``, no overlap reduction is attempted. If set to ``true`` or ``"parity"``, a - strategy of removing every other label is used. If set to ``"greedy"``, a linear - scan of the labels is performed, removing any label that overlaps with the last - visible label (this often works better for log-scaled axes). - - **Default value:** ``"greedy"`` for ``log`` scales, otherwise ``true``. - labelPadding : float - Padding in pixels between the legend and legend labels. - labelSeparation : float - The minimum separation that must be between label bounding boxes for them to be - considered non-overlapping (default ``0`` ). This property is ignored if - *labelOverlap* resolution is not enabled. - layout : :class:`LegendLayout` - Legend orient group layout parameters. - legendX : float - Custom x-position for legend with orient "none". - legendY : float - Custom y-position for legend with orient "none". - offset : float - The offset in pixels by which to displace the legend from the data rectangle and - axes. - - **Default value:** ``18``. - orient : :class:`LegendOrient` - The orientation of the legend, which determines how the legend is positioned within - the scene. One of "left", "right", "top-left", "top-right", "bottom-left", - "bottom-right", "none". - - **Default value:** ``"right"`` - padding : float - The padding between the border and content of the legend group. - - **Default value:** ``0``. - rowPadding : float - The vertical padding in pixels between symbol legend entries. - - **Default value:** ``2``. - shortTimeLabels : boolean - Whether month names and weekday names should be abbreviated. - - **Default value:** ``false`` - strokeColor : :class:`Color` - Border stroke color for the full legend. - strokeDash : List(float) - Border stroke dash pattern for the full legend. - strokeWidth : float - Border stroke width for the full legend. - symbolBaseFillColor : :class:`Color` - Default fill color for legend symbols. Only applied if there is no ``"fill"`` scale - color encoding for the legend. - - **Default value:** ``"transparent"``. - symbolBaseStrokeColor : :class:`Color` - Default stroke color for legend symbols. Only applied if there is no ``"fill"`` - scale color encoding for the legend. - - **Default value:** ``"gray"``. - symbolDash : List(float) - An array of alternating [stroke, space] lengths for dashed symbol strokes. - symbolDashOffset : float - The pixel offset at which to start drawing with the symbol stroke dash array. - symbolDirection : :class:`Orientation` - The default direction ( ``"horizontal"`` or ``"vertical"`` ) for symbol legends. - - **Default value:** ``"vertical"``. - symbolFillColor : :class:`Color` - The color of the legend symbol. - symbolOffset : float - Horizontal pixel offset for legend symbols. - - **Default value:** ``0``. - symbolOpacity : float - Opacity of the legend symbols. - symbolSize : float - The size of the legend symbol, in pixels. - - **Default value:** ``100``. - symbolStrokeColor : :class:`Color` - Stroke color for legend symbols. - symbolStrokeWidth : float - The width of the symbol's stroke. - - **Default value:** ``1.5``.
- symbolType : :class:`SymbolShape` - The symbol shape. One of the plotting shapes ``circle`` (default), ``square``, - ``cross``, ``diamond``, ``triangle-up``, ``triangle-down``, ``triangle-right``, or - ``triangle-left``, the line symbol ``stroke``, or one of the centered directional - shapes ``arrow``, ``wedge``, or ``triangle``. Alternatively, a custom `SVG path - string `__ can be - provided. For correct sizing, custom shape paths should be defined within a square - bounding box with coordinates ranging from -1 to 1 along both the x and y - dimensions. - - **Default value:** ``"circle"``. - title : None - Set to null to disable title for the axis, legend, or header. - titleAlign : :class:`Align` - Horizontal text alignment for legend titles. - - **Default value:** ``"left"``. - titleAnchor : :class:`TitleAnchor` - Text anchor position for placing legend titles. - titleBaseline : :class:`TextBaseline` - Vertical text baseline for legend titles. - - **Default value:** ``"top"``. - titleColor : :class:`Color` - The color of the legend title, can be in hex color code or regular color name. - titleFont : string - The font of the legend title. - titleFontSize : float - The font size of the legend title. - titleFontStyle : :class:`FontStyle` - The font style of the legend title. - titleFontWeight : :class:`FontWeight` - The font weight of the legend title. - This can be either a string (e.g ``"bold"``, ``"normal"`` ) or a number ( ``100``, - ``200``, ``300``, ..., ``900`` where ``"normal"`` = ``400`` and ``"bold"`` = ``700`` - ). - titleLimit : float - Maximum allowed pixel width of legend titles. - - **Default value:** ``180``. - titleOpacity : float - Opacity of the legend title. - titleOrient : :class:`Orient` - Orientation of the legend title. - titlePadding : float - The padding, in pixels, between title and legend. - - **Default value:** ``5``. 
- """ - _schema = {'$ref': '#/definitions/LegendConfig'} - - def __init__(self, clipHeight=Undefined, columnPadding=Undefined, columns=Undefined, - cornerRadius=Undefined, fillColor=Undefined, gradientDirection=Undefined, - gradientHorizontalMaxLength=Undefined, gradientHorizontalMinLength=Undefined, - gradientLabelLimit=Undefined, gradientLabelOffset=Undefined, gradientLength=Undefined, - gradientOpacity=Undefined, gradientStrokeColor=Undefined, - gradientStrokeWidth=Undefined, gradientThickness=Undefined, - gradientVerticalMaxLength=Undefined, gradientVerticalMinLength=Undefined, - gridAlign=Undefined, labelAlign=Undefined, labelBaseline=Undefined, - labelColor=Undefined, labelFont=Undefined, labelFontSize=Undefined, - labelFontStyle=Undefined, labelFontWeight=Undefined, labelLimit=Undefined, - labelOffset=Undefined, labelOpacity=Undefined, labelOverlap=Undefined, - labelPadding=Undefined, labelSeparation=Undefined, layout=Undefined, legendX=Undefined, - legendY=Undefined, offset=Undefined, orient=Undefined, padding=Undefined, - rowPadding=Undefined, shortTimeLabels=Undefined, strokeColor=Undefined, - strokeDash=Undefined, strokeWidth=Undefined, symbolBaseFillColor=Undefined, - symbolBaseStrokeColor=Undefined, symbolDash=Undefined, symbolDashOffset=Undefined, - symbolDirection=Undefined, symbolFillColor=Undefined, symbolOffset=Undefined, - symbolOpacity=Undefined, symbolSize=Undefined, symbolStrokeColor=Undefined, - symbolStrokeWidth=Undefined, symbolType=Undefined, title=Undefined, - titleAlign=Undefined, titleAnchor=Undefined, titleBaseline=Undefined, - titleColor=Undefined, titleFont=Undefined, titleFontSize=Undefined, - titleFontStyle=Undefined, titleFontWeight=Undefined, titleLimit=Undefined, - titleOpacity=Undefined, titleOrient=Undefined, titlePadding=Undefined, **kwds): - super(LegendConfig, self).__init__(clipHeight=clipHeight, columnPadding=columnPadding, - columns=columns, cornerRadius=cornerRadius, - fillColor=fillColor, gradientDirection=gradientDirection, - gradientHorizontalMaxLength=gradientHorizontalMaxLength, - gradientHorizontalMinLength=gradientHorizontalMinLength, - gradientLabelLimit=gradientLabelLimit, - gradientLabelOffset=gradientLabelOffset, - gradientLength=gradientLength, - gradientOpacity=gradientOpacity, - gradientStrokeColor=gradientStrokeColor, - gradientStrokeWidth=gradientStrokeWidth, - gradientThickness=gradientThickness, - gradientVerticalMaxLength=gradientVerticalMaxLength, - gradientVerticalMinLength=gradientVerticalMinLength, - gridAlign=gridAlign, labelAlign=labelAlign, - labelBaseline=labelBaseline, labelColor=labelColor, - labelFont=labelFont, labelFontSize=labelFontSize, - labelFontStyle=labelFontStyle, - labelFontWeight=labelFontWeight, labelLimit=labelLimit, - labelOffset=labelOffset, labelOpacity=labelOpacity, - labelOverlap=labelOverlap, labelPadding=labelPadding, - labelSeparation=labelSeparation, layout=layout, - legendX=legendX, legendY=legendY, offset=offset, - orient=orient, padding=padding, rowPadding=rowPadding, - shortTimeLabels=shortTimeLabels, strokeColor=strokeColor, - strokeDash=strokeDash, strokeWidth=strokeWidth, - symbolBaseFillColor=symbolBaseFillColor, - symbolBaseStrokeColor=symbolBaseStrokeColor, - symbolDash=symbolDash, symbolDashOffset=symbolDashOffset, - symbolDirection=symbolDirection, - symbolFillColor=symbolFillColor, symbolOffset=symbolOffset, - symbolOpacity=symbolOpacity, symbolSize=symbolSize, - symbolStrokeColor=symbolStrokeColor, - symbolStrokeWidth=symbolStrokeWidth, symbolType=symbolType, - title=title, 
titleAlign=titleAlign, titleAnchor=titleAnchor, - titleBaseline=titleBaseline, titleColor=titleColor, - titleFont=titleFont, titleFontSize=titleFontSize, - titleFontStyle=titleFontStyle, - titleFontWeight=titleFontWeight, titleLimit=titleLimit, - titleOpacity=titleOpacity, titleOrient=titleOrient, - titlePadding=titlePadding, **kwds) - - -class LegendLayout(VegaLiteSchema): - """LegendLayout schema wrapper - - Mapping(required=[]) - - Attributes - ---------- - - anchor : :class:`TitleAnchor` - The anchor point for legend orient group layout. - bottom : :class:`BaseLegendLayout` - - bounds : :class:`LayoutBounds` - The bounds calculation to use for legend orient group layout. - center : anyOf(boolean, :class:`SignalRef`) - A flag to center legends within a shared orient group. - direction : anyOf(:class:`Orientation`, :class:`SignalRef`) - The layout direction for legend orient group layout. - left : :class:`BaseLegendLayout` - - margin : anyOf(float, :class:`SignalRef`) - The pixel margin between legends within a orient group. - offset : anyOf(float, :class:`SignalRef`) - The pixel offset from the chart body for a legend orient group. - right : :class:`BaseLegendLayout` - - top : :class:`BaseLegendLayout` - - bottom-left : :class:`BaseLegendLayout` - - bottom-right : :class:`BaseLegendLayout` - - top-left : :class:`BaseLegendLayout` - - top-right : :class:`BaseLegendLayout` - - """ - _schema = {'$ref': '#/definitions/LegendLayout'} - - def __init__(self, anchor=Undefined, bottom=Undefined, bounds=Undefined, center=Undefined, - direction=Undefined, left=Undefined, margin=Undefined, offset=Undefined, - right=Undefined, top=Undefined, **kwds): - super(LegendLayout, self).__init__(anchor=anchor, bottom=bottom, bounds=bounds, center=center, - direction=direction, left=left, margin=margin, offset=offset, - right=right, top=top, **kwds) - - -class LegendOrient(VegaLiteSchema): - """LegendOrient schema wrapper - - enum('none', 'left', 'right', 'top', 'bottom', 'top-left', 'top-right', 'bottom-left', - 'bottom-right') - """ - _schema = {'$ref': '#/definitions/LegendOrient'} - - def __init__(self, *args): - super(LegendOrient, self).__init__(*args) - - -class LegendResolveMap(VegaLiteSchema): - """LegendResolveMap schema wrapper - - Mapping(required=[]) - - Attributes - ---------- - - color : :class:`ResolveMode` - - fill : :class:`ResolveMode` - - fillOpacity : :class:`ResolveMode` - - opacity : :class:`ResolveMode` - - shape : :class:`ResolveMode` - - size : :class:`ResolveMode` - - stroke : :class:`ResolveMode` - - strokeOpacity : :class:`ResolveMode` - - strokeWidth : :class:`ResolveMode` - - """ - _schema = {'$ref': '#/definitions/LegendResolveMap'} - - def __init__(self, color=Undefined, fill=Undefined, fillOpacity=Undefined, opacity=Undefined, - shape=Undefined, size=Undefined, stroke=Undefined, strokeOpacity=Undefined, - strokeWidth=Undefined, **kwds): - super(LegendResolveMap, self).__init__(color=color, fill=fill, fillOpacity=fillOpacity, - opacity=opacity, shape=shape, size=size, stroke=stroke, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, - **kwds) - - -class LineConfig(VegaLiteSchema): - """LineConfig schema wrapper - - Mapping(required=[]) - - Attributes - ---------- - - align : :class:`Align` - The horizontal alignment of the text. One of ``"left"``, ``"right"``, ``"center"``. - angle : float - The rotation angle of the text, in degrees. - baseline : :class:`TextBaseline` - The vertical alignment of the text. One of ``"top"``, ``"middle"``, ``"bottom"``. 
- - **Default value:** ``"middle"`` - color : :class:`Color` - Default color. Note that ``fill`` and ``stroke`` have higher precedence than - ``color`` and will override ``color``. - - **Default value:** :raw-html:`` - ``"#4682b4"`` - - **Note:** This property cannot be used in a `style config - `__. - cornerRadius : float - The radius in pixels of rounded rectangle corners. - - **Default value:** ``0`` - cursor : :class:`Cursor` - The mouse cursor used over the mark. Any valid `CSS cursor type - `__ can be used. - dir : :class:`Dir` - The direction of the text. One of ``"ltr"`` (left-to-right) or ``"rtl"`` - (right-to-left). This property determines on which side is truncated in response to - the limit parameter. - - **Default value:** ``"ltr"`` - dx : float - The horizontal offset, in pixels, between the text label and its anchor point. The - offset is applied after rotation by the *angle* property. - dy : float - The vertical offset, in pixels, between the text label and its anchor point. The - offset is applied after rotation by the *angle* property. - ellipsis : string - The ellipsis string for text truncated in response to the limit parameter. - - **Default value:** ``"…"`` - fill : :class:`Color` - Default Fill Color. This has higher precedence than ``config.color`` - - **Default value:** (None) - fillOpacity : float - The fill opacity (value between [0,1]). - - **Default value:** ``1`` - filled : boolean - Whether the mark's color should be used as fill color instead of stroke color. - - **Default value:** ``false`` for ``point``, ``line`` and ``rule`` ; otherwise, - ``true``. - - **Note:** This property cannot be used in a `style config - `__. - font : string - The typeface to set the text in (e.g., ``"Helvetica Neue"`` ). - fontSize : float - The font size, in pixels. - fontStyle : :class:`FontStyle` - The font style (e.g., ``"italic"`` ). - fontWeight : :class:`FontWeight` - The font weight. - This can be either a string (e.g ``"bold"``, ``"normal"`` ) or a number ( ``100``, - ``200``, ``300``, ..., ``900`` where ``"normal"`` = ``400`` and ``"bold"`` = ``700`` - ). - height : float - Height of the marks. - href : string - A URL to load upon mouse click. If defined, the mark acts as a hyperlink. - interpolate : :class:`Interpolate` - The line interpolation method to use for line and area marks. One of the following: - - - * ``"linear"`` : piecewise linear segments, as in a polyline. - * ``"linear-closed"`` : close the linear segments to form a polygon. - * ``"step"`` : alternate between horizontal and vertical segments, as in a step - function. - * ``"step-before"`` : alternate between vertical and horizontal segments, as in a - step function. - * ``"step-after"`` : alternate between horizontal and vertical segments, as in a - step function. - * ``"basis"`` : a B-spline, with control point duplication on the ends. - * ``"basis-open"`` : an open B-spline; may not intersect the start or end. - * ``"basis-closed"`` : a closed B-spline, as in a loop. - * ``"cardinal"`` : a Cardinal spline, with control point duplication on the ends. - * ``"cardinal-open"`` : an open Cardinal spline; may not intersect the start or end, - but will intersect other control points. - * ``"cardinal-closed"`` : a closed Cardinal spline, as in a loop. - * ``"bundle"`` : equivalent to basis, except the tension parameter is used to - straighten the spline. - * ``"monotone"`` : cubic interpolation that preserves monotonicity in y. - limit : float - The maximum length of the text mark in pixels. 
The text value will be automatically - truncated if the rendered size exceeds the limit. - - **Default value:** ``0``, indicating no limit - opacity : float - The overall opacity (value between [0,1]). - - **Default value:** ``0.7`` for non-aggregate plots with ``point``, ``tick``, - ``circle``, or ``square`` marks or layered ``bar`` charts and ``1`` otherwise. - order : anyOf(None, boolean) - For line and trail marks, this ``order`` property can be set to ``null`` or - ``false`` to make the lines use the original order in the data sources. - orient : :class:`Orientation` - The orientation of non-stacked bar, tick, area, and line charts. - The value is either horizontal (default) or vertical. - - - * For bar, rule and tick, this determines whether the size of the bar and tick - should be applied to the x or y dimension. - * For area, this property determines the orient property of the Vega output. - * For line and trail marks, this property determines the sort order of the points in - the line - if ``config.sortLineBy`` is not specified. - For stacked charts, this is always determined by the orientation of the stack; - therefore an explicitly specified value will be ignored. - point : anyOf(boolean, :class:`OverlayMarkDef`, enum('transparent')) - A flag for overlaying points on top of line or area marks, or an object defining the - properties of the overlaid points. - - - If this property is ``"transparent"``, transparent points will be used (for - enhancing tooltips and selections). - - If this property is an empty object ( ``{}`` ) or ``true``, filled points with - default properties will be used. - - If this property is ``false``, no points will be automatically added to line or - area marks. - - **Default value:** ``false``. - radius : float - Polar coordinate radial offset, in pixels, of the text label from the origin - determined by the ``x`` and ``y`` properties. - shape : string - Shape of the point marks. Supported values include: - - - * plotting shapes: ``"circle"``, ``"square"``, ``"cross"``, ``"diamond"``, - ``"triangle-up"``, ``"triangle-down"``, ``"triangle-right"``, or - ``"triangle-left"``. - * the line symbol ``"stroke"`` - * centered directional shapes ``"arrow"``, ``"wedge"``, or ``"triangle"`` - * a custom `SVG path string - `__ (For correct - sizing, custom shape paths should be defined within a square bounding box with - coordinates ranging from -1 to 1 along both the x and y dimensions.) - - **Default value:** ``"circle"`` - size : float - Default size for marks. - - - * For ``point`` / ``circle`` / ``square``, this represents the pixel area of the - marks. For example: in the case of circles, the radius is determined in part by - the square root of the size value. - * For ``bar``, this represents the band size of the bar, in pixels. - * For ``text``, this represents the font size, in pixels. - - **Default value:** ``30`` for point, circle, square marks; ``rangeStep`` - 1 for bar - marks with discrete dimensions; ``5`` for bar marks with continuous dimensions; - ``11`` for text marks. - stroke : :class:`Color` - Default Stroke Color. This has higher precedence than ``config.color`` - - **Default value:** (None) - strokeCap : :class:`StrokeCap` - The stroke cap for line ending style. One of ``"butt"``, ``"round"``, or - ``"square"``. - - **Default value:** ``"square"`` - strokeDash : List(float) - An array of alternating stroke, space lengths for creating dashed or dotted lines.
- strokeDashOffset : float - The offset (in pixels) into which to begin drawing with the stroke dash array. - strokeJoin : :class:`StrokeJoin` - The stroke line join method. One of ``"miter"``, ``"round"`` or ``"bevel"``. - - **Default value:** ``"miter"`` - strokeMiterLimit : float - The miter limit at which to bevel a line join. - strokeOpacity : float - The stroke opacity (value between [0,1]). - - **Default value:** ``1`` - strokeWidth : float - The stroke width, in pixels. - tension : float - Depending on the interpolation type, sets the tension parameter (for line and area - marks). - text : string - Placeholder text if the ``text`` channel is not specified - theta : float - Polar coordinate angle, in radians, of the text label from the origin determined by - the ``x`` and ``y`` properties. Values for ``theta`` follow the same convention of - ``arc`` mark ``startAngle`` and ``endAngle`` properties: angles are measured in - radians, with ``0`` indicating "north". - tooltip : anyOf(:class:`Value`, :class:`TooltipContent`, None) - The tooltip text string to show upon mouse hover or an object defining which fields - should the tooltip be derived from. - - - * If ``tooltip`` is ``{"content": "encoding"}``, then all fields from ``encoding`` - will be used. - * If ``tooltip`` is ``{"content": "data"}``, then all fields that appear in the - highlighted data point will be used. - * If set to ``null``, then no tooltip will be used. - width : float - Width of the marks. - x : anyOf(float, enum('width')) - X coordinates of the marks, or width of horizontal ``"bar"`` and ``"area"`` without - specified ``x2`` or ``width``. - - The ``value`` of this channel can be a number or a string ``"width"`` for the width - of the plot. - x2 : anyOf(float, enum('width')) - X2 coordinates for ranged ``"area"``, ``"bar"``, ``"rect"``, and ``"rule"``. - - The ``value`` of this channel can be a number or a string ``"width"`` for the width - of the plot. - y : anyOf(float, enum('height')) - Y coordinates of the marks, or height of vertical ``"bar"`` and ``"area"`` without - specified ``y2`` or ``height``. - - The ``value`` of this channel can be a number or a string ``"height"`` for the - height of the plot. - y2 : anyOf(float, enum('width')) - Y2 coordinates for ranged ``"area"``, ``"bar"``, ``"rect"``, and ``"rule"``. - - The ``value`` of this channel can be a number or a string ``"height"`` for the - height of the plot. 
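-
- A minimal usage sketch (illustrative only, not part of the generated schema;
- it assumes the public ``altair`` package exposes this config through the
- generated ``configure_line`` chart method)::
-
-     import altair as alt
-     import pandas as pd
-
-     data = pd.DataFrame({'x': [1, 2, 3], 'y': [4, 2, 5]})
-
-     # Chart-wide defaults for line marks: monotone interpolation,
-     # a 2px stroke, and automatic point overlays.
-     chart = alt.Chart(data).mark_line().encode(
-         x='x:Q', y='y:Q'
-     ).configure_line(
-         interpolate='monotone', strokeWidth=2, point=True
-     )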
- """ - _schema = {'$ref': '#/definitions/LineConfig'} - - def __init__(self, align=Undefined, angle=Undefined, baseline=Undefined, color=Undefined, - cornerRadius=Undefined, cursor=Undefined, dir=Undefined, dx=Undefined, dy=Undefined, - ellipsis=Undefined, fill=Undefined, fillOpacity=Undefined, filled=Undefined, - font=Undefined, fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, - height=Undefined, href=Undefined, interpolate=Undefined, limit=Undefined, - opacity=Undefined, order=Undefined, orient=Undefined, point=Undefined, - radius=Undefined, shape=Undefined, size=Undefined, stroke=Undefined, - strokeCap=Undefined, strokeDash=Undefined, strokeDashOffset=Undefined, - strokeJoin=Undefined, strokeMiterLimit=Undefined, strokeOpacity=Undefined, - strokeWidth=Undefined, tension=Undefined, text=Undefined, theta=Undefined, - tooltip=Undefined, width=Undefined, x=Undefined, x2=Undefined, y=Undefined, - y2=Undefined, **kwds): - super(LineConfig, self).__init__(align=align, angle=angle, baseline=baseline, color=color, - cornerRadius=cornerRadius, cursor=cursor, dir=dir, dx=dx, - dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, - filled=filled, font=font, fontSize=fontSize, - fontStyle=fontStyle, fontWeight=fontWeight, height=height, - href=href, interpolate=interpolate, limit=limit, - opacity=opacity, order=order, orient=orient, point=point, - radius=radius, shape=shape, size=size, stroke=stroke, - strokeCap=strokeCap, strokeDash=strokeDash, - strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOpacity=strokeOpacity, - strokeWidth=strokeWidth, tension=tension, text=text, - theta=theta, tooltip=tooltip, width=width, x=x, x2=x2, y=y, - y2=y2, **kwds) - - -class LogicalOperandPredicate(VegaLiteSchema): - """LogicalOperandPredicate schema wrapper - - anyOf(:class:`LogicalNotPredicate`, :class:`LogicalAndPredicate`, - :class:`LogicalOrPredicate`, :class:`Predicate`) - """ - _schema = {'$ref': '#/definitions/LogicalOperand'} - - def __init__(self, *args, **kwds): - super(LogicalOperandPredicate, self).__init__(*args, **kwds) - - -class LogicalAndPredicate(LogicalOperandPredicate): - """LogicalAndPredicate schema wrapper - - Mapping(required=[and]) - - Attributes - ---------- - - and : List(:class:`LogicalOperandPredicate`) - - """ - _schema = {'$ref': '#/definitions/LogicalAnd'} - - def __init__(self, **kwds): - super(LogicalAndPredicate, self).__init__(**kwds) - - -class LogicalNotPredicate(LogicalOperandPredicate): - """LogicalNotPredicate schema wrapper - - Mapping(required=[not]) - - Attributes - ---------- - - not : :class:`LogicalOperandPredicate` - - """ - _schema = {'$ref': '#/definitions/LogicalNot'} - - def __init__(self, **kwds): - super(LogicalNotPredicate, self).__init__(**kwds) - - -class LogicalOrPredicate(LogicalOperandPredicate): - """LogicalOrPredicate schema wrapper - - Mapping(required=[or]) - - Attributes - ---------- - - or : List(:class:`LogicalOperandPredicate`) - - """ - _schema = {'$ref': '#/definitions/LogicalOr'} - - def __init__(self, **kwds): - super(LogicalOrPredicate, self).__init__(**kwds) - - -class LookupData(VegaLiteSchema): - """LookupData schema wrapper - - Mapping(required=[data, key]) - - Attributes - ---------- - - data : :class:`Data` - Secondary data source to lookup in. - key : :class:`FieldName` - Key in data to lookup. - fields : List(:class:`FieldName`) - Fields in foreign data to lookup. - If not specified, the entire object is queried. 
- """ - _schema = {'$ref': '#/definitions/LookupData'} - - def __init__(self, data=Undefined, key=Undefined, fields=Undefined, **kwds): - super(LookupData, self).__init__(data=data, key=key, fields=fields, **kwds) - - -class Mark(AnyMark): - """Mark schema wrapper - - enum('area', 'bar', 'line', 'trail', 'point', 'text', 'tick', 'rect', 'rule', 'circle', - 'square', 'geoshape') - All types of primitive marks. - """ - _schema = {'$ref': '#/definitions/Mark'} - - def __init__(self, *args): - super(Mark, self).__init__(*args) - - -class MarkConfig(VegaLiteSchema): - """MarkConfig schema wrapper - - Mapping(required=[]) - - Attributes - ---------- - - align : :class:`Align` - The horizontal alignment of the text. One of ``"left"``, ``"right"``, ``"center"``. - angle : float - The rotation angle of the text, in degrees. - baseline : :class:`TextBaseline` - The vertical alignment of the text. One of ``"top"``, ``"middle"``, ``"bottom"``. - - **Default value:** ``"middle"`` - color : :class:`Color` - Default color. Note that ``fill`` and ``stroke`` have higher precedence than - ``color`` and will override ``color``. - - **Default value:** :raw-html:`` - ``"#4682b4"`` - - **Note:** This property cannot be used in a `style config - `__. - cornerRadius : float - The radius in pixels of rounded rectangle corners. - - **Default value:** ``0`` - cursor : :class:`Cursor` - The mouse cursor used over the mark. Any valid `CSS cursor type - `__ can be used. - dir : :class:`Dir` - The direction of the text. One of ``"ltr"`` (left-to-right) or ``"rtl"`` - (right-to-left). This property determines on which side is truncated in response to - the limit parameter. - - **Default value:** ``"ltr"`` - dx : float - The horizontal offset, in pixels, between the text label and its anchor point. The - offset is applied after rotation by the *angle* property. - dy : float - The vertical offset, in pixels, between the text label and its anchor point. The - offset is applied after rotation by the *angle* property. - ellipsis : string - The ellipsis string for text truncated in response to the limit parameter. - - **Default value:** ``"…"`` - fill : :class:`Color` - Default Fill Color. This has higher precedence than ``config.color`` - - **Default value:** (None) - fillOpacity : float - The fill opacity (value between [0,1]). - - **Default value:** ``1`` - filled : boolean - Whether the mark's color should be used as fill color instead of stroke color. - - **Default value:** ``false`` for ``point``, ``line`` and ``rule`` ; otherwise, - ``true``. - - **Note:** This property cannot be used in a `style config - `__. - font : string - The typeface to set the text in (e.g., ``"Helvetica Neue"`` ). - fontSize : float - The font size, in pixels. - fontStyle : :class:`FontStyle` - The font style (e.g., ``"italic"`` ). - fontWeight : :class:`FontWeight` - The font weight. - This can be either a string (e.g ``"bold"``, ``"normal"`` ) or a number ( ``100``, - ``200``, ``300``, ..., ``900`` where ``"normal"`` = ``400`` and ``"bold"`` = ``700`` - ). - height : float - Height of the marks. - href : string - A URL to load upon mouse click. If defined, the mark acts as a hyperlink. - interpolate : :class:`Interpolate` - The line interpolation method to use for line and area marks. One of the following: - - - * ``"linear"`` : piecewise linear segments, as in a polyline. - * ``"linear-closed"`` : close the linear segments to form a polygon. - * ``"step"`` : alternate between horizontal and vertical segments, as in a step - function. 
- * ``"step-before"`` : alternate between vertical and horizontal segments, as in a - step function. - * ``"step-after"`` : alternate between horizontal and vertical segments, as in a - step function. - * ``"basis"`` : a B-spline, with control point duplication on the ends. - * ``"basis-open"`` : an open B-spline; may not intersect the start or end. - * ``"basis-closed"`` : a closed B-spline, as in a loop. - * ``"cardinal"`` : a Cardinal spline, with control point duplication on the ends. - * ``"cardinal-open"`` : an open Cardinal spline; may not intersect the start or end, - but will intersect other control points. - * ``"cardinal-closed"`` : a closed Cardinal spline, as in a loop. - * ``"bundle"`` : equivalent to basis, except the tension parameter is used to - straighten the spline. - * ``"monotone"`` : cubic interpolation that preserves monotonicity in y. - limit : float - The maximum length of the text mark in pixels. The text value will be automatically - truncated if the rendered size exceeds the limit. - - **Default value:** ``0``, indicating no limit - opacity : float - The overall opacity (value between [0,1]). - - **Default value:** ``0.7`` for non-aggregate plots with ``point``, ``tick``, - ``circle``, or ``square`` marks or layered ``bar`` charts and ``1`` otherwise. - order : anyOf(None, boolean) - For line and trail marks, this ``order`` property can be set to ``null`` or - ``false`` to make the lines use the original order in the data sources. - orient : :class:`Orientation` - The orientation of a non-stacked bar, tick, area, and line charts. - The value is either horizontal (default) or vertical. - - - * For bar, rule and tick, this determines whether the size of the bar and tick - should be applied to x or y dimension. - * For area, this property determines the orient property of the Vega output. - * For line and trail marks, this property determines the sort order of the points in - the line - if ``config.sortLineBy`` is not specified. - For stacked charts, this is always determined by the orientation of the stack; - therefore explicitly specified value will be ignored. - radius : float - Polar coordinate radial offset, in pixels, of the text label from the origin - determined by the ``x`` and ``y`` properties. - shape : string - Shape of the point marks. Supported values include: - - - * plotting shapes: ``"circle"``, ``"square"``, ``"cross"``, ``"diamond"``, - ``"triangle-up"``, ``"triangle-down"``, ``"triangle-right"``, or - ``"triangle-left"``. - * the line symbol ``"stroke"`` - * centered directional shapes ``"arrow"``, ``"wedge"``, or ``"triangle"`` - * a custom `SVG path string - `__ (For correct - sizing, custom shape paths should be defined within a square bounding box with - coordinates ranging from -1 to 1 along both the x and y dimensions.) - - **Default value:** ``"circle"`` - size : float - Default size for marks. - - - * For ``point`` / ``circle`` / ``square``, this represents the pixel area of the - marks. For example: in the case of circles, the radius is determined in part by - the square root of the size value. - * For ``bar``, this represents the band size of the bar, in pixels. - * For ``text``, this represents the font size, in pixels. - - **Default value:** ``30`` for point, circle, square marks; ``rangeStep`` - 1 for bar - marks with discrete dimensions; ``5`` for bar marks with continuous dimensions; - ``11`` for text marks. - stroke : :class:`Color` - Default Stroke Color. 
This has higher precedence than ``config.color`` - - **Default value:** (None) - strokeCap : :class:`StrokeCap` - The stroke cap for line ending style. One of ``"butt"``, ``"round"``, or - ``"square"``. - - **Default value:** ``"square"`` - strokeDash : List(float) - An array of alternating stroke, space lengths for creating dashed or dotted lines. - strokeDashOffset : float - The offset (in pixels) into which to begin drawing with the stroke dash array. - strokeJoin : :class:`StrokeJoin` - The stroke line join method. One of ``"miter"``, ``"round"`` or ``"bevel"``. - - **Default value:** ``"miter"`` - strokeMiterLimit : float - The miter limit at which to bevel a line join. - strokeOpacity : float - The stroke opacity (value between [0,1]). - - **Default value:** ``1`` - strokeWidth : float - The stroke width, in pixels. - tension : float - Depending on the interpolation type, sets the tension parameter (for line and area - marks). - text : string - Placeholder text if the ``text`` channel is not specified - theta : float - Polar coordinate angle, in radians, of the text label from the origin determined by - the ``x`` and ``y`` properties. Values for ``theta`` follow the same convention of - ``arc`` mark ``startAngle`` and ``endAngle`` properties: angles are measured in - radians, with ``0`` indicating "north". - tooltip : anyOf(:class:`Value`, :class:`TooltipContent`, None) - The tooltip text string to show upon mouse hover or an object defining which fields - should the tooltip be derived from. - - - * If ``tooltip`` is ``{"content": "encoding"}``, then all fields from ``encoding`` - will be used. - * If ``tooltip`` is ``{"content": "data"}``, then all fields that appear in the - highlighted data point will be used. - * If set to ``null``, then no tooltip will be used. - width : float - Width of the marks. - x : anyOf(float, enum('width')) - X coordinates of the marks, or width of horizontal ``"bar"`` and ``"area"`` without - specified ``x2`` or ``width``. - - The ``value`` of this channel can be a number or a string ``"width"`` for the width - of the plot. - x2 : anyOf(float, enum('width')) - X2 coordinates for ranged ``"area"``, ``"bar"``, ``"rect"``, and ``"rule"``. - - The ``value`` of this channel can be a number or a string ``"width"`` for the width - of the plot. - y : anyOf(float, enum('height')) - Y coordinates of the marks, or height of vertical ``"bar"`` and ``"area"`` without - specified ``y2`` or ``height``. - - The ``value`` of this channel can be a number or a string ``"height"`` for the - height of the plot. - y2 : anyOf(float, enum('width')) - Y2 coordinates for ranged ``"area"``, ``"bar"``, ``"rect"``, and ``"rule"``. - - The ``value`` of this channel can be a number or a string ``"height"`` for the - height of the plot. 
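-
- A minimal configuration sketch (illustrative; assumes the top-level
- ``altair`` API and its generated ``configure_mark`` chart method)::
-
-     import altair as alt
-     import pandas as pd
-
-     data = pd.DataFrame({'x': [1, 2, 3], 'y': [3, 1, 2]})
-
-     # Defaults applied to all marks: a shared color and 50% opacity.
-     chart = alt.Chart(data).mark_point().encode(
-         x='x:Q', y='y:Q'
-     ).configure_mark(
-         color='steelblue', opacity=0.5
-     )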
- """ - _schema = {'$ref': '#/definitions/MarkConfig'} - - def __init__(self, align=Undefined, angle=Undefined, baseline=Undefined, color=Undefined, - cornerRadius=Undefined, cursor=Undefined, dir=Undefined, dx=Undefined, dy=Undefined, - ellipsis=Undefined, fill=Undefined, fillOpacity=Undefined, filled=Undefined, - font=Undefined, fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, - height=Undefined, href=Undefined, interpolate=Undefined, limit=Undefined, - opacity=Undefined, order=Undefined, orient=Undefined, radius=Undefined, - shape=Undefined, size=Undefined, stroke=Undefined, strokeCap=Undefined, - strokeDash=Undefined, strokeDashOffset=Undefined, strokeJoin=Undefined, - strokeMiterLimit=Undefined, strokeOpacity=Undefined, strokeWidth=Undefined, - tension=Undefined, text=Undefined, theta=Undefined, tooltip=Undefined, width=Undefined, - x=Undefined, x2=Undefined, y=Undefined, y2=Undefined, **kwds): - super(MarkConfig, self).__init__(align=align, angle=angle, baseline=baseline, color=color, - cornerRadius=cornerRadius, cursor=cursor, dir=dir, dx=dx, - dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, - filled=filled, font=font, fontSize=fontSize, - fontStyle=fontStyle, fontWeight=fontWeight, height=height, - href=href, interpolate=interpolate, limit=limit, - opacity=opacity, order=order, orient=orient, radius=radius, - shape=shape, size=size, stroke=stroke, strokeCap=strokeCap, - strokeDash=strokeDash, strokeDashOffset=strokeDashOffset, - strokeJoin=strokeJoin, strokeMiterLimit=strokeMiterLimit, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, - tension=tension, text=text, theta=theta, tooltip=tooltip, - width=width, x=x, x2=x2, y=y, y2=y2, **kwds) - - -class MarkDef(AnyMark): - """MarkDef schema wrapper - - Mapping(required=[type]) - - Attributes - ---------- - - type : :class:`Mark` - The mark type. This could a primitive mark type - (one of ``"bar"``, ``"circle"``, ``"square"``, ``"tick"``, ``"line"``, - ``"area"``, ``"point"``, ``"geoshape"``, ``"rule"``, and ``"text"`` ) - or a composite mark type ( ``"boxplot"``, ``"errorband"``, ``"errorbar"`` ). - align : :class:`Align` - The horizontal alignment of the text. One of ``"left"``, ``"right"``, ``"center"``. - angle : float - The rotation angle of the text, in degrees. - baseline : :class:`TextBaseline` - The vertical alignment of the text. One of ``"top"``, ``"middle"``, ``"bottom"``. - - **Default value:** ``"middle"`` - binSpacing : float - Offset between bars for binned field. Ideal value for this is either 0 (Preferred - by statisticians) or 1 (Vega-Lite Default, D3 example style). - - **Default value:** ``1`` - clip : boolean - Whether a mark be clipped to the enclosing group’s width and height. - color : :class:`Color` - Default color. Note that ``fill`` and ``stroke`` have higher precedence than - ``color`` and will override ``color``. - - **Default value:** :raw-html:`` - ``"#4682b4"`` - - **Note:** This property cannot be used in a `style config - `__. - cornerRadius : float - The radius in pixels of rounded rectangle corners. - - **Default value:** ``0`` - cursor : :class:`Cursor` - The mouse cursor used over the mark. Any valid `CSS cursor type - `__ can be used. - dir : :class:`Dir` - The direction of the text. One of ``"ltr"`` (left-to-right) or ``"rtl"`` - (right-to-left). This property determines on which side is truncated in response to - the limit parameter. - - **Default value:** ``"ltr"`` - dx : float - The horizontal offset, in pixels, between the text label and its anchor point. 
The - offset is applied after rotation by the *angle* property. - dy : float - The vertical offset, in pixels, between the text label and its anchor point. The - offset is applied after rotation by the *angle* property. - ellipsis : string - The ellipsis string for text truncated in response to the limit parameter. - - **Default value:** ``"…"`` - fill : :class:`Color` - Default Fill Color. This has higher precedence than ``config.color`` - - **Default value:** (None) - fillOpacity : float - The fill opacity (value between [0,1]). - - **Default value:** ``1`` - filled : boolean - Whether the mark's color should be used as fill color instead of stroke color. - - **Default value:** ``false`` for ``point``, ``line`` and ``rule`` ; otherwise, - ``true``. - - **Note:** This property cannot be used in a `style config - `__. - font : string - The typeface to set the text in (e.g., ``"Helvetica Neue"`` ). - fontSize : float - The font size, in pixels. - fontStyle : :class:`FontStyle` - The font style (e.g., ``"italic"`` ). - fontWeight : :class:`FontWeight` - The font weight. - This can be either a string (e.g ``"bold"``, ``"normal"`` ) or a number ( ``100``, - ``200``, ``300``, ..., ``900`` where ``"normal"`` = ``400`` and ``"bold"`` = ``700`` - ). - height : float - Height of the marks. - href : string - A URL to load upon mouse click. If defined, the mark acts as a hyperlink. - interpolate : :class:`Interpolate` - The line interpolation method to use for line and area marks. One of the following: - - - * ``"linear"`` : piecewise linear segments, as in a polyline. - * ``"linear-closed"`` : close the linear segments to form a polygon. - * ``"step"`` : alternate between horizontal and vertical segments, as in a step - function. - * ``"step-before"`` : alternate between vertical and horizontal segments, as in a - step function. - * ``"step-after"`` : alternate between horizontal and vertical segments, as in a - step function. - * ``"basis"`` : a B-spline, with control point duplication on the ends. - * ``"basis-open"`` : an open B-spline; may not intersect the start or end. - * ``"basis-closed"`` : a closed B-spline, as in a loop. - * ``"cardinal"`` : a Cardinal spline, with control point duplication on the ends. - * ``"cardinal-open"`` : an open Cardinal spline; may not intersect the start or end, - but will intersect other control points. - * ``"cardinal-closed"`` : a closed Cardinal spline, as in a loop. - * ``"bundle"`` : equivalent to basis, except the tension parameter is used to - straighten the spline. - * ``"monotone"`` : cubic interpolation that preserves monotonicity in y. - limit : float - The maximum length of the text mark in pixels. The text value will be automatically - truncated if the rendered size exceeds the limit. - - **Default value:** ``0``, indicating no limit - line : anyOf(boolean, :class:`OverlayMarkDef`) - A flag for overlaying line on top of area marks, or an object defining the - properties of the overlayed lines. - - - If this value is an empty object ( ``{}`` ) or ``true``, lines with default - properties will be used. - - If this value is ``false``, no lines would be automatically added to area marks. - - **Default value:** ``false``. - opacity : float - The overall opacity (value between [0,1]). - - **Default value:** ``0.7`` for non-aggregate plots with ``point``, ``tick``, - ``circle``, or ``square`` marks or layered ``bar`` charts and ``1`` otherwise. 
- order : anyOf(None, boolean) - For line and trail marks, this ``order`` property can be set to ``null`` or - ``false`` to make the lines use the original order in the data sources. - orient : :class:`Orientation` - The orientation of a non-stacked bar, tick, area, and line charts. - The value is either horizontal (default) or vertical. - - - * For bar, rule and tick, this determines whether the size of the bar and tick - should be applied to x or y dimension. - * For area, this property determines the orient property of the Vega output. - * For line and trail marks, this property determines the sort order of the points in - the line - if ``config.sortLineBy`` is not specified. - For stacked charts, this is always determined by the orientation of the stack; - therefore explicitly specified value will be ignored. - point : anyOf(boolean, :class:`OverlayMarkDef`, enum('transparent')) - A flag for overlaying points on top of line or area marks, or an object defining the - properties of the overlayed points. - - - If this property is ``"transparent"``, transparent points will be used (for - enhancing tooltips and selections). - - If this property is an empty object ( ``{}`` ) or ``true``, filled points with - default properties will be used. - - If this property is ``false``, no points would be automatically added to line or - area marks. - - **Default value:** ``false``. - radius : float - Polar coordinate radial offset, in pixels, of the text label from the origin - determined by the ``x`` and ``y`` properties. - shape : string - Shape of the point marks. Supported values include: - - - * plotting shapes: ``"circle"``, ``"square"``, ``"cross"``, ``"diamond"``, - ``"triangle-up"``, ``"triangle-down"``, ``"triangle-right"``, or - ``"triangle-left"``. - * the line symbol ``"stroke"`` - * centered directional shapes ``"arrow"``, ``"wedge"``, or ``"triangle"`` - * a custom `SVG path string - `__ (For correct - sizing, custom shape paths should be defined within a square bounding box with - coordinates ranging from -1 to 1 along both the x and y dimensions.) - - **Default value:** ``"circle"`` - size : float - Default size for marks. - - - * For ``point`` / ``circle`` / ``square``, this represents the pixel area of the - marks. For example: in the case of circles, the radius is determined in part by - the square root of the size value. - * For ``bar``, this represents the band size of the bar, in pixels. - * For ``text``, this represents the font size, in pixels. - - **Default value:** ``30`` for point, circle, square marks; ``rangeStep`` - 1 for bar - marks with discrete dimensions; ``5`` for bar marks with continuous dimensions; - ``11`` for text marks. - stroke : :class:`Color` - Default Stroke Color. This has higher precedence than ``config.color`` - - **Default value:** (None) - strokeCap : :class:`StrokeCap` - The stroke cap for line ending style. One of ``"butt"``, ``"round"``, or - ``"square"``. - - **Default value:** ``"square"`` - strokeDash : List(float) - An array of alternating stroke, space lengths for creating dashed or dotted lines. - strokeDashOffset : float - The offset (in pixels) into which to begin drawing with the stroke dash array. - strokeJoin : :class:`StrokeJoin` - The stroke line join method. One of ``"miter"``, ``"round"`` or ``"bevel"``. - - **Default value:** ``"miter"`` - strokeMiterLimit : float - The miter limit at which to bevel a line join. - strokeOpacity : float - The stroke opacity (value between [0,1]). 
- - **Default value:** ``1`` - strokeWidth : float - The stroke width, in pixels. - style : anyOf(string, List(string)) - A string or array of strings indicating the name of custom styles to apply to the - mark. A style is a named collection of mark property defaults defined within the - `style configuration - `__. If style is an - array, later styles will override earlier styles. Any `mark properties - `__ explicitly - defined within the ``encoding`` will override a style default. - - **Default value:** The mark's name. For example, a bar mark will have style - ``"bar"`` by default. - **Note:** Any specified style will augment the default style. For example, a bar - mark with ``"style": "foo"`` will receive from ``config.style.bar`` and - ``config.style.foo`` (the specified style ``"foo"`` has higher precedence). - tension : float - Depending on the interpolation type, sets the tension parameter (for line and area - marks). - text : string - Placeholder text if the ``text`` channel is not specified - theta : float - Polar coordinate angle, in radians, of the text label from the origin determined by - the ``x`` and ``y`` properties. Values for ``theta`` follow the same convention of - ``arc`` mark ``startAngle`` and ``endAngle`` properties: angles are measured in - radians, with ``0`` indicating "north". - thickness : float - Thickness of the tick mark. - - **Default value:** ``1`` - tooltip : anyOf(:class:`Value`, :class:`TooltipContent`, None) - The tooltip text string to show upon mouse hover or an object defining which fields - should the tooltip be derived from. - - - * If ``tooltip`` is ``{"content": "encoding"}``, then all fields from ``encoding`` - will be used. - * If ``tooltip`` is ``{"content": "data"}``, then all fields that appear in the - highlighted data point will be used. - * If set to ``null``, then no tooltip will be used. - width : float - Width of the marks. - x : anyOf(float, enum('width')) - X coordinates of the marks, or width of horizontal ``"bar"`` and ``"area"`` without - specified ``x2`` or ``width``. - - The ``value`` of this channel can be a number or a string ``"width"`` for the width - of the plot. - x2 : anyOf(float, enum('width')) - X2 coordinates for ranged ``"area"``, ``"bar"``, ``"rect"``, and ``"rule"``. - - The ``value`` of this channel can be a number or a string ``"width"`` for the width - of the plot. - x2Offset : float - Offset for x2-position. - xOffset : float - Offset for x-position. - y : anyOf(float, enum('height')) - Y coordinates of the marks, or height of vertical ``"bar"`` and ``"area"`` without - specified ``y2`` or ``height``. - - The ``value`` of this channel can be a number or a string ``"height"`` for the - height of the plot. - y2 : anyOf(float, enum('width')) - Y2 coordinates for ranged ``"area"``, ``"bar"``, ``"rect"``, and ``"rule"``. - - The ``value`` of this channel can be a number or a string ``"height"`` for the - height of the plot. - y2Offset : float - Offset for y2-position. - yOffset : float - Offset for y-position. 
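-
- A usage sketch (illustrative; assumes the ``alt.Chart.mark_*`` shorthand
- methods, which forward their keyword arguments into this mark definition)::
-
-     import altair as alt
-     import pandas as pd
-
-     data = pd.DataFrame({'x': [1, 2, 3], 'y': [2, 5, 3]})
-
-     # Per-mark properties: an area mark clipped to the plot region,
-     # with an overlaid boundary line and point markers.
-     chart = alt.Chart(data).mark_area(
-         clip=True, line=True, point=True, opacity=0.4
-     ).encode(x='x:Q', y='y:Q')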
- """ - _schema = {'$ref': '#/definitions/MarkDef'} - - def __init__(self, type=Undefined, align=Undefined, angle=Undefined, baseline=Undefined, - binSpacing=Undefined, clip=Undefined, color=Undefined, cornerRadius=Undefined, - cursor=Undefined, dir=Undefined, dx=Undefined, dy=Undefined, ellipsis=Undefined, - fill=Undefined, fillOpacity=Undefined, filled=Undefined, font=Undefined, - fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, height=Undefined, - href=Undefined, interpolate=Undefined, limit=Undefined, line=Undefined, - opacity=Undefined, order=Undefined, orient=Undefined, point=Undefined, - radius=Undefined, shape=Undefined, size=Undefined, stroke=Undefined, - strokeCap=Undefined, strokeDash=Undefined, strokeDashOffset=Undefined, - strokeJoin=Undefined, strokeMiterLimit=Undefined, strokeOpacity=Undefined, - strokeWidth=Undefined, style=Undefined, tension=Undefined, text=Undefined, - theta=Undefined, thickness=Undefined, tooltip=Undefined, width=Undefined, x=Undefined, - x2=Undefined, x2Offset=Undefined, xOffset=Undefined, y=Undefined, y2=Undefined, - y2Offset=Undefined, yOffset=Undefined, **kwds): - super(MarkDef, self).__init__(type=type, align=align, angle=angle, baseline=baseline, - binSpacing=binSpacing, clip=clip, color=color, - cornerRadius=cornerRadius, cursor=cursor, dir=dir, dx=dx, dy=dy, - ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, - filled=filled, font=font, fontSize=fontSize, fontStyle=fontStyle, - fontWeight=fontWeight, height=height, href=href, - interpolate=interpolate, limit=limit, line=line, opacity=opacity, - order=order, orient=orient, point=point, radius=radius, - shape=shape, size=size, stroke=stroke, strokeCap=strokeCap, - strokeDash=strokeDash, strokeDashOffset=strokeDashOffset, - strokeJoin=strokeJoin, strokeMiterLimit=strokeMiterLimit, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, style=style, - tension=tension, text=text, theta=theta, thickness=thickness, - tooltip=tooltip, width=width, x=x, x2=x2, x2Offset=x2Offset, - xOffset=xOffset, y=y, y2=y2, y2Offset=y2Offset, yOffset=yOffset, - **kwds) - - -class Month(VegaLiteSchema): - """Month schema wrapper - - float - """ - _schema = {'$ref': '#/definitions/Month'} - - def __init__(self, *args): - super(Month, self).__init__(*args) - - -class MultiSelectionConfig(VegaLiteSchema): - """MultiSelectionConfig schema wrapper - - Mapping(required=[]) - - Attributes - ---------- - - clear : anyOf(:class:`EventStream`, boolean) - Clears the selection, emptying it of all values. Can be an - `EventStream `__ or ``false`` to - disable. - - **Default value:** ``dblclick``. - - **See also:** `clear `__ - documentation. - empty : enum('all', 'none') - By default, ``all`` data values are considered to lie within an empty selection. - When set to ``none``, empty selections contain no data values. - encodings : List(:class:`SingleDefUnitChannel`) - An array of encoding channels. The corresponding data field values - must match for a data tuple to fall within the selection. - - **See also:** `encodings `__ - documentation. - fields : List(:class:`FieldName`) - An array of field names whose values must match for a data tuple to - fall within the selection. - - **See also:** `fields `__ - documentation. - init : anyOf(:class:`SelectionInitMapping`, List(:class:`SelectionInitMapping`)) - Initialize the selection with a mapping between `projected channels or field names - `__ and an initial - value (or array of values). - - **See also:** `init `__ - documentation. 
- nearest : boolean - When true, an invisible voronoi diagram is computed to accelerate discrete - selection. The data value *nearest* the mouse cursor is added to the selection. - - **See also:** `nearest `__ - documentation. - on : :class:`EventStream` - A `Vega event stream `__ (object or - selector) that triggers the selection. - For interval selections, the event stream must specify a `start and end - `__. - resolve : :class:`SelectionResolution` - With layered and multi-view displays, a strategy that determines how - selections' data queries are resolved when applied in a filter transform, - conditional encoding rule, or scale domain. - - **See also:** `resolve - `__ documentation. - toggle : anyOf(string, boolean) - Controls whether data values should be toggled or only ever inserted into - multi selections. Can be ``true``, ``false`` (for insertion only), or a - `Vega expression `__. - - **Default value:** ``true``, which corresponds to ``event.shiftKey`` (i.e., - data values are toggled when a user interacts with the shift-key pressed). - - **See also:** `toggle `__ - documentation. - """ - _schema = {'$ref': '#/definitions/MultiSelectionConfig'} - - def __init__(self, clear=Undefined, empty=Undefined, encodings=Undefined, fields=Undefined, - init=Undefined, nearest=Undefined, on=Undefined, resolve=Undefined, toggle=Undefined, - **kwds): - super(MultiSelectionConfig, self).__init__(clear=clear, empty=empty, encodings=encodings, - fields=fields, init=init, nearest=nearest, on=on, - resolve=resolve, toggle=toggle, **kwds) - - -class NamedData(DataSource): - """NamedData schema wrapper - - Mapping(required=[name]) - - Attributes - ---------- - - name : string - Provide a placeholder name and bind data at runtime. - format : :class:`DataFormat` - An object that specifies the format for parsing the data. - """ - _schema = {'$ref': '#/definitions/NamedData'} - - def __init__(self, name=Undefined, format=Undefined, **kwds): - super(NamedData, self).__init__(name=name, format=format, **kwds) - - -class NiceTime(VegaLiteSchema): - """NiceTime schema wrapper - - enum('second', 'minute', 'hour', 'day', 'week', 'month', 'year') - """ - _schema = {'$ref': '#/definitions/NiceTime'} - - def __init__(self, *args): - super(NiceTime, self).__init__(*args) - - -class NumberValueDef(VegaLiteSchema): - """NumberValueDef schema wrapper - - Mapping(required=[value]) - Definition object for a constant value of an encoding channel. - - Attributes - ---------- - - value : float - A constant value in visual domain (e.g., ``"red"`` / "#0099ff" for color, values - between ``0`` to ``1`` for opacity). - """ - _schema = {'$ref': '#/definitions/NumberValueDef'} - - def __init__(self, value=Undefined, **kwds): - super(NumberValueDef, self).__init__(value=value, **kwds) - - -class NumericFieldDefWithCondition(VegaLiteSchema): - """NumericFieldDefWithCondition schema wrapper - - Mapping(required=[type]) - A FieldDef with Condition :raw-html:`` - - Attributes - ---------- - - type : :class:`StandardType` - The encoded field's type of measurement ( ``"quantitative"``, ``"temporal"``, - ``"ordinal"``, or ``"nominal"`` ). - It can also be a ``"geojson"`` type for encoding `'geoshape' - `__. - - **Note:** - - - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). 
- * Data ``type`` describes the semantics of the data rather than the primitive data - types ( ``number``, ``string``, etc.). The same primitive data type can have - different types of measurement. For example, numeric data can represent - quantitative, ordinal, or nominal data. - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (for using a temporal scale) or `"ordinal" (for using - an ordinal scale) `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat", "type": "quantitative"}``. The ``"type"`` of the aggregate output - is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - aggregate : :class:`Aggregate` - Aggregation function for the field - (e.g., ``mean``, ``sum``, ``median``, ``min``, ``max``, ``count`` ). - - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - bin : anyOf(boolean, :class:`BinParams`, None) - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating that the - data for ``x`` or ``y`` channel are binned before they are imported into Vega-Lite ( - ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. - condition : anyOf(:class:`ConditionalNumberValueDef`, - List(:class:`ConditionalNumberValueDef`)) - One or more value definition(s) with `a selection or a test predicate - `__. - - **Note:** A field definition's ``condition`` property can only contain `conditional - value definitions `__ - since Vega-Lite only allows at most one encoded field per encoding channel. - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value - or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. - - **Notes:** - 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access nested - objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). - If field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). - See more details about escaping in the `field documentation - `__. - 2) ``field`` is not required if ``aggregate`` is ``count``. - legend : anyOf(:class:`Legend`, None) - An object defining properties of the legend. - If ``null``, the legend for the encoding channel will be removed. - - **Default value:** If undefined, default `legend properties - `__ are applied. 
- - **See also:** `legend `__ - documentation. - scale : anyOf(:class:`Scale`, None) - An object defining properties of the channel's scale, which is the function that - transforms values in the data domain (numbers, dates, strings, etc) to visual values - (pixels, colors, sizes) of the encoding channels. - - If ``null``, the scale will be `disabled and the data value will be directly encoded - `__. - - **Default value:** If undefined, default `scale properties - `__ are applied. - - **See also:** `scale `__ - documentation. - sort : :class:`Sort` - Sort order for the encoded field. - - For continuous fields (quantitative or temporal), ``sort`` can be either - ``"ascending"`` or ``"descending"``. - - For discrete fields, ``sort`` can be one of the following: - - - * ``"ascending"`` or ``"descending"`` -- for sorting by the values' natural order in - Javascript. - * `A sort-by-encoding definition - `__ for sorting - by another encoding channel. (This type of sort definition is not available for - ``row`` and ``column`` channels.) - * `A sort field definition - `__ for sorting by - another field. - * `An array specifying the field values in preferred order - `__. In this case, the - sort order will obey the values in the array, followed by any unspecified values - in their original order. For discrete time field, values in the sort array can be - `date-time definition objects `__. In addition, for time units - ``"month"`` and ``"day"``, the values can be the month or day names (case - insensitive) or their 3-letter initials (e.g., ``"Mon"``, ``"Tue"`` ). - * ``null`` indicating no sort. - - **Default value:** ``"ascending"`` - - **Note:** ``null`` is not supported for ``row`` and ``column``. - - **See also:** `sort `__ - documentation. - timeUnit : :class:`TimeUnit` - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field. - or `a temporal field that gets casted as ordinal - `__. - - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(string, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. 
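-
- A conditional-encoding sketch (illustrative; the data frame is an assumption,
- while ``alt.condition`` and interval selections are part of the public API)::
-
-     import altair as alt
-     import pandas as pd
-
-     data = pd.DataFrame({'x': [1, 2, 3], 'y': [4, 1, 3], 'pop': [10, 40, 25]})
-     brush = alt.selection_interval()
-
-     # Size is driven by the 'pop' field inside the brush, and falls back
-     # to a constant value for points outside it.
-     chart = alt.Chart(data).mark_point().encode(
-         x='x:Q', y='y:Q',
-         size=alt.condition(brush, 'pop:Q', alt.value(20))
-     ).add_selection(brush)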
- """ - _schema = {'$ref': '#/definitions/NumericFieldDefWithCondition'} - - def __init__(self, type=Undefined, aggregate=Undefined, bin=Undefined, condition=Undefined, - field=Undefined, legend=Undefined, scale=Undefined, sort=Undefined, timeUnit=Undefined, - title=Undefined, **kwds): - super(NumericFieldDefWithCondition, self).__init__(type=type, aggregate=aggregate, bin=bin, - condition=condition, field=field, - legend=legend, scale=scale, sort=sort, - timeUnit=timeUnit, title=title, **kwds) - - -class NumericValueDefWithCondition(VegaLiteSchema): - """NumericValueDefWithCondition schema wrapper - - Mapping(required=[]) - A ValueDef with Condition where either the condition or the value are - optional. - - Attributes - ---------- - - condition : anyOf(:class:`ConditionalMarkPropFieldDef`, :class:`ConditionalNumberValueDef`, - List(:class:`ConditionalNumberValueDef`)) - A field definition or one or more value definition(s) with a selection predicate. - value : float - A constant value in visual domain (e.g., ``"red"`` / "#0099ff" for color, values - between ``0`` to ``1`` for opacity). - """ - _schema = {'$ref': '#/definitions/NumericValueDefWithCondition'} - - def __init__(self, condition=Undefined, value=Undefined, **kwds): - super(NumericValueDefWithCondition, self).__init__(condition=condition, value=value, **kwds) - - -class OrderFieldDef(VegaLiteSchema): - """OrderFieldDef schema wrapper - - Mapping(required=[type]) - - Attributes - ---------- - - type : :class:`StandardType` - The encoded field's type of measurement ( ``"quantitative"``, ``"temporal"``, - ``"ordinal"``, or ``"nominal"`` ). - It can also be a ``"geojson"`` type for encoding `'geoshape' - `__. - - **Note:** - - - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * Data ``type`` describes the semantics of the data rather than the primitive data - types ( ``number``, ``string``, etc.). The same primitive data type can have - different types of measurement. For example, numeric data can represent - quantitative, ordinal, or nominal data. - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (for using a temporal scale) or `"ordinal" (for using - an ordinal scale) `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat", "type": "quantitative"}``. The ``"type"`` of the aggregate output - is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - aggregate : :class:`Aggregate` - Aggregation function for the field - (e.g., ``mean``, ``sum``, ``median``, ``min``, ``max``, ``count`` ). - - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. 
- bin : anyOf(boolean, :class:`BinParams`, enum('binned'), None) - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating that the - data for ``x`` or ``y`` channel are binned before they are imported into Vega-Lite ( - ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value - or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. - - **Notes:** - 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access nested - objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). - If field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). - See more details about escaping in the `field documentation - `__. - 2) ``field`` is not required if ``aggregate`` is ``count``. - sort : :class:`SortOrder` - The sort order. One of ``"ascending"`` (default) or ``"descending"``. - timeUnit : :class:`TimeUnit` - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field. - or `a temporal field that gets casted as ordinal - `__. - - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(string, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. 
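-
- A sketch of ordering stacked marks (illustrative; assumes the ``alt.Order``
- encoding channel built on this definition and a made-up data frame)::
-
-     import altair as alt
-     import pandas as pd
-
-     data = pd.DataFrame({'x': ['a', 'a', 'b', 'b'],
-                          'y': [1, 2, 3, 1],
-                          'rank': [2, 1, 2, 1]})
-
-     # Stack bar segments by descending 'rank' instead of the default order.
-     chart = alt.Chart(data).mark_bar().encode(
-         x='x:N', y='y:Q', color='rank:O',
-         order=alt.Order('rank:O', sort='descending')
-     )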
- """ - _schema = {'$ref': '#/definitions/OrderFieldDef'} - - def __init__(self, type=Undefined, aggregate=Undefined, bin=Undefined, field=Undefined, - sort=Undefined, timeUnit=Undefined, title=Undefined, **kwds): - super(OrderFieldDef, self).__init__(type=type, aggregate=aggregate, bin=bin, field=field, - sort=sort, timeUnit=timeUnit, title=title, **kwds) - - -class Orient(VegaLiteSchema): - """Orient schema wrapper - - enum('left', 'right', 'top', 'bottom') - """ - _schema = {'$ref': '#/definitions/Orient'} - - def __init__(self, *args): - super(Orient, self).__init__(*args) - - -class Orientation(VegaLiteSchema): - """Orientation schema wrapper - - enum('horizontal', 'vertical') - """ - _schema = {'$ref': '#/definitions/Orientation'} - - def __init__(self, *args): - super(Orientation, self).__init__(*args) - - -class OverlayMarkDef(VegaLiteSchema): - """OverlayMarkDef schema wrapper - - Mapping(required=[]) - - Attributes - ---------- - - align : :class:`Align` - The horizontal alignment of the text. One of ``"left"``, ``"right"``, ``"center"``. - angle : float - The rotation angle of the text, in degrees. - baseline : :class:`TextBaseline` - The vertical alignment of the text. One of ``"top"``, ``"middle"``, ``"bottom"``. - - **Default value:** ``"middle"`` - clip : boolean - Whether a mark be clipped to the enclosing group’s width and height. - color : :class:`Color` - Default color. Note that ``fill`` and ``stroke`` have higher precedence than - ``color`` and will override ``color``. - - **Default value:** :raw-html:`` - ``"#4682b4"`` - - **Note:** This property cannot be used in a `style config - `__. - cornerRadius : float - The radius in pixels of rounded rectangle corners. - - **Default value:** ``0`` - cursor : :class:`Cursor` - The mouse cursor used over the mark. Any valid `CSS cursor type - `__ can be used. - dir : :class:`Dir` - The direction of the text. One of ``"ltr"`` (left-to-right) or ``"rtl"`` - (right-to-left). This property determines on which side is truncated in response to - the limit parameter. - - **Default value:** ``"ltr"`` - dx : float - The horizontal offset, in pixels, between the text label and its anchor point. The - offset is applied after rotation by the *angle* property. - dy : float - The vertical offset, in pixels, between the text label and its anchor point. The - offset is applied after rotation by the *angle* property. - ellipsis : string - The ellipsis string for text truncated in response to the limit parameter. - - **Default value:** ``"…"`` - fill : :class:`Color` - Default Fill Color. This has higher precedence than ``config.color`` - - **Default value:** (None) - fillOpacity : float - The fill opacity (value between [0,1]). - - **Default value:** ``1`` - filled : boolean - Whether the mark's color should be used as fill color instead of stroke color. - - **Default value:** ``false`` for ``point``, ``line`` and ``rule`` ; otherwise, - ``true``. - - **Note:** This property cannot be used in a `style config - `__. - font : string - The typeface to set the text in (e.g., ``"Helvetica Neue"`` ). - fontSize : float - The font size, in pixels. - fontStyle : :class:`FontStyle` - The font style (e.g., ``"italic"`` ). - fontWeight : :class:`FontWeight` - The font weight. - This can be either a string (e.g ``"bold"``, ``"normal"`` ) or a number ( ``100``, - ``200``, ``300``, ..., ``900`` where ``"normal"`` = ``400`` and ``"bold"`` = ``700`` - ). - height : float - Height of the marks. - href : string - A URL to load upon mouse click. 
If defined, the mark acts as a hyperlink. - interpolate : :class:`Interpolate` - The line interpolation method to use for line and area marks. One of the following: - - - * ``"linear"`` : piecewise linear segments, as in a polyline. - * ``"linear-closed"`` : close the linear segments to form a polygon. - * ``"step"`` : alternate between horizontal and vertical segments, as in a step - function. - * ``"step-before"`` : alternate between vertical and horizontal segments, as in a - step function. - * ``"step-after"`` : alternate between horizontal and vertical segments, as in a - step function. - * ``"basis"`` : a B-spline, with control point duplication on the ends. - * ``"basis-open"`` : an open B-spline; may not intersect the start or end. - * ``"basis-closed"`` : a closed B-spline, as in a loop. - * ``"cardinal"`` : a Cardinal spline, with control point duplication on the ends. - * ``"cardinal-open"`` : an open Cardinal spline; may not intersect the start or end, - but will intersect other control points. - * ``"cardinal-closed"`` : a closed Cardinal spline, as in a loop. - * ``"bundle"`` : equivalent to basis, except the tension parameter is used to - straighten the spline. - * ``"monotone"`` : cubic interpolation that preserves monotonicity in y. - limit : float - The maximum length of the text mark in pixels. The text value will be automatically - truncated if the rendered size exceeds the limit. - - **Default value:** ``0``, indicating no limit - opacity : float - The overall opacity (value between [0,1]). - - **Default value:** ``0.7`` for non-aggregate plots with ``point``, ``tick``, - ``circle``, or ``square`` marks or layered ``bar`` charts and ``1`` otherwise. - order : anyOf(None, boolean) - For line and trail marks, this ``order`` property can be set to ``null`` or - ``false`` to make the lines use the original order in the data sources. - orient : :class:`Orientation` - The orientation of a non-stacked bar, tick, area, and line charts. - The value is either horizontal (default) or vertical. - - - * For bar, rule and tick, this determines whether the size of the bar and tick - should be applied to x or y dimension. - * For area, this property determines the orient property of the Vega output. - * For line and trail marks, this property determines the sort order of the points in - the line - if ``config.sortLineBy`` is not specified. - For stacked charts, this is always determined by the orientation of the stack; - therefore explicitly specified value will be ignored. - radius : float - Polar coordinate radial offset, in pixels, of the text label from the origin - determined by the ``x`` and ``y`` properties. - shape : string - Shape of the point marks. Supported values include: - - - * plotting shapes: ``"circle"``, ``"square"``, ``"cross"``, ``"diamond"``, - ``"triangle-up"``, ``"triangle-down"``, ``"triangle-right"``, or - ``"triangle-left"``. - * the line symbol ``"stroke"`` - * centered directional shapes ``"arrow"``, ``"wedge"``, or ``"triangle"`` - * a custom `SVG path string - `__ (For correct - sizing, custom shape paths should be defined within a square bounding box with - coordinates ranging from -1 to 1 along both the x and y dimensions.) - - **Default value:** ``"circle"`` - size : float - Default size for marks. - - - * For ``point`` / ``circle`` / ``square``, this represents the pixel area of the - marks. For example: in the case of circles, the radius is determined in part by - the square root of the size value. 
- * For ``bar``, this represents the band size of the bar, in pixels. - * For ``text``, this represents the font size, in pixels. - - **Default value:** ``30`` for point, circle, square marks; ``rangeStep`` - 1 for bar - marks with discrete dimensions; ``5`` for bar marks with continuous dimensions; - ``11`` for text marks. - stroke : :class:`Color` - Default Stroke Color. This has higher precedence than ``config.color`` - - **Default value:** (None) - strokeCap : :class:`StrokeCap` - The stroke cap for line ending style. One of ``"butt"``, ``"round"``, or - ``"square"``. - - **Default value:** ``"square"`` - strokeDash : List(float) - An array of alternating stroke, space lengths for creating dashed or dotted lines. - strokeDashOffset : float - The offset (in pixels) into which to begin drawing with the stroke dash array. - strokeJoin : :class:`StrokeJoin` - The stroke line join method. One of ``"miter"``, ``"round"`` or ``"bevel"``. - - **Default value:** ``"miter"`` - strokeMiterLimit : float - The miter limit at which to bevel a line join. - strokeOpacity : float - The stroke opacity (value between [0,1]). - - **Default value:** ``1`` - strokeWidth : float - The stroke width, in pixels. - style : anyOf(string, List(string)) - A string or array of strings indicating the name of custom styles to apply to the - mark. A style is a named collection of mark property defaults defined within the - `style configuration - `__. If style is an - array, later styles will override earlier styles. Any `mark properties - `__ explicitly - defined within the ``encoding`` will override a style default. - - **Default value:** The mark's name. For example, a bar mark will have style - ``"bar"`` by default. - **Note:** Any specified style will augment the default style. For example, a bar - mark with ``"style": "foo"`` will receive from ``config.style.bar`` and - ``config.style.foo`` (the specified style ``"foo"`` has higher precedence). - tension : float - Depending on the interpolation type, sets the tension parameter (for line and area - marks). - text : string - Placeholder text if the ``text`` channel is not specified - theta : float - Polar coordinate angle, in radians, of the text label from the origin determined by - the ``x`` and ``y`` properties. Values for ``theta`` follow the same convention of - ``arc`` mark ``startAngle`` and ``endAngle`` properties: angles are measured in - radians, with ``0`` indicating "north". - tooltip : anyOf(:class:`Value`, :class:`TooltipContent`, None) - The tooltip text string to show upon mouse hover or an object defining which fields - should the tooltip be derived from. - - - * If ``tooltip`` is ``{"content": "encoding"}``, then all fields from ``encoding`` - will be used. - * If ``tooltip`` is ``{"content": "data"}``, then all fields that appear in the - highlighted data point will be used. - * If set to ``null``, then no tooltip will be used. - width : float - Width of the marks. - x : anyOf(float, enum('width')) - X coordinates of the marks, or width of horizontal ``"bar"`` and ``"area"`` without - specified ``x2`` or ``width``. - - The ``value`` of this channel can be a number or a string ``"width"`` for the width - of the plot. - x2 : anyOf(float, enum('width')) - X2 coordinates for ranged ``"area"``, ``"bar"``, ``"rect"``, and ``"rule"``. - - The ``value`` of this channel can be a number or a string ``"width"`` for the width - of the plot. - x2Offset : float - Offset for x2-position. - xOffset : float - Offset for x-position. 
- y : anyOf(float, enum('height')) - Y coordinates of the marks, or height of vertical ``"bar"`` and ``"area"`` without - specified ``y2`` or ``height``. - - The ``value`` of this channel can be a number or a string ``"height"`` for the - height of the plot. - y2 : anyOf(float, enum('height')) - Y2 coordinates for ranged ``"area"``, ``"bar"``, ``"rect"``, and ``"rule"``. - - The ``value`` of this channel can be a number or a string ``"height"`` for the - height of the plot. - y2Offset : float - Offset for y2-position. - yOffset : float - Offset for y-position. - """ - _schema = {'$ref': '#/definitions/OverlayMarkDef'} - - def __init__(self, align=Undefined, angle=Undefined, baseline=Undefined, clip=Undefined, - color=Undefined, cornerRadius=Undefined, cursor=Undefined, dir=Undefined, dx=Undefined, - dy=Undefined, ellipsis=Undefined, fill=Undefined, fillOpacity=Undefined, - filled=Undefined, font=Undefined, fontSize=Undefined, fontStyle=Undefined, - fontWeight=Undefined, height=Undefined, href=Undefined, interpolate=Undefined, - limit=Undefined, opacity=Undefined, order=Undefined, orient=Undefined, - radius=Undefined, shape=Undefined, size=Undefined, stroke=Undefined, - strokeCap=Undefined, strokeDash=Undefined, strokeDashOffset=Undefined, - strokeJoin=Undefined, strokeMiterLimit=Undefined, strokeOpacity=Undefined, - strokeWidth=Undefined, style=Undefined, tension=Undefined, text=Undefined, - theta=Undefined, tooltip=Undefined, width=Undefined, x=Undefined, x2=Undefined, - x2Offset=Undefined, xOffset=Undefined, y=Undefined, y2=Undefined, y2Offset=Undefined, - yOffset=Undefined, **kwds): - super(OverlayMarkDef, self).__init__(align=align, angle=angle, baseline=baseline, clip=clip, - color=color, cornerRadius=cornerRadius, cursor=cursor, - dir=dir, dx=dx, dy=dy, ellipsis=ellipsis, fill=fill, - fillOpacity=fillOpacity, filled=filled, font=font, - fontSize=fontSize, fontStyle=fontStyle, - fontWeight=fontWeight, height=height, href=href, - interpolate=interpolate, limit=limit, opacity=opacity, - order=order, orient=orient, radius=radius, shape=shape, - size=size, stroke=stroke, strokeCap=strokeCap, - strokeDash=strokeDash, strokeDashOffset=strokeDashOffset, - strokeJoin=strokeJoin, strokeMiterLimit=strokeMiterLimit, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, - style=style, tension=tension, text=text, theta=theta, - tooltip=tooltip, width=width, x=x, x2=x2, - x2Offset=x2Offset, xOffset=xOffset, y=y, y2=y2, - y2Offset=y2Offset, yOffset=yOffset, **kwds) - - -class Padding(VegaLiteSchema): - """Padding schema wrapper - - anyOf(float, Mapping(required=[])) - """ - _schema = {'$ref': '#/definitions/Padding'} - - def __init__(self, *args, **kwds): - super(Padding, self).__init__(*args, **kwds) - - -class Parse(VegaLiteSchema): - """Parse schema wrapper - - Mapping(required=[]) - """ - _schema = {'$ref': '#/definitions/Parse'} - - def __init__(self, **kwds): - super(Parse, self).__init__(**kwds) - - -class ParseValue(VegaLiteSchema): - """ParseValue schema wrapper - - anyOf(None, string, enum('string'), enum('boolean'), enum('date'), enum('number')) - """ - _schema = {'$ref': '#/definitions/ParseValue'} - - def __init__(self, *args, **kwds): - super(ParseValue, self).__init__(*args, **kwds) - - -class PartsMixinsBoxPlotPart(VegaLiteSchema): - """PartsMixinsBoxPlotPart schema wrapper - - Mapping(required=[]) - - Attributes - ---------- - - box : anyOf(boolean, :class:`MarkConfig`) - - median : anyOf(boolean, :class:`MarkConfig`) - - outliers : anyOf(boolean, :class:`MarkConfig`) - - rule : 
anyOf(boolean, :class:`MarkConfig`) - - ticks : anyOf(boolean, :class:`MarkConfig`) - - """ - _schema = {'$ref': '#/definitions/PartsMixins'} - - def __init__(self, box=Undefined, median=Undefined, outliers=Undefined, rule=Undefined, - ticks=Undefined, **kwds): - super(PartsMixinsBoxPlotPart, self).__init__(box=box, median=median, outliers=outliers, - rule=rule, ticks=ticks, **kwds) - - -class PartsMixinsErrorBandPart(VegaLiteSchema): - """PartsMixinsErrorBandPart schema wrapper - - Mapping(required=[]) - - Attributes - ---------- - - band : anyOf(boolean, :class:`MarkConfig`) - - borders : anyOf(boolean, :class:`MarkConfig`) - - """ - _schema = {'$ref': '#/definitions/PartsMixins'} - - def __init__(self, band=Undefined, borders=Undefined, **kwds): - super(PartsMixinsErrorBandPart, self).__init__(band=band, borders=borders, **kwds) - - -class PartsMixinsErrorBarPart(VegaLiteSchema): - """PartsMixinsErrorBarPart schema wrapper - - Mapping(required=[]) - - Attributes - ---------- - - rule : anyOf(boolean, :class:`MarkConfig`) - - ticks : anyOf(boolean, :class:`MarkConfig`) - - """ - _schema = {'$ref': '#/definitions/PartsMixins'} - - def __init__(self, rule=Undefined, ticks=Undefined, **kwds): - super(PartsMixinsErrorBarPart, self).__init__(rule=rule, ticks=ticks, **kwds) - - -class PositionFieldDef(VegaLiteSchema): - """PositionFieldDef schema wrapper - - Mapping(required=[type]) - - Attributes - ---------- - - type : :class:`StandardType` - The encoded field's type of measurement ( ``"quantitative"``, ``"temporal"``, - ``"ordinal"``, or ``"nominal"`` ). - It can also be a ``"geojson"`` type for encoding `'geoshape' - `__. - - **Note:** - - - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * Data ``type`` describes the semantics of the data rather than the primitive data - types ( ``number``, ``string``, etc.). The same primitive data type can have - different types of measurement. For example, numeric data can represent - quantitative, ordinal, or nominal data. - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (for using a temporal scale) or `"ordinal" (for using - an ordinal scale) `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat", "type": "quantitative"}``. The ``"type"`` of the aggregate output - is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - aggregate : :class:`Aggregate` - Aggregation function for the field - (e.g., ``mean``, ``sum``, ``median``, ``min``, ``max``, ``count`` ). - - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - axis : anyOf(:class:`Axis`, None) - An object defining properties of axis's gridlines, ticks and labels. - If ``null``, the axis for the encoding channel will be removed. - - **Default value:** If undefined, default `axis properties - `__ are applied. 
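# Illustrative example (not part of the generated schema module): a minimal
# sketch of how OverlayMarkDef is typically consumed, via the `point` argument
# of `mark_line`, which styles the point marks overlaid on a line. Assumes
# altair 3.x and pandas; the DataFrame and column names are illustrative.
import altair as alt
import pandas as pd

df = pd.DataFrame({"x": [1, 2, 3, 4], "y": [3, 1, 4, 2]})
line_with_points = alt.Chart(df).mark_line(
    point=alt.OverlayMarkDef(color="red", size=60)  # overlay styled point marks
).encode(x="x:Q", y="y:Q")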
- - **See also:** `axis `__ - documentation. - bin : anyOf(boolean, :class:`BinParams`, enum('binned'), None) - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating that the - data for ``x`` or ``y`` channel are binned before they are imported into Vega-Lite ( - ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value - or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. - - **Notes:** - 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access nested - objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). - If field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). - See more details about escaping in the `field documentation - `__. - 2) ``field`` is not required if ``aggregate`` is ``count``. - impute : :class:`ImputeParams` - An object defining the properties of the Impute Operation to be applied. - The field value of the other positional channel is taken as ``key`` of the - ``Impute`` Operation. - The field of the ``color`` channel if specified is used as ``groupby`` of the - ``Impute`` Operation. - - **See also:** `impute `__ - documentation. - scale : anyOf(:class:`Scale`, None) - An object defining properties of the channel's scale, which is the function that - transforms values in the data domain (numbers, dates, strings, etc) to visual values - (pixels, colors, sizes) of the encoding channels. - - If ``null``, the scale will be `disabled and the data value will be directly encoded - `__. - - **Default value:** If undefined, default `scale properties - `__ are applied. - - **See also:** `scale `__ - documentation. - sort : :class:`Sort` - Sort order for the encoded field. - - For continuous fields (quantitative or temporal), ``sort`` can be either - ``"ascending"`` or ``"descending"``. - - For discrete fields, ``sort`` can be one of the following: - - - * ``"ascending"`` or ``"descending"`` -- for sorting by the values' natural order in - Javascript. - * `A sort-by-encoding definition - `__ for sorting - by another encoding channel. (This type of sort definition is not available for - ``row`` and ``column`` channels.) - * `A sort field definition - `__ for sorting by - another field. - * `An array specifying the field values in preferred order - `__. In this case, the - sort order will obey the values in the array, followed by any unspecified values - in their original order. For discrete time field, values in the sort array can be - `date-time definition objects `__. In addition, for time units - ``"month"`` and ``"day"``, the values can be the month or day names (case - insensitive) or their 3-letter initials (e.g., ``"Mon"``, ``"Tue"`` ). - * ``null`` indicating no sort. 
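# Illustrative example (not part of the generated schema module): a sketch of
# two of the `sort` forms listed above, assuming altair 3.x; the field names
# ("site", "yield", "weekday") are illustrative.
import altair as alt

# Sort one channel by an aggregate of another field:
x_by_mean = alt.X("site:N",
                  sort=alt.EncodingSortField(field="yield", op="mean",
                                             order="descending"))
# Or give an explicit value order; unlisted values keep their original order:
x_by_list = alt.X("weekday:O", sort=["Mon", "Tue", "Wed", "Thu", "Fri"])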
- - **Default value:** ``"ascending"`` - - **Note:** ``null`` is not supported for ``row`` and ``column``. - - **See also:** `sort `__ - documentation. - stack : anyOf(:class:`StackOffset`, None, boolean) - Type of stacking offset if the field should be stacked. - ``stack`` is only applicable for ``x`` and ``y`` channels with continuous domains. - For example, ``stack`` of ``y`` can be used to customize stacking for a vertical bar - chart. - - ``stack`` can be one of the following values: - - - * ``"zero"`` or `true`: stacking with baseline offset at zero value of the scale - (for creating typical stacked - [bar](https://vega.github.io/vega-lite/docs/stack.html#bar) and `area - `__ chart). - * ``"normalize"`` - stacking with normalized domain (for creating `normalized - stacked bar and area charts - `__. - :raw-html:`
      ` - - ``"center"`` - stacking with center baseline (for `streamgraph - `__ ). - * ``null`` or ``false`` - No-stacking. This will produce layered `bar - `__ and area - chart. - - **Default value:** ``zero`` for plots with all of the following conditions are true: - (1) the mark is ``bar`` or ``area`` ; - (2) the stacked measure channel (x or y) has a linear scale; - (3) At least one of non-position channels mapped to an unaggregated field that is - different from x and y. Otherwise, ``null`` by default. - - **See also:** `stack `__ - documentation. - timeUnit : :class:`TimeUnit` - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field. - or `a temporal field that gets casted as ordinal - `__. - - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(string, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - """ - _schema = {'$ref': '#/definitions/PositionFieldDef'} - - def __init__(self, type=Undefined, aggregate=Undefined, axis=Undefined, bin=Undefined, - field=Undefined, impute=Undefined, scale=Undefined, sort=Undefined, stack=Undefined, - timeUnit=Undefined, title=Undefined, **kwds): - super(PositionFieldDef, self).__init__(type=type, aggregate=aggregate, axis=axis, bin=bin, - field=field, impute=impute, scale=scale, sort=sort, - stack=stack, timeUnit=timeUnit, title=title, **kwds) - - -class Predicate(LogicalOperandPredicate): - """Predicate schema wrapper - - anyOf(:class:`FieldEqualPredicate`, :class:`FieldRangePredicate`, - :class:`FieldOneOfPredicate`, :class:`FieldLTPredicate`, :class:`FieldGTPredicate`, - :class:`FieldLTEPredicate`, :class:`FieldGTEPredicate`, :class:`FieldValidPredicate`, - :class:`SelectionPredicate`, string) - """ - _schema = {'$ref': '#/definitions/Predicate'} - - def __init__(self, *args, **kwds): - super(Predicate, self).__init__(*args, **kwds) - - -class FieldEqualPredicate(Predicate): - """FieldEqualPredicate schema wrapper - - Mapping(required=[equal, field]) - - Attributes - ---------- - - equal : anyOf(string, float, boolean, :class:`DateTime`) - The value that the field should be equal to. - field : :class:`FieldName` - Field to be filtered. - timeUnit : :class:`TimeUnit` - Time unit for the field to be filtered. - """ - _schema = {'$ref': '#/definitions/FieldEqualPredicate'} - - def __init__(self, equal=Undefined, field=Undefined, timeUnit=Undefined, **kwds): - super(FieldEqualPredicate, self).__init__(equal=equal, field=field, timeUnit=timeUnit, **kwds) - - -class FieldGTEPredicate(Predicate): - """FieldGTEPredicate schema wrapper - - Mapping(required=[field, gte]) - - Attributes - ---------- - - field : :class:`FieldName` - Field to be filtered. 
- gte : anyOf(string, float, :class:`DateTime`) - The value that the field should be greater than or equals to. - timeUnit : :class:`TimeUnit` - Time unit for the field to be filtered. - """ - _schema = {'$ref': '#/definitions/FieldGTEPredicate'} - - def __init__(self, field=Undefined, gte=Undefined, timeUnit=Undefined, **kwds): - super(FieldGTEPredicate, self).__init__(field=field, gte=gte, timeUnit=timeUnit, **kwds) - - -class FieldGTPredicate(Predicate): - """FieldGTPredicate schema wrapper - - Mapping(required=[field, gt]) - - Attributes - ---------- - - field : :class:`FieldName` - Field to be filtered. - gt : anyOf(string, float, :class:`DateTime`) - The value that the field should be greater than. - timeUnit : :class:`TimeUnit` - Time unit for the field to be filtered. - """ - _schema = {'$ref': '#/definitions/FieldGTPredicate'} - - def __init__(self, field=Undefined, gt=Undefined, timeUnit=Undefined, **kwds): - super(FieldGTPredicate, self).__init__(field=field, gt=gt, timeUnit=timeUnit, **kwds) - - -class FieldLTEPredicate(Predicate): - """FieldLTEPredicate schema wrapper - - Mapping(required=[field, lte]) - - Attributes - ---------- - - field : :class:`FieldName` - Field to be filtered. - lte : anyOf(string, float, :class:`DateTime`) - The value that the field should be less than or equals to. - timeUnit : :class:`TimeUnit` - Time unit for the field to be filtered. - """ - _schema = {'$ref': '#/definitions/FieldLTEPredicate'} - - def __init__(self, field=Undefined, lte=Undefined, timeUnit=Undefined, **kwds): - super(FieldLTEPredicate, self).__init__(field=field, lte=lte, timeUnit=timeUnit, **kwds) - - -class FieldLTPredicate(Predicate): - """FieldLTPredicate schema wrapper - - Mapping(required=[field, lt]) - - Attributes - ---------- - - field : :class:`FieldName` - Field to be filtered. - lt : anyOf(string, float, :class:`DateTime`) - The value that the field should be less than. - timeUnit : :class:`TimeUnit` - Time unit for the field to be filtered. - """ - _schema = {'$ref': '#/definitions/FieldLTPredicate'} - - def __init__(self, field=Undefined, lt=Undefined, timeUnit=Undefined, **kwds): - super(FieldLTPredicate, self).__init__(field=field, lt=lt, timeUnit=timeUnit, **kwds) - - -class FieldOneOfPredicate(Predicate): - """FieldOneOfPredicate schema wrapper - - Mapping(required=[field, oneOf]) - - Attributes - ---------- - - field : :class:`FieldName` - Field to be filtered. - oneOf : anyOf(List(string), List(float), List(boolean), List(:class:`DateTime`)) - A set of values that the ``field`` 's value should be a member of, - for a data item included in the filtered data. - timeUnit : :class:`TimeUnit` - Time unit for the field to be filtered. - """ - _schema = {'$ref': '#/definitions/FieldOneOfPredicate'} - - def __init__(self, field=Undefined, oneOf=Undefined, timeUnit=Undefined, **kwds): - super(FieldOneOfPredicate, self).__init__(field=field, oneOf=oneOf, timeUnit=timeUnit, **kwds) - - -class FieldRangePredicate(Predicate): - """FieldRangePredicate schema wrapper - - Mapping(required=[field, range]) - - Attributes - ---------- - - field : :class:`FieldName` - Field to be filtered. - range : List(anyOf(float, :class:`DateTime`, None)) - An array of inclusive minimum and maximum values - for a field value of a data item to be included in the filtered data. - timeUnit : :class:`TimeUnit` - Time unit for the field to be filtered. 
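# Illustrative example (not part of the generated schema module): the predicate
# classes above are usually passed to `transform_filter`. A sketch assuming
# altair 3.x and pandas; data and field names are illustrative.
import altair as alt
import pandas as pd

df = pd.DataFrame({"year": [1950, 1958, 1962, 1970], "value": [1, 3, 2, 4]})
base = alt.Chart(df).mark_point().encode(x="year:O", y="value:Q")

in_range = base.transform_filter(
    alt.FieldRangePredicate(field="year", range=[1955, 1965])  # inclusive bounds
)
one_of = base.transform_filter(
    alt.FieldOneOfPredicate(field="year", oneOf=[1950, 1970])  # membership test
)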
- """ - _schema = {'$ref': '#/definitions/FieldRangePredicate'} - - def __init__(self, field=Undefined, range=Undefined, timeUnit=Undefined, **kwds): - super(FieldRangePredicate, self).__init__(field=field, range=range, timeUnit=timeUnit, **kwds) - - -class FieldValidPredicate(Predicate): - """FieldValidPredicate schema wrapper - - Mapping(required=[field, valid]) - - Attributes - ---------- - - field : :class:`FieldName` - Field to be filtered. - valid : boolean - If set to true the field's value has to be valid, meaning both not ``null`` and not - `NaN - `__. - timeUnit : :class:`TimeUnit` - Time unit for the field to be filtered. - """ - _schema = {'$ref': '#/definitions/FieldValidPredicate'} - - def __init__(self, field=Undefined, valid=Undefined, timeUnit=Undefined, **kwds): - super(FieldValidPredicate, self).__init__(field=field, valid=valid, timeUnit=timeUnit, **kwds) - - -class Projection(VegaLiteSchema): - """Projection schema wrapper - - Mapping(required=[]) - - Attributes - ---------- - - center : List(float) - Sets the projection’s center to the specified center, a two-element array of - longitude and latitude in degrees. - - **Default value:** ``[0, 0]`` - clipAngle : float - Sets the projection’s clipping circle radius to the specified angle in degrees. If - ``null``, switches to `antimeridian `__ cutting - rather than small-circle clipping. - clipExtent : List(List(float)) - Sets the projection’s viewport clip extent to the specified bounds in pixels. The - extent bounds are specified as an array ``[[x0, y0], [x1, y1]]``, where ``x0`` is - the left-side of the viewport, ``y0`` is the top, ``x1`` is the right and ``y1`` is - the bottom. If ``null``, no viewport clipping is performed. - coefficient : float - - distance : float - - fraction : float - - lobes : float - - parallel : float - - precision : float - Sets the threshold for the projection’s `adaptive resampling - `__ to the specified value in pixels. This - value corresponds to the `Douglas–Peucker distance - `__. - If precision is not specified, returns the projection’s current resampling precision - which defaults to ``√0.5 ≅ 0.70710…``. - radius : float - - ratio : float - - reflectX : boolean - - reflectY : boolean - - rotate : List(float) - Sets the projection’s three-axis rotation to the specified angles, which must be a - two- or three-element array of numbers [ ``lambda``, ``phi``, ``gamma`` ] specifying - the rotation angles in degrees about each spherical axis. (These correspond to yaw, - pitch and roll.) - - **Default value:** ``[0, 0, 0]`` - scale : float - Sets the projection's scale (zoom) value, overriding automatic fitting. - spacing : float - - tilt : float - - translate : List(float) - Sets the projection's translation (pan) value, overriding automatic fitting. - type : :class:`ProjectionType` - The cartographic projection to use. This value is case-insensitive, for example - ``"albers"`` and ``"Albers"`` indicate the same projection type. You can find all - valid projection types `in the documentation - `__. 
- - **Default value:** ``mercator`` - """ - _schema = {'$ref': '#/definitions/Projection'} - - def __init__(self, center=Undefined, clipAngle=Undefined, clipExtent=Undefined, - coefficient=Undefined, distance=Undefined, fraction=Undefined, lobes=Undefined, - parallel=Undefined, precision=Undefined, radius=Undefined, ratio=Undefined, - reflectX=Undefined, reflectY=Undefined, rotate=Undefined, scale=Undefined, - spacing=Undefined, tilt=Undefined, translate=Undefined, type=Undefined, **kwds): - super(Projection, self).__init__(center=center, clipAngle=clipAngle, clipExtent=clipExtent, - coefficient=coefficient, distance=distance, fraction=fraction, - lobes=lobes, parallel=parallel, precision=precision, - radius=radius, ratio=ratio, reflectX=reflectX, - reflectY=reflectY, rotate=rotate, scale=scale, spacing=spacing, - tilt=tilt, translate=translate, type=type, **kwds) - - -class ProjectionConfig(VegaLiteSchema): - """ProjectionConfig schema wrapper - - Mapping(required=[]) - - Attributes - ---------- - - center : List(float) - Sets the projection’s center to the specified center, a two-element array of - longitude and latitude in degrees. - - **Default value:** ``[0, 0]`` - clipAngle : float - Sets the projection’s clipping circle radius to the specified angle in degrees. If - ``null``, switches to `antimeridian `__ cutting - rather than small-circle clipping. - clipExtent : List(List(float)) - Sets the projection’s viewport clip extent to the specified bounds in pixels. The - extent bounds are specified as an array ``[[x0, y0], [x1, y1]]``, where ``x0`` is - the left-side of the viewport, ``y0`` is the top, ``x1`` is the right and ``y1`` is - the bottom. If ``null``, no viewport clipping is performed. - coefficient : float - - distance : float - - fraction : float - - lobes : float - - parallel : float - - precision : float - Sets the threshold for the projection’s `adaptive resampling - `__ to the specified value in pixels. This - value corresponds to the `Douglas–Peucker distance - `__. - If precision is not specified, returns the projection’s current resampling precision - which defaults to ``√0.5 ≅ 0.70710…``. - radius : float - - ratio : float - - reflectX : boolean - - reflectY : boolean - - rotate : List(float) - Sets the projection’s three-axis rotation to the specified angles, which must be a - two- or three-element array of numbers [ ``lambda``, ``phi``, ``gamma`` ] specifying - the rotation angles in degrees about each spherical axis. (These correspond to yaw, - pitch and roll.) - - **Default value:** ``[0, 0, 0]`` - scale : float - Sets the projection's scale (zoom) value, overriding automatic fitting. - spacing : float - - tilt : float - - translate : List(float) - Sets the projection's translation (pan) value, overriding automatic fitting. - type : :class:`ProjectionType` - The cartographic projection to use. This value is case-insensitive, for example - ``"albers"`` and ``"Albers"`` indicate the same projection type. You can find all - valid projection types `in the documentation - `__. 
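# Illustrative example (not part of the generated schema module): a sketch of
# applying a Projection through the chart-level `.project()` helper. Assumes
# altair 3.x plus the optional vega_datasets package for sample geometry.
import altair as alt
from vega_datasets import data  # assumption: vega_datasets is installed

states = alt.topo_feature(data.us_10m.url, "states")
us_map = alt.Chart(states).mark_geoshape(fill="lightgray", stroke="white").project(
    type="albersUsa"  # projection type is case-insensitive, per the docs above
).properties(width=400, height=250)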
- - **Default value:** ``mercator`` - """ - _schema = {'$ref': '#/definitions/ProjectionConfig'} - - def __init__(self, center=Undefined, clipAngle=Undefined, clipExtent=Undefined, - coefficient=Undefined, distance=Undefined, fraction=Undefined, lobes=Undefined, - parallel=Undefined, precision=Undefined, radius=Undefined, ratio=Undefined, - reflectX=Undefined, reflectY=Undefined, rotate=Undefined, scale=Undefined, - spacing=Undefined, tilt=Undefined, translate=Undefined, type=Undefined, **kwds): - super(ProjectionConfig, self).__init__(center=center, clipAngle=clipAngle, - clipExtent=clipExtent, coefficient=coefficient, - distance=distance, fraction=fraction, lobes=lobes, - parallel=parallel, precision=precision, radius=radius, - ratio=ratio, reflectX=reflectX, reflectY=reflectY, - rotate=rotate, scale=scale, spacing=spacing, tilt=tilt, - translate=translate, type=type, **kwds) - - -class ProjectionType(VegaLiteSchema): - """ProjectionType schema wrapper - - enum('albers', 'albersUsa', 'azimuthalEqualArea', 'azimuthalEquidistant', 'conicConformal', - 'conicEqualArea', 'conicEquidistant', 'equirectangular', 'gnomonic', 'identity', 'mercator', - 'naturalEarth1', 'orthographic', 'stereographic', 'transverseMercator') - """ - _schema = {'$ref': '#/definitions/ProjectionType'} - - def __init__(self, *args): - super(ProjectionType, self).__init__(*args) - - -class RangeConfig(VegaLiteSchema): - """RangeConfig schema wrapper - - Mapping(required=[]) - - Attributes - ---------- - - category : anyOf(List(string), :class:`SchemeConfig`) - Default range for *nominal* (categorical) fields. - diverging : anyOf(List(string), :class:`SchemeConfig`) - Default range for diverging *quantitative* fields. - heatmap : anyOf(List(string), :class:`SchemeConfig`) - Default range for *quantitative* heatmaps. - ordinal : anyOf(List(string), :class:`SchemeConfig`) - Default range for *ordinal* fields. - ramp : anyOf(List(string), :class:`SchemeConfig`) - Default range for *quantitative* and *temporal* fields. - symbol : List(string) - Default range palette for the ``shape`` channel. - """ - _schema = {'$ref': '#/definitions/RangeConfig'} - - def __init__(self, category=Undefined, diverging=Undefined, heatmap=Undefined, ordinal=Undefined, - ramp=Undefined, symbol=Undefined, **kwds): - super(RangeConfig, self).__init__(category=category, diverging=diverging, heatmap=heatmap, - ordinal=ordinal, ramp=ramp, symbol=symbol, **kwds) - - -class RangeConfigValue(VegaLiteSchema): - """RangeConfigValue schema wrapper - - anyOf(List(anyOf(float, string)), :class:`SchemeConfig`, Mapping(required=[step])) - """ - _schema = {'$ref': '#/definitions/RangeConfigValue'} - - def __init__(self, *args, **kwds): - super(RangeConfigValue, self).__init__(*args, **kwds) - - -class RectConfig(VegaLiteSchema): - """RectConfig schema wrapper - - Mapping(required=[]) - - Attributes - ---------- - - align : :class:`Align` - The horizontal alignment of the text. One of ``"left"``, ``"right"``, ``"center"``. - angle : float - The rotation angle of the text, in degrees. - baseline : :class:`TextBaseline` - The vertical alignment of the text. One of ``"top"``, ``"middle"``, ``"bottom"``. - - **Default value:** ``"middle"`` - binSpacing : float - Offset between bars for binned field. Ideal value for this is either 0 (Preferred - by statisticians) or 1 (Vega-Lite Default, D3 example style). - - **Default value:** ``1`` - color : :class:`Color` - Default color. 
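# Illustrative example (not part of the generated schema module): RangeConfig
# (above) is normally reached through `configure_range`; a plain dict value is
# validated against SchemeConfig. Assumes altair 3.x; data is illustrative.
import altair as alt
import pandas as pd

df = pd.DataFrame({"a": ["x", "y", "z"], "b": [1, 2, 3]})
themed = alt.Chart(df).mark_bar().encode(
    x="a:N", y="b:Q", color="a:N"
).configure_range(
    category={"scheme": "dark2"}  # default palette for nominal fields
)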
Note that ``fill`` and ``stroke`` have higher precedence than - ``color`` and will override ``color``. - - **Default value:** :raw-html:`` - ``"#4682b4"`` - - **Note:** This property cannot be used in a `style config - `__. - continuousBandSize : float - The default size of the bars on continuous scales. - - **Default value:** ``5`` - cornerRadius : float - The radius in pixels of rounded rectangle corners. - - **Default value:** ``0`` - cursor : :class:`Cursor` - The mouse cursor used over the mark. Any valid `CSS cursor type - `__ can be used. - dir : :class:`Dir` - The direction of the text. One of ``"ltr"`` (left-to-right) or ``"rtl"`` - (right-to-left). This property determines on which side is truncated in response to - the limit parameter. - - **Default value:** ``"ltr"`` - discreteBandSize : float - The default size of the bars with discrete dimensions. If unspecified, the default - size is ``bandSize-1``, - which provides 1 pixel offset between bars. - dx : float - The horizontal offset, in pixels, between the text label and its anchor point. The - offset is applied after rotation by the *angle* property. - dy : float - The vertical offset, in pixels, between the text label and its anchor point. The - offset is applied after rotation by the *angle* property. - ellipsis : string - The ellipsis string for text truncated in response to the limit parameter. - - **Default value:** ``"…"`` - fill : :class:`Color` - Default Fill Color. This has higher precedence than ``config.color`` - - **Default value:** (None) - fillOpacity : float - The fill opacity (value between [0,1]). - - **Default value:** ``1`` - filled : boolean - Whether the mark's color should be used as fill color instead of stroke color. - - **Default value:** ``false`` for ``point``, ``line`` and ``rule`` ; otherwise, - ``true``. - - **Note:** This property cannot be used in a `style config - `__. - font : string - The typeface to set the text in (e.g., ``"Helvetica Neue"`` ). - fontSize : float - The font size, in pixels. - fontStyle : :class:`FontStyle` - The font style (e.g., ``"italic"`` ). - fontWeight : :class:`FontWeight` - The font weight. - This can be either a string (e.g ``"bold"``, ``"normal"`` ) or a number ( ``100``, - ``200``, ``300``, ..., ``900`` where ``"normal"`` = ``400`` and ``"bold"`` = ``700`` - ). - height : float - Height of the marks. - href : string - A URL to load upon mouse click. If defined, the mark acts as a hyperlink. - interpolate : :class:`Interpolate` - The line interpolation method to use for line and area marks. One of the following: - - - * ``"linear"`` : piecewise linear segments, as in a polyline. - * ``"linear-closed"`` : close the linear segments to form a polygon. - * ``"step"`` : alternate between horizontal and vertical segments, as in a step - function. - * ``"step-before"`` : alternate between vertical and horizontal segments, as in a - step function. - * ``"step-after"`` : alternate between horizontal and vertical segments, as in a - step function. - * ``"basis"`` : a B-spline, with control point duplication on the ends. - * ``"basis-open"`` : an open B-spline; may not intersect the start or end. - * ``"basis-closed"`` : a closed B-spline, as in a loop. - * ``"cardinal"`` : a Cardinal spline, with control point duplication on the ends. - * ``"cardinal-open"`` : an open Cardinal spline; may not intersect the start or end, - but will intersect other control points. - * ``"cardinal-closed"`` : a closed Cardinal spline, as in a loop. 
- * ``"bundle"`` : equivalent to basis, except the tension parameter is used to - straighten the spline. - * ``"monotone"`` : cubic interpolation that preserves monotonicity in y. - limit : float - The maximum length of the text mark in pixels. The text value will be automatically - truncated if the rendered size exceeds the limit. - - **Default value:** ``0``, indicating no limit - opacity : float - The overall opacity (value between [0,1]). - - **Default value:** ``0.7`` for non-aggregate plots with ``point``, ``tick``, - ``circle``, or ``square`` marks or layered ``bar`` charts and ``1`` otherwise. - order : anyOf(None, boolean) - For line and trail marks, this ``order`` property can be set to ``null`` or - ``false`` to make the lines use the original order in the data sources. - orient : :class:`Orientation` - The orientation of a non-stacked bar, tick, area, and line charts. - The value is either horizontal (default) or vertical. - - - * For bar, rule and tick, this determines whether the size of the bar and tick - should be applied to x or y dimension. - * For area, this property determines the orient property of the Vega output. - * For line and trail marks, this property determines the sort order of the points in - the line - if ``config.sortLineBy`` is not specified. - For stacked charts, this is always determined by the orientation of the stack; - therefore explicitly specified value will be ignored. - radius : float - Polar coordinate radial offset, in pixels, of the text label from the origin - determined by the ``x`` and ``y`` properties. - shape : string - Shape of the point marks. Supported values include: - - - * plotting shapes: ``"circle"``, ``"square"``, ``"cross"``, ``"diamond"``, - ``"triangle-up"``, ``"triangle-down"``, ``"triangle-right"``, or - ``"triangle-left"``. - * the line symbol ``"stroke"`` - * centered directional shapes ``"arrow"``, ``"wedge"``, or ``"triangle"`` - * a custom `SVG path string - `__ (For correct - sizing, custom shape paths should be defined within a square bounding box with - coordinates ranging from -1 to 1 along both the x and y dimensions.) - - **Default value:** ``"circle"`` - size : float - Default size for marks. - - - * For ``point`` / ``circle`` / ``square``, this represents the pixel area of the - marks. For example: in the case of circles, the radius is determined in part by - the square root of the size value. - * For ``bar``, this represents the band size of the bar, in pixels. - * For ``text``, this represents the font size, in pixels. - - **Default value:** ``30`` for point, circle, square marks; ``rangeStep`` - 1 for bar - marks with discrete dimensions; ``5`` for bar marks with continuous dimensions; - ``11`` for text marks. - stroke : :class:`Color` - Default Stroke Color. This has higher precedence than ``config.color`` - - **Default value:** (None) - strokeCap : :class:`StrokeCap` - The stroke cap for line ending style. One of ``"butt"``, ``"round"``, or - ``"square"``. - - **Default value:** ``"square"`` - strokeDash : List(float) - An array of alternating stroke, space lengths for creating dashed or dotted lines. - strokeDashOffset : float - The offset (in pixels) into which to begin drawing with the stroke dash array. - strokeJoin : :class:`StrokeJoin` - The stroke line join method. One of ``"miter"``, ``"round"`` or ``"bevel"``. - - **Default value:** ``"miter"`` - strokeMiterLimit : float - The miter limit at which to bevel a line join. - strokeOpacity : float - The stroke opacity (value between [0,1]). 
- - **Default value:** ``1`` - strokeWidth : float - The stroke width, in pixels. - tension : float - Depending on the interpolation type, sets the tension parameter (for line and area - marks). - text : string - Placeholder text if the ``text`` channel is not specified - theta : float - Polar coordinate angle, in radians, of the text label from the origin determined by - the ``x`` and ``y`` properties. Values for ``theta`` follow the same convention of - ``arc`` mark ``startAngle`` and ``endAngle`` properties: angles are measured in - radians, with ``0`` indicating "north". - tooltip : anyOf(:class:`Value`, :class:`TooltipContent`, None) - The tooltip text string to show upon mouse hover or an object defining which fields - should the tooltip be derived from. - - - * If ``tooltip`` is ``{"content": "encoding"}``, then all fields from ``encoding`` - will be used. - * If ``tooltip`` is ``{"content": "data"}``, then all fields that appear in the - highlighted data point will be used. - * If set to ``null``, then no tooltip will be used. - width : float - Width of the marks. - x : anyOf(float, enum('width')) - X coordinates of the marks, or width of horizontal ``"bar"`` and ``"area"`` without - specified ``x2`` or ``width``. - - The ``value`` of this channel can be a number or a string ``"width"`` for the width - of the plot. - x2 : anyOf(float, enum('width')) - X2 coordinates for ranged ``"area"``, ``"bar"``, ``"rect"``, and ``"rule"``. - - The ``value`` of this channel can be a number or a string ``"width"`` for the width - of the plot. - y : anyOf(float, enum('height')) - Y coordinates of the marks, or height of vertical ``"bar"`` and ``"area"`` without - specified ``y2`` or ``height``. - - The ``value`` of this channel can be a number or a string ``"height"`` for the - height of the plot. - y2 : anyOf(float, enum('height')) - Y2 coordinates for ranged ``"area"``, ``"bar"``, ``"rect"``, and ``"rule"``. - - The ``value`` of this channel can be a number or a string ``"height"`` for the - height of the plot. 
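# Illustrative example (not part of the generated schema module): the x/x2 and
# y/y2 value docs above describe ranged marks; map the start field to x and the
# end field to x2 (x2 shares x's scale, see SecondaryFieldDef below). A sketch
# assuming altair 3.x and pandas; data is illustrative.
import altair as alt
import pandas as pd

tasks = pd.DataFrame({"task": ["A", "B"], "start": [1, 3], "end": [4, 7]})
gantt = alt.Chart(tasks).mark_bar().encode(
    x="start:Q", x2="end:Q", y="task:N"  # ranged bars, Gantt-style
)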
- """ - _schema = {'$ref': '#/definitions/RectConfig'} - - def __init__(self, align=Undefined, angle=Undefined, baseline=Undefined, binSpacing=Undefined, - color=Undefined, continuousBandSize=Undefined, cornerRadius=Undefined, - cursor=Undefined, dir=Undefined, discreteBandSize=Undefined, dx=Undefined, - dy=Undefined, ellipsis=Undefined, fill=Undefined, fillOpacity=Undefined, - filled=Undefined, font=Undefined, fontSize=Undefined, fontStyle=Undefined, - fontWeight=Undefined, height=Undefined, href=Undefined, interpolate=Undefined, - limit=Undefined, opacity=Undefined, order=Undefined, orient=Undefined, - radius=Undefined, shape=Undefined, size=Undefined, stroke=Undefined, - strokeCap=Undefined, strokeDash=Undefined, strokeDashOffset=Undefined, - strokeJoin=Undefined, strokeMiterLimit=Undefined, strokeOpacity=Undefined, - strokeWidth=Undefined, tension=Undefined, text=Undefined, theta=Undefined, - tooltip=Undefined, width=Undefined, x=Undefined, x2=Undefined, y=Undefined, - y2=Undefined, **kwds): - super(RectConfig, self).__init__(align=align, angle=angle, baseline=baseline, - binSpacing=binSpacing, color=color, - continuousBandSize=continuousBandSize, - cornerRadius=cornerRadius, cursor=cursor, dir=dir, - discreteBandSize=discreteBandSize, dx=dx, dy=dy, - ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, - filled=filled, font=font, fontSize=fontSize, - fontStyle=fontStyle, fontWeight=fontWeight, height=height, - href=href, interpolate=interpolate, limit=limit, - opacity=opacity, order=order, orient=orient, radius=radius, - shape=shape, size=size, stroke=stroke, strokeCap=strokeCap, - strokeDash=strokeDash, strokeDashOffset=strokeDashOffset, - strokeJoin=strokeJoin, strokeMiterLimit=strokeMiterLimit, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, - tension=tension, text=text, theta=theta, tooltip=tooltip, - width=width, x=x, x2=x2, y=y, y2=y2, **kwds) - - -class RepeatMapping(VegaLiteSchema): - """RepeatMapping schema wrapper - - Mapping(required=[]) - - Attributes - ---------- - - column : List(string) - An array of fields to be repeated horizontally. - row : List(string) - An array of fields to be repeated vertically. - """ - _schema = {'$ref': '#/definitions/RepeatMapping'} - - def __init__(self, column=Undefined, row=Undefined, **kwds): - super(RepeatMapping, self).__init__(column=column, row=row, **kwds) - - -class RepeatRef(Field): - """RepeatRef schema wrapper - - Mapping(required=[repeat]) - A ValueDef with optional Condition - Reference to a repeated value. - - Attributes - ---------- - - repeat : enum('row', 'column', 'repeat') - - """ - _schema = {'$ref': '#/definitions/RepeatRef'} - - def __init__(self, repeat=Undefined, **kwds): - super(RepeatRef, self).__init__(repeat=repeat, **kwds) - - -class Resolve(VegaLiteSchema): - """Resolve schema wrapper - - Mapping(required=[]) - Defines how scales, axes, and legends from different specs should be combined. Resolve is a - mapping from ``scale``, ``axis``, and ``legend`` to a mapping from channels to resolutions. 
- - Attributes - ---------- - - axis : :class:`AxisResolveMap` - - legend : :class:`LegendResolveMap` - - scale : :class:`ScaleResolveMap` - - """ - _schema = {'$ref': '#/definitions/Resolve'} - - def __init__(self, axis=Undefined, legend=Undefined, scale=Undefined, **kwds): - super(Resolve, self).__init__(axis=axis, legend=legend, scale=scale, **kwds) - - -class ResolveMode(VegaLiteSchema): - """ResolveMode schema wrapper - - enum('independent', 'shared') - """ - _schema = {'$ref': '#/definitions/ResolveMode'} - - def __init__(self, *args): - super(ResolveMode, self).__init__(*args) - - -class RowColLayoutAlign(VegaLiteSchema): - """RowColLayoutAlign schema wrapper - - Mapping(required=[]) - - Attributes - ---------- - - column : :class:`LayoutAlign` - - row : :class:`LayoutAlign` - - """ - _schema = {'$ref': '#/definitions/RowCol'} - - def __init__(self, column=Undefined, row=Undefined, **kwds): - super(RowColLayoutAlign, self).__init__(column=column, row=row, **kwds) - - -class RowColboolean(VegaLiteSchema): - """RowColboolean schema wrapper - - Mapping(required=[]) - - Attributes - ---------- - - column : boolean - - row : boolean - - """ - _schema = {'$ref': '#/definitions/RowCol'} - - def __init__(self, column=Undefined, row=Undefined, **kwds): - super(RowColboolean, self).__init__(column=column, row=row, **kwds) - - -class RowColnumber(VegaLiteSchema): - """RowColnumber schema wrapper - - Mapping(required=[]) - - Attributes - ---------- - - column : float - - row : float - - """ - _schema = {'$ref': '#/definitions/RowCol'} - - def __init__(self, column=Undefined, row=Undefined, **kwds): - super(RowColnumber, self).__init__(column=column, row=row, **kwds) - - -class Scale(VegaLiteSchema): - """Scale schema wrapper - - Mapping(required=[]) - - Attributes - ---------- - - align : float - The alignment of the steps within the scale range. - - This value must lie in the range ``[0,1]``. A value of ``0.5`` indicates that the - steps should be centered within the range. A value of ``0`` or ``1`` may be used to - shift the bands to one side, say to position them adjacent to an axis. - - **Default value:** ``0.5`` - base : float - The logarithm base of the ``log`` scale (default ``10`` ). - bins : List(float) - An array of bin boundaries over the scale domain. If provided, axes and legends will - use the bin boundaries to inform the choice of tick marks and text labels. - clamp : boolean - If ``true``, values that exceed the data domain are clamped to either the minimum or - maximum range value - - **Default value:** derived from the `scale config - `__ 's ``clamp`` ( - ``true`` by default). - constant : float - A constant determining the slope of the symlog function around zero. Only used for - ``symlog`` scales. - - **Default value:** ``1`` - domain : anyOf(List(float), List(string), List(boolean), List(:class:`DateTime`), - enum('unaggregated'), :class:`SelectionDomain`) - Customized domain values. - - For *quantitative* fields, ``domain`` can take the form of a two-element array with - minimum and maximum values. `Piecewise scales - `__ can be created by - providing a ``domain`` with more than two entries. - If the input field is aggregated, ``domain`` can also be a string value - ``"unaggregated"``, indicating that the domain should include the raw data values - prior to the aggregation. - - For *temporal* fields, ``domain`` can be a two-element array minimum and maximum - values, in the form of either timestamps or the `DateTime definition objects - `__. 
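# Illustrative example (not part of the generated schema module): Resolve
# (above) is usually set through resolve_scale / resolve_axis / resolve_legend
# on a compound chart. A sketch assuming altair 3.x; data is illustrative.
import altair as alt
import pandas as pd

df = pd.DataFrame({"x": [1, 2, 3], "a": [1, 4, 9], "b": [100, 50, 25]})
base = alt.Chart(df).encode(x="x:Q")
layered = alt.layer(
    base.mark_line(color="steelblue").encode(y="a:Q"),
    base.mark_line(color="orange").encode(y="b:Q"),
).resolve_scale(y="independent")  # each layer gets its own y scale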
- - For *ordinal* and *nominal* fields, ``domain`` can be an array that lists valid - input values. - - The ``selection`` property can be used to `interactively determine - `__ the scale - domain. - exponent : float - The exponent of the ``pow`` scale. - interpolate : anyOf(:class:`ScaleInterpolate`, :class:`ScaleInterpolateParams`) - The interpolation method for range values. By default, a general interpolator for - numbers, dates, strings and colors (in HCL space) is used. For color ranges, this - property allows interpolation in alternative color spaces. Legal values include - ``rgb``, ``hsl``, ``hsl-long``, ``lab``, ``hcl``, ``hcl-long``, ``cubehelix`` and - ``cubehelix-long`` ('-long' variants use longer paths in polar coordinate spaces). - If object-valued, this property accepts an object with a string-valued *type* - property and an optional numeric *gamma* property applicable to rgb and cubehelix - interpolators. For more, see the `d3-interpolate documentation - `__. - - - * **Default value:** ``hcl`` - nice : anyOf(boolean, float, :class:`NiceTime`, Mapping(required=[interval, step])) - Extending the domain so that it starts and ends on nice round values. This method - typically modifies the scale’s domain, and may only extend the bounds to the nearest - round value. Nicing is useful if the domain is computed from data and may be - irregular. For example, for a domain of *[0.201479…, 0.996679…]*, a nice domain - might be *[0.2, 1.0]*. - - For quantitative scales such as linear, ``nice`` can be either a boolean flag or a - number. If ``nice`` is a number, it will represent a desired tick count. This allows - greater control over the step size used to extend the bounds, guaranteeing that the - returned ticks will exactly cover the domain. - - For temporal fields with time and utc scales, the ``nice`` value can be a string - indicating the desired time interval. Legal values are ``"millisecond"``, - ``"second"``, ``"minute"``, ``"hour"``, ``"day"``, ``"week"``, ``"month"``, and - ``"year"``. Alternatively, ``time`` and ``utc`` scales can accept an object-valued - interval specifier of the form ``{"interval": "month", "step": 3}``, which includes - a desired number of interval steps. Here, the domain would snap to quarter (Jan, - Apr, Jul, Oct) boundaries. - - **Default value:** ``true`` for unbinned *quantitative* fields; ``false`` otherwise. - padding : float - For * `continuous `__ * - scales, expands the scale domain to accommodate the specified number of pixels on - each of the scale range. The scale range must represent pixels for this parameter to - function as intended. Padding adjustment is performed prior to all other - adjustments, including the effects of the  ``zero``,  ``nice``,  ``domainMin``, and - ``domainMax``  properties. - - For * `band `__ * scales, - shortcut for setting ``paddingInner`` and ``paddingOuter`` to the same value. - - For * `point `__ * scales, - alias for ``paddingOuter``. - - **Default value:** For *continuous* scales, derived from the `scale config - `__ 's - ``continuousPadding``. - For *band and point* scales, see ``paddingInner`` and ``paddingOuter``. By default, - Vega-Lite sets padding such that *width/height = number of unique values * step*. - paddingInner : float - The inner padding (spacing) within each band step of band scales, as a fraction of - the step size. This value must lie in the range [0,1]. - - For point scale, this property is invalid as point scales do not have internal band - widths (only step sizes between bands). 
- - **Default value:** derived from the `scale config - `__ 's - ``bandPaddingInner``. - paddingOuter : float - The outer padding (spacing) at the ends of the range of band and point scales, - as a fraction of the step size. This value must lie in the range [0,1]. - - **Default value:** derived from the `scale config - `__ 's ``bandPaddingOuter`` - for band scales and ``pointPadding`` for point scales. - By default, Vega-Lite sets outer padding such that *width/height = number of unique - values * step*. - range : anyOf(List(float), List(string), string) - The range of the scale. One of: - - - A string indicating a `pre-defined named scale range - `__ (e.g., example, - ``"symbol"``, or ``"diverging"`` ). - - For `continuous scales - `__, two-element array - indicating minimum and maximum values, or an array with more than two entries for - specifying a `piecewise scale - `__. - - For `discrete `__ and - `discretizing `__ - scales, an array of desired output values. - - **Notes:** - - 1) For color scales you can also specify a color `scheme - `__ instead of ``range``. - - 2) Any directly specified ``range`` for ``x`` and ``y`` channels will be ignored. - Range can be customized via the view's corresponding `size - `__ ( ``width`` and ``height`` ) or - via `range steps and paddings properties <#range-step>`__ for `band <#band>`__ and - `point <#point>`__ scales. - rangeStep : anyOf(float, None) - The distance between the starts of adjacent bands or points in `band - `__ and `point - `__ scales. - - If ``rangeStep`` is ``null`` or if the view contains the scale's corresponding `size - `__ ( ``width`` for ``x`` scales - and ``height`` for ``y`` scales), ``rangeStep`` will be automatically determined to - fit the size of the view. - - **Default value:** derived the `scale config - `__ 's - ``textXRangeStep`` ( ``90`` by default) for x-scales of ``text`` marks and - ``rangeStep`` ( ``21`` by default) for x-scales of other marks and y-scales. - - **Warning** : If ``rangeStep`` is ``null`` and the cardinality of the scale's domain - is higher than ``width`` or ``height``, the rangeStep might become less than one - pixel and the mark might not appear correctly. - round : boolean - If ``true``, rounds numeric output values to integers. This can be helpful for - snapping to the pixel grid. - - **Default value:** ``false``. - scheme : anyOf(string, :class:`SchemeParams`) - A string indicating a color `scheme - `__ name (e.g., - ``"category10"`` or ``"blues"`` ) or a `scheme parameter object - `__. - - Discrete color schemes may be used with `discrete - `__ or `discretizing - `__ scales. - Continuous color schemes are intended for use with color scales. - - For the full list of supported schemes, please refer to the `Vega Scheme - `__ reference. - type : :class:`ScaleType` - The type of scale. Vega-Lite supports the following categories of scale types: - - 1) `Continuous Scales - `__ -- mapping - continuous domains to continuous output ranges ( `"linear" - `__, `"pow" - `__, `"sqrt" - `__, `"symlog" - `__, `"log" - `__, `"time" - `__, `"utc" - `__. - - 2) `Discrete Scales `__ - -- mapping discrete domains to discrete ( `"ordinal" - `__ ) or continuous ( - `"band" `__ and `"point" - `__ ) output ranges. - - 3) `Discretizing Scales - `__ -- mapping - continuous domains to discrete output ranges `"bin-ordinal" - `__, `"quantile" - `__, `"quantize" - `__ and `"threshold" - `__. - - **Default value:** please see the `scale type table - `__. 
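# Illustrative example (not part of the generated schema module): a sketch of
# common Scale settings documented above (type, domain, clamp, nice, scheme),
# assuming altair 3.x; field names are illustrative.
import altair as alt

y_log = alt.Y("value:Q",
              scale=alt.Scale(type="log", domain=[1, 10000],
                              clamp=True, nice=False))
color = alt.Color("value:Q", scale=alt.Scale(scheme="viridis"))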
- zero : boolean - If ``true``, ensures that a zero baseline value is included in the scale domain. - - **Default value:** ``true`` for x and y channels if the quantitative field is not - binned and no custom ``domain`` is provided; ``false`` otherwise. - - **Note:** Log, time, and utc scales do not support ``zero``. - """ - _schema = {'$ref': '#/definitions/Scale'} - - def __init__(self, align=Undefined, base=Undefined, bins=Undefined, clamp=Undefined, - constant=Undefined, domain=Undefined, exponent=Undefined, interpolate=Undefined, - nice=Undefined, padding=Undefined, paddingInner=Undefined, paddingOuter=Undefined, - range=Undefined, rangeStep=Undefined, round=Undefined, scheme=Undefined, - type=Undefined, zero=Undefined, **kwds): - super(Scale, self).__init__(align=align, base=base, bins=bins, clamp=clamp, constant=constant, - domain=domain, exponent=exponent, interpolate=interpolate, - nice=nice, padding=padding, paddingInner=paddingInner, - paddingOuter=paddingOuter, range=range, rangeStep=rangeStep, - round=round, scheme=scheme, type=type, zero=zero, **kwds) - - -class ScaleConfig(VegaLiteSchema): - """ScaleConfig schema wrapper - - Mapping(required=[]) - - Attributes - ---------- - - bandPaddingInner : float - Default inner padding for ``x`` and ``y`` band-ordinal scales. - - **Default value:** - - - * ``barBandPaddingInner`` for bar marks ( ``0.1`` by default) - * ``rectBandPaddingInner`` for rect and other marks ( ``0`` by default) - bandPaddingOuter : float - Default outer padding for ``x`` and ``y`` band-ordinal scales. - - **Default value:** ``paddingInner/2`` (which makes *width/height = number of unique - values * step* ) - barBandPaddingInner : float - Default inner padding for ``x`` and ``y`` band-ordinal scales of ``"bar"`` marks. - - **Default value:** ``0.1`` - barBandPaddingOuter : float - Default outer padding for ``x`` and ``y`` band-ordinal scales of ``"bar"`` marks. - If not specified, by default, band scale's paddingOuter is paddingInner/2. - clamp : boolean - If true, values that exceed the data domain are clamped to either the minimum or - maximum range value - continuousPadding : float - Default padding for continuous scales. - - **Default:** ``5`` for continuous x-scale of a vertical bar and continuous y-scale - of a horizontal bar.; ``0`` otherwise. - maxBandSize : float - The default max value for mapping quantitative fields to bar's size/bandSize. - - If undefined (default), we will use the scale's ``rangeStep`` - 1. - maxFontSize : float - The default max value for mapping quantitative fields to text's size/fontSize. - - **Default value:** ``40`` - maxOpacity : float - Default max opacity for mapping a field to opacity. - - **Default value:** ``0.8`` - maxSize : float - Default max value for point size scale. - maxStrokeWidth : float - Default max strokeWidth for the scale of strokeWidth for rule and line marks and of - size for trail marks. - - **Default value:** ``4`` - minBandSize : float - The default min value for mapping quantitative fields to bar and tick's - size/bandSize scale with zero=false. - - **Default value:** ``2`` - minFontSize : float - The default min value for mapping quantitative fields to tick's size/fontSize scale - with zero=false - - **Default value:** ``8`` - minOpacity : float - Default minimum opacity for mapping a field to opacity. - - **Default value:** ``0.3`` - minSize : float - Default minimum value for point size scale with zero=false. 
- - **Default value:** ``9`` - minStrokeWidth : float - Default minimum strokeWidth for the scale of strokeWidth for rule and line marks and - of size for trail marks with zero=false. - - **Default value:** ``1`` - pointPadding : float - Default outer padding for ``x`` and ``y`` point-ordinal scales. - - **Default value:** ``0.5`` (which makes *width/height = number of unique values * - step* ) - quantileCount : float - Default range cardinality for `quantile - `__ scale. - - **Default value:** ``4`` - quantizeCount : float - Default range cardinality for `quantize - `__ scale. - - **Default value:** ``4`` - rangeStep : anyOf(float, None) - Default range step for band and point scales of (1) the ``y`` channel - and (2) the ``x`` channel when the mark is not ``text``. - - **Default value:** ``20`` - rectBandPaddingInner : float - Default inner padding for ``x`` and ``y`` band-ordinal scales of ``"rect"`` marks. - - **Default value:** ``0`` - rectBandPaddingOuter : float - Default outer padding for ``x`` and ``y`` band-ordinal scales of ``"rect"`` marks. - If not specified, by default, band scale's paddingOuter is paddingInner/2. - round : boolean - If true, rounds numeric output values to integers. - This can be helpful for snapping to the pixel grid. - (Only available for ``x``, ``y``, and ``size`` scales.) - textXRangeStep : float - Default range step for ``x`` band and point scales of text marks. - - **Default value:** ``90`` - useUnaggregatedDomain : boolean - Use the source data range before aggregation as scale domain instead of aggregated - data for aggregate axis. - - This is equivalent to setting ``domain`` to ``"unaggregate"`` for aggregated - *quantitative* fields by default. - - This property only works with aggregate functions that produce values within the raw - data domain ( ``"mean"``, ``"average"``, ``"median"``, ``"q1"``, ``"q3"``, - ``"min"``, ``"max"`` ). For other aggregations that produce values outside of the - raw data domain (e.g. ``"count"``, ``"sum"`` ), this property is ignored. 
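# Illustrative example (not part of the generated schema module): ScaleConfig
# values are set globally through `configure_scale`. A sketch assuming
# altair 3.x; data is illustrative.
import altair as alt
import pandas as pd

df = pd.DataFrame({"a": ["x", "y", "z"], "b": [1, 2, 3]})
configured = alt.Chart(df).mark_bar().encode(x="a:N", y="b:Q").configure_scale(
    bandPaddingInner=0.2,  # inner padding for band scales
    rangeStep=25,          # default band/point step, in pixels
)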
- - **Default value:** ``false`` - """ - _schema = {'$ref': '#/definitions/ScaleConfig'} - - def __init__(self, bandPaddingInner=Undefined, bandPaddingOuter=Undefined, - barBandPaddingInner=Undefined, barBandPaddingOuter=Undefined, clamp=Undefined, - continuousPadding=Undefined, maxBandSize=Undefined, maxFontSize=Undefined, - maxOpacity=Undefined, maxSize=Undefined, maxStrokeWidth=Undefined, - minBandSize=Undefined, minFontSize=Undefined, minOpacity=Undefined, minSize=Undefined, - minStrokeWidth=Undefined, pointPadding=Undefined, quantileCount=Undefined, - quantizeCount=Undefined, rangeStep=Undefined, rectBandPaddingInner=Undefined, - rectBandPaddingOuter=Undefined, round=Undefined, textXRangeStep=Undefined, - useUnaggregatedDomain=Undefined, **kwds): - super(ScaleConfig, self).__init__(bandPaddingInner=bandPaddingInner, - bandPaddingOuter=bandPaddingOuter, - barBandPaddingInner=barBandPaddingInner, - barBandPaddingOuter=barBandPaddingOuter, clamp=clamp, - continuousPadding=continuousPadding, maxBandSize=maxBandSize, - maxFontSize=maxFontSize, maxOpacity=maxOpacity, - maxSize=maxSize, maxStrokeWidth=maxStrokeWidth, - minBandSize=minBandSize, minFontSize=minFontSize, - minOpacity=minOpacity, minSize=minSize, - minStrokeWidth=minStrokeWidth, pointPadding=pointPadding, - quantileCount=quantileCount, quantizeCount=quantizeCount, - rangeStep=rangeStep, - rectBandPaddingInner=rectBandPaddingInner, - rectBandPaddingOuter=rectBandPaddingOuter, round=round, - textXRangeStep=textXRangeStep, - useUnaggregatedDomain=useUnaggregatedDomain, **kwds) - - -class ScaleInterpolate(VegaLiteSchema): - """ScaleInterpolate schema wrapper - - enum('rgb', 'lab', 'hcl', 'hsl', 'hsl-long', 'hcl-long', 'cubehelix', 'cubehelix-long') - """ - _schema = {'$ref': '#/definitions/ScaleInterpolate'} - - def __init__(self, *args): - super(ScaleInterpolate, self).__init__(*args) - - -class ScaleInterpolateParams(VegaLiteSchema): - """ScaleInterpolateParams schema wrapper - - Mapping(required=[type]) - - Attributes - ---------- - - type : enum('rgb', 'cubehelix', 'cubehelix-long') - - gamma : float - - """ - _schema = {'$ref': '#/definitions/ScaleInterpolateParams'} - - def __init__(self, type=Undefined, gamma=Undefined, **kwds): - super(ScaleInterpolateParams, self).__init__(type=type, gamma=gamma, **kwds) - - -class ScaleResolveMap(VegaLiteSchema): - """ScaleResolveMap schema wrapper - - Mapping(required=[]) - - Attributes - ---------- - - color : :class:`ResolveMode` - - fill : :class:`ResolveMode` - - fillOpacity : :class:`ResolveMode` - - opacity : :class:`ResolveMode` - - shape : :class:`ResolveMode` - - size : :class:`ResolveMode` - - stroke : :class:`ResolveMode` - - strokeOpacity : :class:`ResolveMode` - - strokeWidth : :class:`ResolveMode` - - x : :class:`ResolveMode` - - y : :class:`ResolveMode` - - """ - _schema = {'$ref': '#/definitions/ScaleResolveMap'} - - def __init__(self, color=Undefined, fill=Undefined, fillOpacity=Undefined, opacity=Undefined, - shape=Undefined, size=Undefined, stroke=Undefined, strokeOpacity=Undefined, - strokeWidth=Undefined, x=Undefined, y=Undefined, **kwds): - super(ScaleResolveMap, self).__init__(color=color, fill=fill, fillOpacity=fillOpacity, - opacity=opacity, shape=shape, size=size, stroke=stroke, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, x=x, - y=y, **kwds) - - -class ScaleType(VegaLiteSchema): - """ScaleType schema wrapper - - enum('linear', 'log', 'pow', 'sqrt', 'symlog', 'time', 'utc', 'quantile', 'quantize', - 'threshold', 'bin-ordinal', 'ordinal', 'point', 
'band') - """ - _schema = {'$ref': '#/definitions/ScaleType'} - - def __init__(self, *args): - super(ScaleType, self).__init__(*args) - - -class SchemeConfig(RangeConfigValue): - """SchemeConfig schema wrapper - - Mapping(required=[scheme]) - - Attributes - ---------- - - scheme : string - - count : float - - extent : List(float) - - """ - _schema = {'$ref': '#/definitions/SchemeConfig'} - - def __init__(self, scheme=Undefined, count=Undefined, extent=Undefined, **kwds): - super(SchemeConfig, self).__init__(scheme=scheme, count=count, extent=extent, **kwds) - - -class SchemeParams(VegaLiteSchema): - """SchemeParams schema wrapper - - Mapping(required=[name]) - - Attributes - ---------- - - name : string - A color scheme name for ordinal scales (e.g., ``"category10"`` or ``"blues"`` ). - - For the full list of supported schemes, please refer to the `Vega Scheme - `__ reference. - count : float - The number of colors to use in the scheme. This can be useful for scale types such - as ``"quantize"``, which use the length of the scale range to determine the number - of discrete bins for the scale domain. - extent : List(float) - The extent of the color range to use. For example ``[0.2, 1]`` will rescale the - color scheme such that color values in the range *[0, 0.2)* are excluded from the - scheme. - """ - _schema = {'$ref': '#/definitions/SchemeParams'} - - def __init__(self, name=Undefined, count=Undefined, extent=Undefined, **kwds): - super(SchemeParams, self).__init__(name=name, count=count, extent=extent, **kwds) - - -class SecondaryFieldDef(VegaLiteSchema): - """SecondaryFieldDef schema wrapper - - Mapping(required=[]) - A field definition of a secondary channel that shares a scale with another primary channel. - For example, ``x2``, ``xError`` and ``xError2`` share the same scale with ``x``. - - Attributes - ---------- - - aggregate : :class:`Aggregate` - Aggregation function for the field - (e.g., ``mean``, ``sum``, ``median``, ``min``, ``max``, ``count`` ). - - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - bin : None - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating that the - data for ``x`` or ``y`` channel are binned before they are imported into Vega-Lite ( - ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value - or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. - - **Notes:** - 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access nested - objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). - If field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). - See more details about escaping in the `field documentation - `__. - 2) ``field`` is not required if ``aggregate`` is ``count``. 
- timeUnit : :class:`TimeUnit` - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field. - or `a temporal field that gets casted as ordinal - `__. - - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(string, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - """ - _schema = {'$ref': '#/definitions/SecondaryFieldDef'} - - def __init__(self, aggregate=Undefined, bin=Undefined, field=Undefined, timeUnit=Undefined, - title=Undefined, **kwds): - super(SecondaryFieldDef, self).__init__(aggregate=aggregate, bin=bin, field=field, - timeUnit=timeUnit, title=title, **kwds) - - -class SelectionConfig(VegaLiteSchema): - """SelectionConfig schema wrapper - - Mapping(required=[]) - - Attributes - ---------- - - interval : :class:`IntervalSelectionConfig` - The default definition for an `interval - `__ selection. All - properties and transformations - for an interval selection definition (except ``type`` ) may be specified here. - - For instance, setting ``interval`` to ``{"translate": false}`` disables the ability - to move - interval selections by default. - multi : :class:`MultiSelectionConfig` - The default definition for a `multi - `__ selection. All - properties and transformations - for a multi selection definition (except ``type`` ) may be specified here. - - For instance, setting ``multi`` to ``{"toggle": "event.altKey"}`` adds additional - values to - multi selections when clicking with the alt-key pressed by default. - single : :class:`SingleSelectionConfig` - The default definition for a `single - `__ selection. All - properties and transformations - for a single selection definition (except ``type`` ) may be specified here. - - For instance, setting ``single`` to ``{"on": "dblclick"}`` populates single - selections on double-click by default. - """ - _schema = {'$ref': '#/definitions/SelectionConfig'} - - def __init__(self, interval=Undefined, multi=Undefined, single=Undefined, **kwds): - super(SelectionConfig, self).__init__(interval=interval, multi=multi, single=single, **kwds) - - -class SelectionDef(VegaLiteSchema): - """SelectionDef schema wrapper - - anyOf(:class:`SingleSelection`, :class:`MultiSelection`, :class:`IntervalSelection`) - """ - _schema = {'$ref': '#/definitions/SelectionDef'} - - def __init__(self, *args, **kwds): - super(SelectionDef, self).__init__(*args, **kwds) - - -class IntervalSelection(SelectionDef): - """IntervalSelection schema wrapper - - Mapping(required=[type]) - - Attributes - ---------- - - type : enum('interval') - Determines the default event processing and data query for the selection. 
Vega-Lite - currently supports three selection types: - - - * ``single`` -- to select a single discrete data value on ``click``. - * ``multi`` -- to select multiple discrete data value; the first value is selected - on ``click`` and additional values toggled on shift- ``click``. - * ``interval`` -- to select a continuous range of data values on ``drag``. - bind : enum('scales') - Establishes a two-way binding between the interval selection and the scales - used within the same view. This allows a user to interactively pan and - zoom the view. - - **See also:** `bind `__ - documentation. - clear : anyOf(:class:`EventStream`, boolean) - Clears the selection, emptying it of all values. Can be an - `EventStream `__ or ``false`` to - disable. - - **Default value:** ``dblclick``. - - **See also:** `clear `__ - documentation. - empty : enum('all', 'none') - By default, ``all`` data values are considered to lie within an empty selection. - When set to ``none``, empty selections contain no data values. - encodings : List(:class:`SingleDefUnitChannel`) - An array of encoding channels. The corresponding data field values - must match for a data tuple to fall within the selection. - - **See also:** `encodings `__ - documentation. - fields : List(:class:`FieldName`) - An array of field names whose values must match for a data tuple to - fall within the selection. - - **See also:** `fields `__ - documentation. - init : :class:`SelectionInitIntervalMapping` - Initialize the selection with a mapping between `projected channels or field names - `__ and arrays of - initial values. - - **See also:** `init `__ - documentation. - mark : :class:`BrushConfig` - An interval selection also adds a rectangle mark to depict the - extents of the interval. The ``mark`` property can be used to customize the - appearance of the mark. - - **See also:** `mark `__ - documentation. - on : :class:`EventStream` - A `Vega event stream `__ (object or - selector) that triggers the selection. - For interval selections, the event stream must specify a `start and end - `__. - resolve : :class:`SelectionResolution` - With layered and multi-view displays, a strategy that determines how - selections' data queries are resolved when applied in a filter transform, - conditional encoding rule, or scale domain. - - **See also:** `resolve - `__ documentation. - translate : anyOf(string, boolean) - When truthy, allows a user to interactively move an interval selection - back-and-forth. Can be ``true``, ``false`` (to disable panning), or a - `Vega event stream definition `__ - which must include a start and end event to trigger continuous panning. - - **Default value:** ``true``, which corresponds to - ``[mousedown, window:mouseup] > window:mousemove!`` which corresponds to - clicks and dragging within an interval selection to reposition it. - - **See also:** `translate `__ - documentation. - zoom : anyOf(string, boolean) - When truthy, allows a user to interactively resize an interval selection. - Can be ``true``, ``false`` (to disable zooming), or a `Vega event stream - definition `__. Currently, - only ``wheel`` events are supported. - - **Default value:** ``true``, which corresponds to ``wheel!``. - - **See also:** `zoom `__ - documentation. 
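-
-    Examples
-    --------
-    A minimal, hypothetical sketch of how this definition is normally reached through
-    Altair's higher-level API (the ``alt.selection_interval`` helper and the
-    ``data/cars.json`` URL are illustrative assumptions, not part of this schema)::
-
-        import altair as alt
-
-        # Build an interval ("brush") selection restricted to the x channel.
-        brush = alt.selection_interval(encodings=['x'])
-
-        # Gray out points that fall outside the brushed interval.
-        chart = alt.Chart('data/cars.json').mark_point().encode(
-            x='Horsepower:Q',
-            y='Miles_per_Gallon:Q',
-            color=alt.condition(brush, alt.value('steelblue'), alt.value('lightgray')),
-        ).add_selection(brush)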
- """ - _schema = {'$ref': '#/definitions/IntervalSelection'} - - def __init__(self, type=Undefined, bind=Undefined, clear=Undefined, empty=Undefined, - encodings=Undefined, fields=Undefined, init=Undefined, mark=Undefined, on=Undefined, - resolve=Undefined, translate=Undefined, zoom=Undefined, **kwds): - super(IntervalSelection, self).__init__(type=type, bind=bind, clear=clear, empty=empty, - encodings=encodings, fields=fields, init=init, - mark=mark, on=on, resolve=resolve, translate=translate, - zoom=zoom, **kwds) - - -class MultiSelection(SelectionDef): - """MultiSelection schema wrapper - - Mapping(required=[type]) - - Attributes - ---------- - - type : enum('multi') - Determines the default event processing and data query for the selection. Vega-Lite - currently supports three selection types: - - - * ``single`` -- to select a single discrete data value on ``click``. - * ``multi`` -- to select multiple discrete data value; the first value is selected - on ``click`` and additional values toggled on shift- ``click``. - * ``interval`` -- to select a continuous range of data values on ``drag``. - clear : anyOf(:class:`EventStream`, boolean) - Clears the selection, emptying it of all values. Can be an - `EventStream `__ or ``false`` to - disable. - - **Default value:** ``dblclick``. - - **See also:** `clear `__ - documentation. - empty : enum('all', 'none') - By default, ``all`` data values are considered to lie within an empty selection. - When set to ``none``, empty selections contain no data values. - encodings : List(:class:`SingleDefUnitChannel`) - An array of encoding channels. The corresponding data field values - must match for a data tuple to fall within the selection. - - **See also:** `encodings `__ - documentation. - fields : List(:class:`FieldName`) - An array of field names whose values must match for a data tuple to - fall within the selection. - - **See also:** `fields `__ - documentation. - init : anyOf(:class:`SelectionInitMapping`, List(:class:`SelectionInitMapping`)) - Initialize the selection with a mapping between `projected channels or field names - `__ and an initial - value (or array of values). - - **See also:** `init `__ - documentation. - nearest : boolean - When true, an invisible voronoi diagram is computed to accelerate discrete - selection. The data value *nearest* the mouse cursor is added to the selection. - - **See also:** `nearest `__ - documentation. - on : :class:`EventStream` - A `Vega event stream `__ (object or - selector) that triggers the selection. - For interval selections, the event stream must specify a `start and end - `__. - resolve : :class:`SelectionResolution` - With layered and multi-view displays, a strategy that determines how - selections' data queries are resolved when applied in a filter transform, - conditional encoding rule, or scale domain. - - **See also:** `resolve - `__ documentation. - toggle : anyOf(string, boolean) - Controls whether data values should be toggled or only ever inserted into - multi selections. Can be ``true``, ``false`` (for insertion only), or a - `Vega expression `__. - - **Default value:** ``true``, which corresponds to ``event.shiftKey`` (i.e., - data values are toggled when a user interacts with the shift-key pressed). - - **See also:** `toggle `__ - documentation. 
- """ - _schema = {'$ref': '#/definitions/MultiSelection'} - - def __init__(self, type=Undefined, clear=Undefined, empty=Undefined, encodings=Undefined, - fields=Undefined, init=Undefined, nearest=Undefined, on=Undefined, resolve=Undefined, - toggle=Undefined, **kwds): - super(MultiSelection, self).__init__(type=type, clear=clear, empty=empty, encodings=encodings, - fields=fields, init=init, nearest=nearest, on=on, - resolve=resolve, toggle=toggle, **kwds) - - -class SelectionDomain(VegaLiteSchema): - """SelectionDomain schema wrapper - - anyOf(Mapping(required=[selection]), Mapping(required=[selection])) - """ - _schema = {'$ref': '#/definitions/SelectionDomain'} - - def __init__(self, *args, **kwds): - super(SelectionDomain, self).__init__(*args, **kwds) - - -class SelectionInit(VegaLiteSchema): - """SelectionInit schema wrapper - - anyOf(boolean, float, string, :class:`DateTime`) - """ - _schema = {'$ref': '#/definitions/SelectionInit'} - - def __init__(self, *args, **kwds): - super(SelectionInit, self).__init__(*args, **kwds) - - -class DateTime(SelectionInit): - """DateTime schema wrapper - - Mapping(required=[]) - Object for defining datetime in Vega-Lite Filter. - If both month and quarter are provided, month has higher precedence. - ``day`` cannot be combined with other date. - We accept string for month and day names. - - Attributes - ---------- - - date : float - Integer value representing the date from 1-31. - day : anyOf(:class:`Day`, string) - Value representing the day of a week. This can be one of: (1) integer value -- - ``1`` represents Monday; (2) case-insensitive day name (e.g., ``"Monday"`` ); (3) - case-insensitive, 3-character short day name (e.g., ``"Mon"`` ). :raw-html:`
<br/>`
-        **Warning:** A DateTime definition object with ``day`` should not be combined
-        with ``year``, ``quarter``, ``month``, or ``date``.
-    hours : float
-        Integer value representing the hour of a day from 0-23.
-    milliseconds : float
-        Integer value representing the millisecond segment of time.
-    minutes : float
-        Integer value representing the minute segment of time from 0-59.
-    month : anyOf(:class:`Month`, string)
-        One of: (1) integer value representing the month from ``1`` - ``12``. ``1``
-        represents January; (2) case-insensitive month name (e.g., ``"January"`` ); (3)
-        case-insensitive, 3-character short month name (e.g., ``"Jan"`` ).
-    quarter : float
-        Integer value representing the quarter of the year (from 1-4).
-    seconds : float
-        Integer value representing the second segment (0-59) of a time value.
-    utc : boolean
-        A boolean flag indicating if the date time is in UTC time. If false, the date time is in
-        local time.
-    year : float
-        Integer value representing the year.
-    """
-    _schema = {'$ref': '#/definitions/DateTime'}
-
-    def __init__(self, date=Undefined, day=Undefined, hours=Undefined, milliseconds=Undefined,
-                 minutes=Undefined, month=Undefined, quarter=Undefined, seconds=Undefined,
-                 utc=Undefined, year=Undefined, **kwds):
-        super(DateTime, self).__init__(date=date, day=day, hours=hours, milliseconds=milliseconds,
-                                       minutes=minutes, month=month, quarter=quarter, seconds=seconds,
-                                       utc=utc, year=year, **kwds)
-
-
-class SelectionInitInterval(VegaLiteSchema):
-    """SelectionInitInterval schema wrapper
-
-    anyOf(List([boolean, boolean]), List([float, float]), List([string, string]),
-    List([:class:`DateTime`, :class:`DateTime`]))
-    """
-    _schema = {'$ref': '#/definitions/SelectionInitInterval'}
-
-    def __init__(self, *args, **kwds):
-        super(SelectionInitInterval, self).__init__(*args, **kwds)
-
-
-class SelectionInitIntervalMapping(VegaLiteSchema):
-    """SelectionInitIntervalMapping schema wrapper
-
-    Mapping(required=[])
-    """
-    _schema = {'$ref': '#/definitions/SelectionInitIntervalMapping'}
-
-    def __init__(self, **kwds):
-        super(SelectionInitIntervalMapping, self).__init__(**kwds)
-
-
-class SelectionInitMapping(VegaLiteSchema):
-    """SelectionInitMapping schema wrapper
-
-    Mapping(required=[])
-    """
-    _schema = {'$ref': '#/definitions/SelectionInitMapping'}
-
-    def __init__(self, **kwds):
-        super(SelectionInitMapping, self).__init__(**kwds)
-
-
-class SelectionOperand(VegaLiteSchema):
-    """SelectionOperand schema wrapper
-
-    anyOf(:class:`SelectionNot`, :class:`SelectionAnd`, :class:`SelectionOr`, string)
-    """
-    _schema = {'$ref': '#/definitions/SelectionOperand'}
-
-    def __init__(self, *args, **kwds):
-        super(SelectionOperand, self).__init__(*args, **kwds)
-
-
-class SelectionAnd(SelectionOperand):
-    """SelectionAnd schema wrapper
-
-    Mapping(required=[and])
-
-    Attributes
-    ----------
-
-    and : List(:class:`SelectionOperand`)
-
-    """
-    _schema = {'$ref': '#/definitions/SelectionAnd'}
-
-    def __init__(self, **kwds):
-        super(SelectionAnd, self).__init__(**kwds)
-
-
-class SelectionNot(SelectionOperand):
-    """SelectionNot schema wrapper
-
-    Mapping(required=[not])
-
-    Attributes
-    ----------
-
-    not : :class:`SelectionOperand`
-
-    """
-    _schema = {'$ref': '#/definitions/SelectionNot'}
-
-    def __init__(self, **kwds):
-        super(SelectionNot, self).__init__(**kwds)
-
-
-class SelectionOr(SelectionOperand):
-    """SelectionOr schema wrapper
-
-    Mapping(required=[or])
-
-    Attributes
-    ----------
-
-    or : List(:class:`SelectionOperand`)
-
-    """
-    _schema = {'$ref': 
'#/definitions/SelectionOr'} - - def __init__(self, **kwds): - super(SelectionOr, self).__init__(**kwds) - - -class SelectionPredicate(Predicate): - """SelectionPredicate schema wrapper - - Mapping(required=[selection]) - - Attributes - ---------- - - selection : :class:`SelectionOperand` - Filter using a selection name. - """ - _schema = {'$ref': '#/definitions/SelectionPredicate'} - - def __init__(self, selection=Undefined, **kwds): - super(SelectionPredicate, self).__init__(selection=selection, **kwds) - - -class SelectionResolution(VegaLiteSchema): - """SelectionResolution schema wrapper - - enum('global', 'union', 'intersect') - """ - _schema = {'$ref': '#/definitions/SelectionResolution'} - - def __init__(self, *args): - super(SelectionResolution, self).__init__(*args) - - -class SequenceGenerator(Generator): - """SequenceGenerator schema wrapper - - Mapping(required=[sequence]) - - Attributes - ---------- - - sequence : :class:`SequenceParams` - Generate a sequence of numbers. - name : string - Provide a placeholder name and bind data at runtime. - """ - _schema = {'$ref': '#/definitions/SequenceGenerator'} - - def __init__(self, sequence=Undefined, name=Undefined, **kwds): - super(SequenceGenerator, self).__init__(sequence=sequence, name=name, **kwds) - - -class SequenceParams(VegaLiteSchema): - """SequenceParams schema wrapper - - Mapping(required=[start, stop]) - - Attributes - ---------- - - start : float - The starting value of the sequence (inclusive). - stop : float - The ending value of the sequence (exclusive). - step : float - The step value between sequence entries. - - **Default value:** ``1`` - as : :class:`FieldName` - The name of the generated sequence field. - - **Default value:** ``"data"`` - """ - _schema = {'$ref': '#/definitions/SequenceParams'} - - def __init__(self, start=Undefined, stop=Undefined, step=Undefined, **kwds): - super(SequenceParams, self).__init__(start=start, stop=stop, step=step, **kwds) - - -class ShapeFieldDefWithCondition(VegaLiteSchema): - """ShapeFieldDefWithCondition schema wrapper - - Mapping(required=[type]) - A FieldDef with Condition :raw-html:`` - - Attributes - ---------- - - type : :class:`TypeForShape` - The encoded field's type of measurement ( ``"quantitative"``, ``"temporal"``, - ``"ordinal"``, or ``"nominal"`` ). - It can also be a ``"geojson"`` type for encoding `'geoshape' - `__. - - **Note:** - - - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * Data ``type`` describes the semantics of the data rather than the primitive data - types ( ``number``, ``string``, etc.). The same primitive data type can have - different types of measurement. For example, numeric data can represent - quantitative, ordinal, or nominal data. - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (for using a temporal scale) or `"ordinal" (for using - an ordinal scale) `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat", "type": "quantitative"}``. 
The ``"type"`` of the aggregate output - is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - aggregate : :class:`Aggregate` - Aggregation function for the field - (e.g., ``mean``, ``sum``, ``median``, ``min``, ``max``, ``count`` ). - - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - bin : anyOf(boolean, :class:`BinParams`, None) - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating that the - data for ``x`` or ``y`` channel are binned before they are imported into Vega-Lite ( - ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. - condition : anyOf(:class:`ConditionalStringValueDef`, - List(:class:`ConditionalStringValueDef`)) - One or more value definition(s) with `a selection or a test predicate - `__. - - **Note:** A field definition's ``condition`` property can only contain `conditional - value definitions `__ - since Vega-Lite only allows at most one encoded field per encoding channel. - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value - or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. - - **Notes:** - 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access nested - objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). - If field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). - See more details about escaping in the `field documentation - `__. - 2) ``field`` is not required if ``aggregate`` is ``count``. - legend : anyOf(:class:`Legend`, None) - An object defining properties of the legend. - If ``null``, the legend for the encoding channel will be removed. - - **Default value:** If undefined, default `legend properties - `__ are applied. - - **See also:** `legend `__ - documentation. - scale : anyOf(:class:`Scale`, None) - An object defining properties of the channel's scale, which is the function that - transforms values in the data domain (numbers, dates, strings, etc) to visual values - (pixels, colors, sizes) of the encoding channels. - - If ``null``, the scale will be `disabled and the data value will be directly encoded - `__. - - **Default value:** If undefined, default `scale properties - `__ are applied. - - **See also:** `scale `__ - documentation. - sort : :class:`Sort` - Sort order for the encoded field. - - For continuous fields (quantitative or temporal), ``sort`` can be either - ``"ascending"`` or ``"descending"``. - - For discrete fields, ``sort`` can be one of the following: - - - * ``"ascending"`` or ``"descending"`` -- for sorting by the values' natural order in - Javascript. 
- * `A sort-by-encoding definition - `__ for sorting - by another encoding channel. (This type of sort definition is not available for - ``row`` and ``column`` channels.) - * `A sort field definition - `__ for sorting by - another field. - * `An array specifying the field values in preferred order - `__. In this case, the - sort order will obey the values in the array, followed by any unspecified values - in their original order. For discrete time field, values in the sort array can be - `date-time definition objects `__. In addition, for time units - ``"month"`` and ``"day"``, the values can be the month or day names (case - insensitive) or their 3-letter initials (e.g., ``"Mon"``, ``"Tue"`` ). - * ``null`` indicating no sort. - - **Default value:** ``"ascending"`` - - **Note:** ``null`` is not supported for ``row`` and ``column``. - - **See also:** `sort `__ - documentation. - timeUnit : :class:`TimeUnit` - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field. - or `a temporal field that gets casted as ordinal - `__. - - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(string, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - """ - _schema = {'$ref': '#/definitions/ShapeFieldDefWithCondition'} - - def __init__(self, type=Undefined, aggregate=Undefined, bin=Undefined, condition=Undefined, - field=Undefined, legend=Undefined, scale=Undefined, sort=Undefined, timeUnit=Undefined, - title=Undefined, **kwds): - super(ShapeFieldDefWithCondition, self).__init__(type=type, aggregate=aggregate, bin=bin, - condition=condition, field=field, - legend=legend, scale=scale, sort=sort, - timeUnit=timeUnit, title=title, **kwds) - - -class ShapeValueDefWithCondition(VegaLiteSchema): - """ShapeValueDefWithCondition schema wrapper - - Mapping(required=[]) - A ValueDef with Condition where either the condition or the value are - optional. - - Attributes - ---------- - - condition : anyOf(:class:`ConditionalMarkPropFieldDefTypeForShape`, - :class:`ConditionalStringValueDef`, List(:class:`ConditionalStringValueDef`)) - A field definition or one or more value definition(s) with a selection predicate. - value : anyOf(string, None) - A constant value in visual domain (e.g., ``"red"`` / "#0099ff" for color, values - between ``0`` to ``1`` for opacity). 
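-
-    Examples
-    --------
-    A minimal sketch: conditioning the ``shape`` channel on a selection produces this
-    kind of value-with-condition definition (the helper calls and dataset URL are
-    illustrative assumptions)::
-
-        import altair as alt
-
-        hover = alt.selection_single(on='mouseover', empty='none')
-
-        # Points under the cursor are drawn as diamonds, all others as circles.
-        chart = alt.Chart('data/cars.json').mark_point().encode(
-            x='Horsepower:Q',
-            y='Miles_per_Gallon:Q',
-            shape=alt.condition(hover, alt.value('diamond'), alt.value('circle')),
-        ).add_selection(hover)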
- """ - _schema = {'$ref': '#/definitions/ShapeValueDefWithCondition'} - - def __init__(self, condition=Undefined, value=Undefined, **kwds): - super(ShapeValueDefWithCondition, self).__init__(condition=condition, value=value, **kwds) - - -class SignalRef(LayoutBounds): - """SignalRef schema wrapper - - Mapping(required=[signal]) - - Attributes - ---------- - - signal : string - - """ - _schema = {'$ref': '#/definitions/SignalRef'} - - def __init__(self, signal=Undefined, **kwds): - super(SignalRef, self).__init__(signal=signal, **kwds) - - -class SingleDefUnitChannel(VegaLiteSchema): - """SingleDefUnitChannel schema wrapper - - enum('x', 'y', 'x2', 'y2', 'longitude', 'latitude', 'longitude2', 'latitude2', 'color', - 'fill', 'stroke', 'opacity', 'fillOpacity', 'strokeOpacity', 'strokeWidth', 'size', 'shape', - 'key', 'text', 'tooltip', 'href') - """ - _schema = {'$ref': '#/definitions/SingleDefUnitChannel'} - - def __init__(self, *args): - super(SingleDefUnitChannel, self).__init__(*args) - - -class SingleSelection(SelectionDef): - """SingleSelection schema wrapper - - Mapping(required=[type]) - - Attributes - ---------- - - type : enum('single') - Determines the default event processing and data query for the selection. Vega-Lite - currently supports three selection types: - - - * ``single`` -- to select a single discrete data value on ``click``. - * ``multi`` -- to select multiple discrete data value; the first value is selected - on ``click`` and additional values toggled on shift- ``click``. - * ``interval`` -- to select a continuous range of data values on ``drag``. - bind : anyOf(:class:`Binding`, Mapping(required=[])) - Establish a two-way binding between a single selection and input elements - (also known as dynamic query widgets). A binding takes the form of - Vega's `input element binding definition - `__ - or can be a mapping between projected field/encodings and binding definitions. - - **See also:** `bind `__ - documentation. - clear : anyOf(:class:`EventStream`, boolean) - Clears the selection, emptying it of all values. Can be an - `EventStream `__ or ``false`` to - disable. - - **Default value:** ``dblclick``. - - **See also:** `clear `__ - documentation. - empty : enum('all', 'none') - By default, ``all`` data values are considered to lie within an empty selection. - When set to ``none``, empty selections contain no data values. - encodings : List(:class:`SingleDefUnitChannel`) - An array of encoding channels. The corresponding data field values - must match for a data tuple to fall within the selection. - - **See also:** `encodings `__ - documentation. - fields : List(:class:`FieldName`) - An array of field names whose values must match for a data tuple to - fall within the selection. - - **See also:** `fields `__ - documentation. - init : :class:`SelectionInitMapping` - Initialize the selection with a mapping between `projected channels or field names - `__ and initial values. - - **See also:** `init `__ - documentation. - nearest : boolean - When true, an invisible voronoi diagram is computed to accelerate discrete - selection. The data value *nearest* the mouse cursor is added to the selection. - - **See also:** `nearest `__ - documentation. - on : :class:`EventStream` - A `Vega event stream `__ (object or - selector) that triggers the selection. - For interval selections, the event stream must specify a `start and end - `__. 
- resolve : :class:`SelectionResolution` - With layered and multi-view displays, a strategy that determines how - selections' data queries are resolved when applied in a filter transform, - conditional encoding rule, or scale domain. - - **See also:** `resolve - `__ documentation. - """ - _schema = {'$ref': '#/definitions/SingleSelection'} - - def __init__(self, type=Undefined, bind=Undefined, clear=Undefined, empty=Undefined, - encodings=Undefined, fields=Undefined, init=Undefined, nearest=Undefined, on=Undefined, - resolve=Undefined, **kwds): - super(SingleSelection, self).__init__(type=type, bind=bind, clear=clear, empty=empty, - encodings=encodings, fields=fields, init=init, - nearest=nearest, on=on, resolve=resolve, **kwds) - - -class SingleSelectionConfig(VegaLiteSchema): - """SingleSelectionConfig schema wrapper - - Mapping(required=[]) - - Attributes - ---------- - - bind : anyOf(:class:`Binding`, Mapping(required=[])) - Establish a two-way binding between a single selection and input elements - (also known as dynamic query widgets). A binding takes the form of - Vega's `input element binding definition - `__ - or can be a mapping between projected field/encodings and binding definitions. - - **See also:** `bind `__ - documentation. - clear : anyOf(:class:`EventStream`, boolean) - Clears the selection, emptying it of all values. Can be an - `EventStream `__ or ``false`` to - disable. - - **Default value:** ``dblclick``. - - **See also:** `clear `__ - documentation. - empty : enum('all', 'none') - By default, ``all`` data values are considered to lie within an empty selection. - When set to ``none``, empty selections contain no data values. - encodings : List(:class:`SingleDefUnitChannel`) - An array of encoding channels. The corresponding data field values - must match for a data tuple to fall within the selection. - - **See also:** `encodings `__ - documentation. - fields : List(:class:`FieldName`) - An array of field names whose values must match for a data tuple to - fall within the selection. - - **See also:** `fields `__ - documentation. - init : :class:`SelectionInitMapping` - Initialize the selection with a mapping between `projected channels or field names - `__ and initial values. - - **See also:** `init `__ - documentation. - nearest : boolean - When true, an invisible voronoi diagram is computed to accelerate discrete - selection. The data value *nearest* the mouse cursor is added to the selection. - - **See also:** `nearest `__ - documentation. - on : :class:`EventStream` - A `Vega event stream `__ (object or - selector) that triggers the selection. - For interval selections, the event stream must specify a `start and end - `__. - resolve : :class:`SelectionResolution` - With layered and multi-view displays, a strategy that determines how - selections' data queries are resolved when applied in a filter transform, - conditional encoding rule, or scale domain. - - **See also:** `resolve - `__ documentation. 
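-
-    Examples
-    --------
-    A minimal sketch, assuming Altair's generated ``configure_selection`` method
-    accepts this config object to change the defaults for every single selection in a
-    chart (the dataset URL is illustrative)::
-
-        import altair as alt
-
-        chart = alt.Chart('data/cars.json').mark_point().encode(
-            x='Horsepower:Q',
-            y='Miles_per_Gallon:Q',
-        ).configure_selection(
-            # Single selections now populate on double-click and snap to the
-            # nearest data point.
-            single=alt.SingleSelectionConfig(on='dblclick', nearest=True),
-        )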
- """ - _schema = {'$ref': '#/definitions/SingleSelectionConfig'} - - def __init__(self, bind=Undefined, clear=Undefined, empty=Undefined, encodings=Undefined, - fields=Undefined, init=Undefined, nearest=Undefined, on=Undefined, resolve=Undefined, - **kwds): - super(SingleSelectionConfig, self).__init__(bind=bind, clear=clear, empty=empty, - encodings=encodings, fields=fields, init=init, - nearest=nearest, on=on, resolve=resolve, **kwds) - - -class Sort(VegaLiteSchema): - """Sort schema wrapper - - anyOf(:class:`SortArray`, :class:`SortOrder`, :class:`EncodingSortField`, - :class:`SortByEncoding`, None) - """ - _schema = {'$ref': '#/definitions/Sort'} - - def __init__(self, *args, **kwds): - super(Sort, self).__init__(*args, **kwds) - - -class EncodingSortField(Sort): - """EncodingSortField schema wrapper - - Mapping(required=[]) - A sort definition for sorting a discrete scale in an encoding field definition. - - Attributes - ---------- - - field : :class:`Field` - The data `field `__ to sort by. - - **Default value:** If unspecified, defaults to the field specified in the outer data - reference. - op : :class:`AggregateOp` - An `aggregate operation - `__ to perform on the - field prior to sorting (e.g., ``"count"``, ``"mean"`` and ``"median"`` ). - An aggregation is required when there are multiple values of the sort field for each - encoded data field. - The input data objects will be aggregated, grouped by the encoded data field. - - For a full list of operations, please see the documentation for `aggregate - `__. - - **Default value:** ``"sum"`` for stacked plots. Otherwise, ``"mean"``. - order : anyOf(:class:`SortOrder`, None) - The sort order. One of ``"ascending"`` (default), ``"descending"``, or ``null`` (no - not sort). - """ - _schema = {'$ref': '#/definitions/EncodingSortField'} - - def __init__(self, field=Undefined, op=Undefined, order=Undefined, **kwds): - super(EncodingSortField, self).__init__(field=field, op=op, order=order, **kwds) - - -class SortArray(Sort): - """SortArray schema wrapper - - anyOf(List(float), List(string), List(boolean), List(:class:`DateTime`)) - """ - _schema = {'$ref': '#/definitions/SortArray'} - - def __init__(self, *args, **kwds): - super(SortArray, self).__init__(*args, **kwds) - - -class SortByEncoding(Sort): - """SortByEncoding schema wrapper - - Mapping(required=[encoding]) - - Attributes - ---------- - - encoding : :class:`SingleDefUnitChannel` - The `encoding channel - `__ to sort by (e.g., - ``"x"``, ``"y"`` ) - order : anyOf(:class:`SortOrder`, None) - The sort order. One of ``"ascending"`` (default), ``"descending"``, or ``null`` (no - not sort). - """ - _schema = {'$ref': '#/definitions/SortByEncoding'} - - def __init__(self, encoding=Undefined, order=Undefined, **kwds): - super(SortByEncoding, self).__init__(encoding=encoding, order=order, **kwds) - - -class SortField(VegaLiteSchema): - """SortField schema wrapper - - Mapping(required=[field]) - A sort definition for transform - - Attributes - ---------- - - field : :class:`FieldName` - The name of the field to sort. - order : anyOf(:class:`SortOrder`, None) - Whether to sort the field in ascending or descending order. One of ``"ascending"`` - (default), ``"descending"``, or ``null`` (no not sort). 
- """ - _schema = {'$ref': '#/definitions/SortField'} - - def __init__(self, field=Undefined, order=Undefined, **kwds): - super(SortField, self).__init__(field=field, order=order, **kwds) - - -class SortOrder(Sort): - """SortOrder schema wrapper - - enum('ascending', 'descending') - """ - _schema = {'$ref': '#/definitions/SortOrder'} - - def __init__(self, *args): - super(SortOrder, self).__init__(*args) - - -class Spec(VegaLiteSchema): - """Spec schema wrapper - - anyOf(:class:`FacetedUnitSpec`, :class:`LayerSpec`, :class:`FacetSpec`, :class:`RepeatSpec`, - :class:`ConcatSpec`, :class:`VConcatSpec`, :class:`HConcatSpec`) - Any specification in Vega-Lite. - """ - _schema = {'$ref': '#/definitions/Spec'} - - def __init__(self, *args, **kwds): - super(Spec, self).__init__(*args, **kwds) - - -class ConcatSpec(Spec): - """ConcatSpec schema wrapper - - Mapping(required=[concat]) - Base interface for a generalized concatenation specification. - - Attributes - ---------- - - concat : List(:class:`Spec`) - A list of views to be concatenated. - align : anyOf(:class:`LayoutAlign`, :class:`RowColLayoutAlign`) - The alignment to apply to grid rows and columns. - The supported string values are ``"all"``, ``"each"``, and ``"none"``. - - - * For ``"none"``, a flow layout will be used, in which adjacent subviews are simply - placed one after the other. - * For ``"each"``, subviews will be aligned into a clean grid structure, but each row - or column may be of variable size. - * For ``"all"``, subviews will be aligned and each row or column will be sized - identically based on the maximum observed size. String values for this property - will be applied to both grid rows and columns. - - Alternatively, an object value of the form ``{"row": string, "column": string}`` can - be used to supply different alignments for rows and columns. - - **Default value:** ``"all"``. - bounds : enum('full', 'flush') - The bounds calculation method to use for determining the extent of a sub-plot. One - of ``full`` (the default) or ``flush``. - - - * If set to ``full``, the entire calculated bounds (including axes, title, and - legend) will be used. - * If set to ``flush``, only the specified width and height values for the sub-view - will be used. The ``flush`` setting can be useful when attempting to place - sub-plots without axes or legends into a uniform grid structure. - - **Default value:** ``"full"`` - center : anyOf(boolean, :class:`RowColboolean`) - Boolean flag indicating if subviews should be centered relative to their respective - rows or columns. - - An object value of the form ``{"row": boolean, "column": boolean}`` can be used to - supply different centering values for rows and columns. - - **Default value:** ``false`` - columns : float - The number of columns to include in the view composition layout. - - **Default value** : ``undefined`` -- An infinite number of columns (a single row) - will be assumed. This is equivalent to - ``hconcat`` (for ``concat`` ) and to using the ``column`` channel (for ``facet`` and - ``repeat`` ). - - **Note** : - - 1) This property is only for: - - - * the general (wrappable) ``concat`` operator (not ``hconcat`` / ``vconcat`` ) - * the ``facet`` and ``repeat`` operator with one field/repetition definition - (without row/column nesting) - - 2) Setting the ``columns`` to ``1`` is equivalent to ``vconcat`` (for ``concat`` ) - and to using the ``row`` channel (for ``facet`` and ``repeat`` ). - data : anyOf(:class:`Data`, None) - An object describing the data source. 
Set to ``null`` to ignore the parent's data - source. If no data is set, it is derived from the parent. - description : string - Description of this mark for commenting purpose. - name : string - Name of the visualization for later reference. - resolve : :class:`Resolve` - Scale, axis, and legend resolutions for view composition specifications. - spacing : anyOf(float, :class:`RowColnumber`) - The spacing in pixels between sub-views of the composition operator. - An object of the form ``{"row": number, "column": number}`` can be used to set - different spacing values for rows and columns. - - **Default value** : Depends on ``"spacing"`` property of `the view composition - configuration `__ ( - ``20`` by default) - title : anyOf(string, :class:`TitleParams`) - Title for the plot. - transform : List(:class:`Transform`) - An array of data transformations such as filter and new field calculation. - """ - _schema = {'$ref': '#/definitions/ConcatSpec'} - - def __init__(self, concat=Undefined, align=Undefined, bounds=Undefined, center=Undefined, - columns=Undefined, data=Undefined, description=Undefined, name=Undefined, - resolve=Undefined, spacing=Undefined, title=Undefined, transform=Undefined, **kwds): - super(ConcatSpec, self).__init__(concat=concat, align=align, bounds=bounds, center=center, - columns=columns, data=data, description=description, name=name, - resolve=resolve, spacing=spacing, title=title, - transform=transform, **kwds) - - -class FacetSpec(Spec): - """FacetSpec schema wrapper - - Mapping(required=[facet, spec]) - Base interface for a facet specification. - - Attributes - ---------- - - facet : anyOf(:class:`FacetFieldDef`, :class:`FacetMapping`) - Definition for how to facet the data. One of: - 1) `a field definition for faceting the plot by one field - `__ - 2) `An object that maps row and column channels to their field definitions - `__ - spec : anyOf(:class:`LayerSpec`, :class:`FacetedUnitSpec`) - A specification of the view that gets faceted. - align : anyOf(:class:`LayoutAlign`, :class:`RowColLayoutAlign`) - The alignment to apply to grid rows and columns. - The supported string values are ``"all"``, ``"each"``, and ``"none"``. - - - * For ``"none"``, a flow layout will be used, in which adjacent subviews are simply - placed one after the other. - * For ``"each"``, subviews will be aligned into a clean grid structure, but each row - or column may be of variable size. - * For ``"all"``, subviews will be aligned and each row or column will be sized - identically based on the maximum observed size. String values for this property - will be applied to both grid rows and columns. - - Alternatively, an object value of the form ``{"row": string, "column": string}`` can - be used to supply different alignments for rows and columns. - - **Default value:** ``"all"``. - bounds : enum('full', 'flush') - The bounds calculation method to use for determining the extent of a sub-plot. One - of ``full`` (the default) or ``flush``. - - - * If set to ``full``, the entire calculated bounds (including axes, title, and - legend) will be used. - * If set to ``flush``, only the specified width and height values for the sub-view - will be used. The ``flush`` setting can be useful when attempting to place - sub-plots without axes or legends into a uniform grid structure. - - **Default value:** ``"full"`` - center : anyOf(boolean, :class:`RowColboolean`) - Boolean flag indicating if subviews should be centered relative to their respective - rows or columns. 
- - An object value of the form ``{"row": boolean, "column": boolean}`` can be used to - supply different centering values for rows and columns. - - **Default value:** ``false`` - columns : float - The number of columns to include in the view composition layout. - - **Default value** : ``undefined`` -- An infinite number of columns (a single row) - will be assumed. This is equivalent to - ``hconcat`` (for ``concat`` ) and to using the ``column`` channel (for ``facet`` and - ``repeat`` ). - - **Note** : - - 1) This property is only for: - - - * the general (wrappable) ``concat`` operator (not ``hconcat`` / ``vconcat`` ) - * the ``facet`` and ``repeat`` operator with one field/repetition definition - (without row/column nesting) - - 2) Setting the ``columns`` to ``1`` is equivalent to ``vconcat`` (for ``concat`` ) - and to using the ``row`` channel (for ``facet`` and ``repeat`` ). - data : anyOf(:class:`Data`, None) - An object describing the data source. Set to ``null`` to ignore the parent's data - source. If no data is set, it is derived from the parent. - description : string - Description of this mark for commenting purpose. - name : string - Name of the visualization for later reference. - resolve : :class:`Resolve` - Scale, axis, and legend resolutions for view composition specifications. - spacing : anyOf(float, :class:`RowColnumber`) - The spacing in pixels between sub-views of the composition operator. - An object of the form ``{"row": number, "column": number}`` can be used to set - different spacing values for rows and columns. - - **Default value** : Depends on ``"spacing"`` property of `the view composition - configuration `__ ( - ``20`` by default) - title : anyOf(string, :class:`TitleParams`) - Title for the plot. - transform : List(:class:`Transform`) - An array of data transformations such as filter and new field calculation. - """ - _schema = {'$ref': '#/definitions/FacetSpec'} - - def __init__(self, facet=Undefined, spec=Undefined, align=Undefined, bounds=Undefined, - center=Undefined, columns=Undefined, data=Undefined, description=Undefined, - name=Undefined, resolve=Undefined, spacing=Undefined, title=Undefined, - transform=Undefined, **kwds): - super(FacetSpec, self).__init__(facet=facet, spec=spec, align=align, bounds=bounds, - center=center, columns=columns, data=data, - description=description, name=name, resolve=resolve, - spacing=spacing, title=title, transform=transform, **kwds) - - -class FacetedUnitSpec(Spec): - """FacetedUnitSpec schema wrapper - - Mapping(required=[mark]) - Unit spec that can have a composite mark and row or column channels (shorthand for a facet - spec). - - Attributes - ---------- - - mark : :class:`AnyMark` - A string describing the mark type (one of ``"bar"``, ``"circle"``, ``"square"``, - ``"tick"``, ``"line"``, - ``"area"``, ``"point"``, ``"rule"``, ``"geoshape"``, and ``"text"`` ) or a `mark - definition object `__. - align : anyOf(:class:`LayoutAlign`, :class:`RowColLayoutAlign`) - The alignment to apply to grid rows and columns. - The supported string values are ``"all"``, ``"each"``, and ``"none"``. - - - * For ``"none"``, a flow layout will be used, in which adjacent subviews are simply - placed one after the other. - * For ``"each"``, subviews will be aligned into a clean grid structure, but each row - or column may be of variable size. - * For ``"all"``, subviews will be aligned and each row or column will be sized - identically based on the maximum observed size. 
String values for this property - will be applied to both grid rows and columns. - - Alternatively, an object value of the form ``{"row": string, "column": string}`` can - be used to supply different alignments for rows and columns. - - **Default value:** ``"all"``. - bounds : enum('full', 'flush') - The bounds calculation method to use for determining the extent of a sub-plot. One - of ``full`` (the default) or ``flush``. - - - * If set to ``full``, the entire calculated bounds (including axes, title, and - legend) will be used. - * If set to ``flush``, only the specified width and height values for the sub-view - will be used. The ``flush`` setting can be useful when attempting to place - sub-plots without axes or legends into a uniform grid structure. - - **Default value:** ``"full"`` - center : anyOf(boolean, :class:`RowColboolean`) - Boolean flag indicating if subviews should be centered relative to their respective - rows or columns. - - An object value of the form ``{"row": boolean, "column": boolean}`` can be used to - supply different centering values for rows and columns. - - **Default value:** ``false`` - columns : float - The number of columns to include in the view composition layout. - - **Default value** : ``undefined`` -- An infinite number of columns (a single row) - will be assumed. This is equivalent to - ``hconcat`` (for ``concat`` ) and to using the ``column`` channel (for ``facet`` and - ``repeat`` ). - - **Note** : - - 1) This property is only for: - - - * the general (wrappable) ``concat`` operator (not ``hconcat`` / ``vconcat`` ) - * the ``facet`` and ``repeat`` operator with one field/repetition definition - (without row/column nesting) - - 2) Setting the ``columns`` to ``1`` is equivalent to ``vconcat`` (for ``concat`` ) - and to using the ``row`` channel (for ``facet`` and ``repeat`` ). - data : anyOf(:class:`Data`, None) - An object describing the data source. Set to ``null`` to ignore the parent's data - source. If no data is set, it is derived from the parent. - description : string - Description of this mark for commenting purpose. - encoding : :class:`FacetedEncoding` - A key-value mapping between encoding channels and definition of fields. - height : float - The height of a visualization. - - **Default value:** - - - * If a view's `autosize - `__ type is ``"fit"`` or - its y-channel has a `continuous scale - `__, the height will - be the value of `config.view.height - `__. - * For y-axis with a band or point scale: if `rangeStep - `__ is a numeric value or - unspecified, the height is `determined by the range step, paddings, and the - cardinality of the field mapped to y-channel - `__. Otherwise, if the - ``rangeStep`` is ``null``, the height will be the value of `config.view.height - `__. - * If no field is mapped to ``y`` channel, the ``height`` will be the value of - ``rangeStep``. - - **Note** : For plots with `row and column channels - `__, this represents the - height of a single view. - - **See also:** The documentation for `width and height - `__ contains more examples. - name : string - Name of the visualization for later reference. - projection : :class:`Projection` - An object defining properties of geographic projection, which will be applied to - ``shape`` path for ``"geoshape"`` marks - and to ``latitude`` and ``"longitude"`` channels for other marks. - resolve : :class:`Resolve` - Scale, axis, and legend resolutions for view composition specifications. 
- selection : Mapping(required=[]) - A key-value mapping between selection names and definitions. - spacing : anyOf(float, :class:`RowColnumber`) - The spacing in pixels between sub-views of the composition operator. - An object of the form ``{"row": number, "column": number}`` can be used to set - different spacing values for rows and columns. - - **Default value** : Depends on ``"spacing"`` property of `the view composition - configuration `__ ( - ``20`` by default) - title : anyOf(string, :class:`TitleParams`) - Title for the plot. - transform : List(:class:`Transform`) - An array of data transformations such as filter and new field calculation. - view : :class:`ViewBackground` - An object defining the view background's fill and stroke. - - **Default value:** none (transparent) - width : float - The width of a visualization. - - **Default value:** This will be determined by the following rules: - - - * If a view's `autosize - `__ type is ``"fit"`` or - its x-channel has a `continuous scale - `__, the width will - be the value of `config.view.width - `__. - * For x-axis with a band or point scale: if `rangeStep - `__ is a numeric value or - unspecified, the width is `determined by the range step, paddings, and the - cardinality of the field mapped to x-channel - `__. Otherwise, if the - ``rangeStep`` is ``null``, the width will be the value of `config.view.width - `__. - * If no field is mapped to ``x`` channel, the ``width`` will be the value of - `config.scale.textXRangeStep - `__ for - ``text`` mark and the value of ``rangeStep`` for other marks. - - **Note:** For plots with `row and column channels - `__, this represents the - width of a single view. - - **See also:** The documentation for `width and height - `__ contains more examples. - """ - _schema = {'$ref': '#/definitions/FacetedUnitSpec'} - - def __init__(self, mark=Undefined, align=Undefined, bounds=Undefined, center=Undefined, - columns=Undefined, data=Undefined, description=Undefined, encoding=Undefined, - height=Undefined, name=Undefined, projection=Undefined, resolve=Undefined, - selection=Undefined, spacing=Undefined, title=Undefined, transform=Undefined, - view=Undefined, width=Undefined, **kwds): - super(FacetedUnitSpec, self).__init__(mark=mark, align=align, bounds=bounds, center=center, - columns=columns, data=data, description=description, - encoding=encoding, height=height, name=name, - projection=projection, resolve=resolve, - selection=selection, spacing=spacing, title=title, - transform=transform, view=view, width=width, **kwds) - - -class HConcatSpec(Spec): - """HConcatSpec schema wrapper - - Mapping(required=[hconcat]) - Base interface for a horizontal concatenation specification. - - Attributes - ---------- - - hconcat : List(:class:`Spec`) - A list of views to be concatenated and put into a row. - bounds : enum('full', 'flush') - The bounds calculation method to use for determining the extent of a sub-plot. One - of ``full`` (the default) or ``flush``. - - - * If set to ``full``, the entire calculated bounds (including axes, title, and - legend) will be used. - * If set to ``flush``, only the specified width and height values for the sub-view - will be used. The ``flush`` setting can be useful when attempting to place - sub-plots without axes or legends into a uniform grid structure. - - **Default value:** ``"full"`` - center : boolean - Boolean flag indicating if subviews should be centered relative to their respective - rows or columns. 
-
-        **Default value:** ``false``
-    data : anyOf(:class:`Data`, None)
-        An object describing the data source. Set to ``null`` to ignore the parent's data
-        source. If no data is set, it is derived from the parent.
-    description : string
-        Description of this mark for commenting purposes.
-    name : string
-        Name of the visualization for later reference.
-    resolve : :class:`Resolve`
-        Scale, axis, and legend resolutions for view composition specifications.
-    spacing : float
-        The spacing in pixels between sub-views of the concat operator.
-
-        **Default value** : ``10``
-    title : anyOf(string, :class:`TitleParams`)
-        Title for the plot.
-    transform : List(:class:`Transform`)
-        An array of data transformations such as filter and new field calculation.
-    """
-    _schema = {'$ref': '#/definitions/HConcatSpec'}
-
-    def __init__(self, hconcat=Undefined, bounds=Undefined, center=Undefined, data=Undefined,
-                 description=Undefined, name=Undefined, resolve=Undefined, spacing=Undefined,
-                 title=Undefined, transform=Undefined, **kwds):
-        super(HConcatSpec, self).__init__(hconcat=hconcat, bounds=bounds, center=center, data=data,
-                                          description=description, name=name, resolve=resolve,
-                                          spacing=spacing, title=title, transform=transform, **kwds)
-
-
-class LayerSpec(Spec):
-    """LayerSpec schema wrapper
-
-    Mapping(required=[layer])
-    A full layered plot specification, which may contain ``encoding`` and ``projection``
-    properties that will be applied to underlying unit (single-view) specifications.
-
-    Attributes
-    ----------
-
-    layer : List(anyOf(:class:`LayerSpec`, :class:`UnitSpec`))
-        Layer or single view specifications to be layered.
-
-        **Note** : Specifications inside ``layer`` cannot use ``row`` and ``column``
-        channels as layering facet specifications is not allowed. Instead, use the `facet
-        operator `__ and place a layer
-        inside a facet.
-    data : anyOf(:class:`Data`, None)
-        An object describing the data source. Set to ``null`` to ignore the parent's data
-        source. If no data is set, it is derived from the parent.
-    description : string
-        Description of this mark for commenting purposes.
-    encoding : :class:`Encoding`
-        A shared key-value mapping between encoding channels and definition of fields in the
-        underlying layers.
-    height : float
-        The height of a visualization.
-
-        **Default value:**
-
-
-        * If a view's `autosize
-          `__ type is ``"fit"`` or
-          its y-channel has a `continuous scale
-          `__, the height will
-          be the value of `config.view.height
-          `__.
-        * For y-axis with a band or point scale: if `rangeStep
-          `__ is a numeric value or
-          unspecified, the height is `determined by the range step, paddings, and the
-          cardinality of the field mapped to y-channel
-          `__. Otherwise, if the
-          ``rangeStep`` is ``null``, the height will be the value of `config.view.height
-          `__.
-        * If no field is mapped to ``y`` channel, the ``height`` will be the value of
-          ``rangeStep``.
-
-        **Note** : For plots with `row and column channels
-        `__, this represents the
-        height of a single view.
-
-        **See also:** The documentation for `width and height
-        `__ contains more examples.
-    name : string
-        Name of the visualization for later reference.
-    projection : :class:`Projection`
-        An object defining properties of the geographic projection shared by underlying
-        layers.
-    resolve : :class:`Resolve`
-        Scale, axis, and legend resolutions for view composition specifications.
-    title : anyOf(string, :class:`TitleParams`)
-        Title for the plot.
- transform : List(:class:`Transform`) - An array of data transformations such as filter and new field calculation. - view : :class:`ViewBackground` - An object defining the view background's fill and stroke. - - **Default value:** none (transparent) - width : float - The width of a visualization. - - **Default value:** This will be determined by the following rules: - - - * If a view's `autosize - `__ type is ``"fit"`` or - its x-channel has a `continuous scale - `__, the width will - be the value of `config.view.width - `__. - * For x-axis with a band or point scale: if `rangeStep - `__ is a numeric value or - unspecified, the width is `determined by the range step, paddings, and the - cardinality of the field mapped to x-channel - `__. Otherwise, if the - ``rangeStep`` is ``null``, the width will be the value of `config.view.width - `__. - * If no field is mapped to ``x`` channel, the ``width`` will be the value of - `config.scale.textXRangeStep - `__ for - ``text`` mark and the value of ``rangeStep`` for other marks. - - **Note:** For plots with `row and column channels - `__, this represents the - width of a single view. - - **See also:** The documentation for `width and height - `__ contains more examples. - """ - _schema = {'$ref': '#/definitions/LayerSpec'} - - def __init__(self, layer=Undefined, data=Undefined, description=Undefined, encoding=Undefined, - height=Undefined, name=Undefined, projection=Undefined, resolve=Undefined, - title=Undefined, transform=Undefined, view=Undefined, width=Undefined, **kwds): - super(LayerSpec, self).__init__(layer=layer, data=data, description=description, - encoding=encoding, height=height, name=name, - projection=projection, resolve=resolve, title=title, - transform=transform, view=view, width=width, **kwds) - - -class RepeatSpec(Spec): - """RepeatSpec schema wrapper - - Mapping(required=[repeat, spec]) - Base interface for a repeat specification. - - Attributes - ---------- - - repeat : anyOf(List(string), :class:`RepeatMapping`) - Definition for fields to be repeated. One of: - 1) An array of fields to be repeated. If ``"repeat"`` is an array, the field can be - referred using ``{"repeat": "repeat"}`` - 2) An object that mapped ``"row"`` and/or ``"column"`` to the listed of fields to be - repeated along the particular orientations. The objects ``{"repeat": "row"}`` and - ``{"repeat": "column"}`` can be used to refer to the repeated field respectively. - spec : :class:`Spec` - A specification of the view that gets repeated. - align : anyOf(:class:`LayoutAlign`, :class:`RowColLayoutAlign`) - The alignment to apply to grid rows and columns. - The supported string values are ``"all"``, ``"each"``, and ``"none"``. - - - * For ``"none"``, a flow layout will be used, in which adjacent subviews are simply - placed one after the other. - * For ``"each"``, subviews will be aligned into a clean grid structure, but each row - or column may be of variable size. - * For ``"all"``, subviews will be aligned and each row or column will be sized - identically based on the maximum observed size. String values for this property - will be applied to both grid rows and columns. - - Alternatively, an object value of the form ``{"row": string, "column": string}`` can - be used to supply different alignments for rows and columns. - - **Default value:** ``"all"``. - bounds : enum('full', 'flush') - The bounds calculation method to use for determining the extent of a sub-plot. One - of ``full`` (the default) or ``flush``. 
- - - * If set to ``full``, the entire calculated bounds (including axes, title, and - legend) will be used. - * If set to ``flush``, only the specified width and height values for the sub-view - will be used. The ``flush`` setting can be useful when attempting to place - sub-plots without axes or legends into a uniform grid structure. - - **Default value:** ``"full"`` - center : anyOf(boolean, :class:`RowColboolean`) - Boolean flag indicating if subviews should be centered relative to their respective - rows or columns. - - An object value of the form ``{"row": boolean, "column": boolean}`` can be used to - supply different centering values for rows and columns. - - **Default value:** ``false`` - columns : float - The number of columns to include in the view composition layout. - - **Default value** : ``undefined`` -- An infinite number of columns (a single row) - will be assumed. This is equivalent to - ``hconcat`` (for ``concat`` ) and to using the ``column`` channel (for ``facet`` and - ``repeat`` ). - - **Note** : - - 1) This property is only for: - - - * the general (wrappable) ``concat`` operator (not ``hconcat`` / ``vconcat`` ) - * the ``facet`` and ``repeat`` operator with one field/repetition definition - (without row/column nesting) - - 2) Setting the ``columns`` to ``1`` is equivalent to ``vconcat`` (for ``concat`` ) - and to using the ``row`` channel (for ``facet`` and ``repeat`` ). - data : anyOf(:class:`Data`, None) - An object describing the data source. Set to ``null`` to ignore the parent's data - source. If no data is set, it is derived from the parent. - description : string - Description of this mark for commenting purpose. - name : string - Name of the visualization for later reference. - resolve : :class:`Resolve` - Scale, axis, and legend resolutions for view composition specifications. - spacing : anyOf(float, :class:`RowColnumber`) - The spacing in pixels between sub-views of the composition operator. - An object of the form ``{"row": number, "column": number}`` can be used to set - different spacing values for rows and columns. - - **Default value** : Depends on ``"spacing"`` property of `the view composition - configuration `__ ( - ``20`` by default) - title : anyOf(string, :class:`TitleParams`) - Title for the plot. - transform : List(:class:`Transform`) - An array of data transformations such as filter and new field calculation. - """ - _schema = {'$ref': '#/definitions/RepeatSpec'} - - def __init__(self, repeat=Undefined, spec=Undefined, align=Undefined, bounds=Undefined, - center=Undefined, columns=Undefined, data=Undefined, description=Undefined, - name=Undefined, resolve=Undefined, spacing=Undefined, title=Undefined, - transform=Undefined, **kwds): - super(RepeatSpec, self).__init__(repeat=repeat, spec=spec, align=align, bounds=bounds, - center=center, columns=columns, data=data, - description=description, name=name, resolve=resolve, - spacing=spacing, title=title, transform=transform, **kwds) - - -class SphereGenerator(Generator): - """SphereGenerator schema wrapper - - Mapping(required=[sphere]) - - Attributes - ---------- - - sphere : anyOf(enum(True), Mapping(required=[])) - Generate sphere GeoJSON data for the full globe. - name : string - Provide a placeholder name and bind data at runtime. 
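-
- A minimal usage sketch (editor's illustration, not part of the generated
- schema docs; the fill color and projection name are only examples)::
-
-     import altair as alt
-
-     globe = alt.Chart(alt.SphereGenerator(sphere=True)).mark_geoshape(
-         fill='aliceblue'
-     ).project(type='naturalEarth1')
-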
- """ - _schema = {'$ref': '#/definitions/SphereGenerator'} - - def __init__(self, sphere=Undefined, name=Undefined, **kwds): - super(SphereGenerator, self).__init__(sphere=sphere, name=name, **kwds) - - -class StackOffset(VegaLiteSchema): - """StackOffset schema wrapper - - enum('zero', 'center', 'normalize') - """ - _schema = {'$ref': '#/definitions/StackOffset'} - - def __init__(self, *args): - super(StackOffset, self).__init__(*args) - - -class StandardType(VegaLiteSchema): - """StandardType schema wrapper - - enum('quantitative', 'ordinal', 'temporal', 'nominal') - """ - _schema = {'$ref': '#/definitions/StandardType'} - - def __init__(self, *args): - super(StandardType, self).__init__(*args) - - -class StringFieldDefWithCondition(VegaLiteSchema): - """StringFieldDefWithCondition schema wrapper - - Mapping(required=[type]) - A FieldDef with Condition :raw-html:`` - - Attributes - ---------- - - type : :class:`StandardType` - The encoded field's type of measurement ( ``"quantitative"``, ``"temporal"``, - ``"ordinal"``, or ``"nominal"`` ). - It can also be a ``"geojson"`` type for encoding `'geoshape' - `__. - - **Note:** - - - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * Data ``type`` describes the semantics of the data rather than the primitive data - types ( ``number``, ``string``, etc.). The same primitive data type can have - different types of measurement. For example, numeric data can represent - quantitative, ordinal, or nominal data. - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (for using a temporal scale) or `"ordinal" (for using - an ordinal scale) `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat", "type": "quantitative"}``. The ``"type"`` of the aggregate output - is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - aggregate : :class:`Aggregate` - Aggregation function for the field - (e.g., ``mean``, ``sum``, ``median``, ``min``, ``max``, ``count`` ). - - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - bin : anyOf(boolean, :class:`BinParams`, None) - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating that the - data for ``x`` or ``y`` channel are binned before they are imported into Vega-Lite ( - ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. 
- - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. - condition : anyOf(:class:`ConditionalStringValueDef`, - List(:class:`ConditionalStringValueDef`)) - One or more value definition(s) with `a selection or a test predicate - `__. - - **Note:** A field definition's ``condition`` property can only contain `conditional - value definitions `__ - since Vega-Lite only allows at most one encoded field per encoding channel. - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value - or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. - - **Notes:** - 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access nested - objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). - If field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). - See more details about escaping in the `field documentation - `__. - 2) ``field`` is not required if ``aggregate`` is ``count``. - legend : anyOf(:class:`Legend`, None) - An object defining properties of the legend. - If ``null``, the legend for the encoding channel will be removed. - - **Default value:** If undefined, default `legend properties - `__ are applied. - - **See also:** `legend `__ - documentation. - scale : anyOf(:class:`Scale`, None) - An object defining properties of the channel's scale, which is the function that - transforms values in the data domain (numbers, dates, strings, etc) to visual values - (pixels, colors, sizes) of the encoding channels. - - If ``null``, the scale will be `disabled and the data value will be directly encoded - `__. - - **Default value:** If undefined, default `scale properties - `__ are applied. - - **See also:** `scale `__ - documentation. - sort : :class:`Sort` - Sort order for the encoded field. - - For continuous fields (quantitative or temporal), ``sort`` can be either - ``"ascending"`` or ``"descending"``. - - For discrete fields, ``sort`` can be one of the following: - - - * ``"ascending"`` or ``"descending"`` -- for sorting by the values' natural order in - Javascript. - * `A sort-by-encoding definition - `__ for sorting - by another encoding channel. (This type of sort definition is not available for - ``row`` and ``column`` channels.) - * `A sort field definition - `__ for sorting by - another field. - * `An array specifying the field values in preferred order - `__. In this case, the - sort order will obey the values in the array, followed by any unspecified values - in their original order. For discrete time field, values in the sort array can be - `date-time definition objects `__. In addition, for time units - ``"month"`` and ``"day"``, the values can be the month or day names (case - insensitive) or their 3-letter initials (e.g., ``"Mon"``, ``"Tue"`` ). - * ``null`` indicating no sort. - - **Default value:** ``"ascending"`` - - **Note:** ``null`` is not supported for ``row`` and ``column``. - - **See also:** `sort `__ - documentation. - timeUnit : :class:`TimeUnit` - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field. - or `a temporal field that gets casted as ordinal - `__. - - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(string, None) - A title for the field. If ``null``, the title will be removed. 
- - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - """ - _schema = {'$ref': '#/definitions/StringFieldDefWithCondition'} - - def __init__(self, type=Undefined, aggregate=Undefined, bin=Undefined, condition=Undefined, - field=Undefined, legend=Undefined, scale=Undefined, sort=Undefined, timeUnit=Undefined, - title=Undefined, **kwds): - super(StringFieldDefWithCondition, self).__init__(type=type, aggregate=aggregate, bin=bin, - condition=condition, field=field, - legend=legend, scale=scale, sort=sort, - timeUnit=timeUnit, title=title, **kwds) - - -class StringFieldDefWithConditionTypeForShape(VegaLiteSchema): - """StringFieldDefWithConditionTypeForShape schema wrapper - - Mapping(required=[type]) - A FieldDef with Condition :raw-html:`` - - Attributes - ---------- - - type : :class:`TypeForShape` - The encoded field's type of measurement ( ``"quantitative"``, ``"temporal"``, - ``"ordinal"``, or ``"nominal"`` ). - It can also be a ``"geojson"`` type for encoding `'geoshape' - `__. - - **Note:** - - - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * Data ``type`` describes the semantics of the data rather than the primitive data - types ( ``number``, ``string``, etc.). The same primitive data type can have - different types of measurement. For example, numeric data can represent - quantitative, ordinal, or nominal data. - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (for using a temporal scale) or `"ordinal" (for using - an ordinal scale) `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat", "type": "quantitative"}``. The ``"type"`` of the aggregate output - is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - aggregate : :class:`Aggregate` - Aggregation function for the field - (e.g., ``mean``, ``sum``, ``median``, ``min``, ``max``, ``count`` ). - - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. 
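-
- (Editor's note, not part of the generated schema docs: a conditional shape
- field of this kind is typically produced with ``alt.condition``. A minimal
- sketch, assuming ``import altair as alt``; the selection and field names are
- illustrative)::
-
-     sel = alt.selection_single(fields=['species'])
-     shape = alt.condition(sel, alt.Shape('species:N'), alt.value('circle'))
-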
- bin : anyOf(boolean, :class:`BinParams`, None) - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating that the - data for ``x`` or ``y`` channel are binned before they are imported into Vega-Lite ( - ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. - condition : anyOf(:class:`ConditionalStringValueDef`, - List(:class:`ConditionalStringValueDef`)) - One or more value definition(s) with `a selection or a test predicate - `__. - - **Note:** A field definition's ``condition`` property can only contain `conditional - value definitions `__ - since Vega-Lite only allows at most one encoded field per encoding channel. - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value - or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. - - **Notes:** - 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access nested - objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). - If field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). - See more details about escaping in the `field documentation - `__. - 2) ``field`` is not required if ``aggregate`` is ``count``. - legend : anyOf(:class:`Legend`, None) - An object defining properties of the legend. - If ``null``, the legend for the encoding channel will be removed. - - **Default value:** If undefined, default `legend properties - `__ are applied. - - **See also:** `legend `__ - documentation. - scale : anyOf(:class:`Scale`, None) - An object defining properties of the channel's scale, which is the function that - transforms values in the data domain (numbers, dates, strings, etc) to visual values - (pixels, colors, sizes) of the encoding channels. - - If ``null``, the scale will be `disabled and the data value will be directly encoded - `__. - - **Default value:** If undefined, default `scale properties - `__ are applied. - - **See also:** `scale `__ - documentation. - sort : :class:`Sort` - Sort order for the encoded field. - - For continuous fields (quantitative or temporal), ``sort`` can be either - ``"ascending"`` or ``"descending"``. - - For discrete fields, ``sort`` can be one of the following: - - - * ``"ascending"`` or ``"descending"`` -- for sorting by the values' natural order in - Javascript. - * `A sort-by-encoding definition - `__ for sorting - by another encoding channel. (This type of sort definition is not available for - ``row`` and ``column`` channels.) - * `A sort field definition - `__ for sorting by - another field. - * `An array specifying the field values in preferred order - `__. In this case, the - sort order will obey the values in the array, followed by any unspecified values - in their original order. For discrete time field, values in the sort array can be - `date-time definition objects `__. 
In addition, for time units - ``"month"`` and ``"day"``, the values can be the month or day names (case - insensitive) or their 3-letter initials (e.g., ``"Mon"``, ``"Tue"`` ). - * ``null`` indicating no sort. - - **Default value:** ``"ascending"`` - - **Note:** ``null`` is not supported for ``row`` and ``column``. - - **See also:** `sort `__ - documentation. - timeUnit : :class:`TimeUnit` - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field. - or `a temporal field that gets casted as ordinal - `__. - - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(string, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - """ - _schema = {'$ref': '#/definitions/StringFieldDefWithCondition'} - - def __init__(self, type=Undefined, aggregate=Undefined, bin=Undefined, condition=Undefined, - field=Undefined, legend=Undefined, scale=Undefined, sort=Undefined, timeUnit=Undefined, - title=Undefined, **kwds): - super(StringFieldDefWithConditionTypeForShape, self).__init__(type=type, aggregate=aggregate, - bin=bin, condition=condition, - field=field, legend=legend, - scale=scale, sort=sort, - timeUnit=timeUnit, title=title, - **kwds) - - -class StringValueDefWithCondition(VegaLiteSchema): - """StringValueDefWithCondition schema wrapper - - Mapping(required=[]) - A ValueDef with Condition where either the condition or the value are - optional. - - Attributes - ---------- - - condition : anyOf(:class:`ConditionalMarkPropFieldDef`, :class:`ConditionalStringValueDef`, - List(:class:`ConditionalStringValueDef`)) - A field definition or one or more value definition(s) with a selection predicate. - value : anyOf(string, None) - A constant value in visual domain (e.g., ``"red"`` / "#0099ff" for color, values - between ``0`` to ``1`` for opacity). - """ - _schema = {'$ref': '#/definitions/StringValueDefWithCondition'} - - def __init__(self, condition=Undefined, value=Undefined, **kwds): - super(StringValueDefWithCondition, self).__init__(condition=condition, value=value, **kwds) - - -class StringValueDefWithConditionTypeForShape(VegaLiteSchema): - """StringValueDefWithConditionTypeForShape schema wrapper - - Mapping(required=[]) - A ValueDef with Condition where either the condition or the value are - optional. - - Attributes - ---------- - - condition : anyOf(:class:`ConditionalMarkPropFieldDefTypeForShape`, - :class:`ConditionalStringValueDef`, List(:class:`ConditionalStringValueDef`)) - A field definition or one or more value definition(s) with a selection predicate. 
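-
- (Editor's note, not part of the generated schema docs: ``alt.condition`` with
- two plain values yields exactly this kind of mapping. A minimal sketch,
- assuming ``import altair as alt``)::
-
-     sel = alt.selection_single()
-     shape = alt.condition(sel, alt.value('square'), alt.value('circle'))
-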
- value : anyOf(string, None) - A constant value in visual domain (e.g., ``"red"`` / "#0099ff" for color, values - between ``0`` to ``1`` for opacity). - """ - _schema = {'$ref': '#/definitions/StringValueDefWithCondition'} - - def __init__(self, condition=Undefined, value=Undefined, **kwds): - super(StringValueDefWithConditionTypeForShape, self).__init__(condition=condition, value=value, - **kwds) - - -class StrokeCap(VegaLiteSchema): - """StrokeCap schema wrapper - - enum('butt', 'round', 'square') - """ - _schema = {'$ref': '#/definitions/StrokeCap'} - - def __init__(self, *args): - super(StrokeCap, self).__init__(*args) - - -class StrokeJoin(VegaLiteSchema): - """StrokeJoin schema wrapper - - enum('miter', 'round', 'bevel') - """ - _schema = {'$ref': '#/definitions/StrokeJoin'} - - def __init__(self, *args): - super(StrokeJoin, self).__init__(*args) - - -class StyleConfigIndex(VegaLiteSchema): - """StyleConfigIndex schema wrapper - - Mapping(required=[]) - """ - _schema = {'$ref': '#/definitions/StyleConfigIndex'} - - def __init__(self, **kwds): - super(StyleConfigIndex, self).__init__(**kwds) - - -class SymbolShape(VegaLiteSchema): - """SymbolShape schema wrapper - - string - """ - _schema = {'$ref': '#/definitions/SymbolShape'} - - def __init__(self, *args): - super(SymbolShape, self).__init__(*args) - - -class TextBaseline(VegaLiteSchema): - """TextBaseline schema wrapper - - anyOf(enum('alphabetic'), :class:`Baseline`) - """ - _schema = {'$ref': '#/definitions/TextBaseline'} - - def __init__(self, *args, **kwds): - super(TextBaseline, self).__init__(*args, **kwds) - - -class Baseline(TextBaseline): - """Baseline schema wrapper - - enum('top', 'middle', 'bottom') - """ - _schema = {'$ref': '#/definitions/Baseline'} - - def __init__(self, *args): - super(Baseline, self).__init__(*args) - - -class TextConfig(VegaLiteSchema): - """TextConfig schema wrapper - - Mapping(required=[]) - - Attributes - ---------- - - align : :class:`Align` - The horizontal alignment of the text. One of ``"left"``, ``"right"``, ``"center"``. - angle : float - The rotation angle of the text, in degrees. - baseline : :class:`TextBaseline` - The vertical alignment of the text. One of ``"top"``, ``"middle"``, ``"bottom"``. - - **Default value:** ``"middle"`` - color : :class:`Color` - Default color. Note that ``fill`` and ``stroke`` have higher precedence than - ``color`` and will override ``color``. - - **Default value:** :raw-html:`` - ``"#4682b4"`` - - **Note:** This property cannot be used in a `style config - `__. - cornerRadius : float - The radius in pixels of rounded rectangle corners. - - **Default value:** ``0`` - cursor : :class:`Cursor` - The mouse cursor used over the mark. Any valid `CSS cursor type - `__ can be used. - dir : :class:`Dir` - The direction of the text. One of ``"ltr"`` (left-to-right) or ``"rtl"`` - (right-to-left). This property determines on which side is truncated in response to - the limit parameter. - - **Default value:** ``"ltr"`` - dx : float - The horizontal offset, in pixels, between the text label and its anchor point. The - offset is applied after rotation by the *angle* property. - dy : float - The vertical offset, in pixels, between the text label and its anchor point. The - offset is applied after rotation by the *angle* property. - ellipsis : string - The ellipsis string for text truncated in response to the limit parameter. - - **Default value:** ``"…"`` - fill : :class:`Color` - Default Fill Color. 
This has higher precedence than ``config.color`` - - **Default value:** (None) - fillOpacity : float - The fill opacity (value between [0,1]). - - **Default value:** ``1`` - filled : boolean - Whether the mark's color should be used as fill color instead of stroke color. - - **Default value:** ``false`` for ``point``, ``line`` and ``rule`` ; otherwise, - ``true``. - - **Note:** This property cannot be used in a `style config - `__. - font : string - The typeface to set the text in (e.g., ``"Helvetica Neue"`` ). - fontSize : float - The font size, in pixels. - fontStyle : :class:`FontStyle` - The font style (e.g., ``"italic"`` ). - fontWeight : :class:`FontWeight` - The font weight. - This can be either a string (e.g ``"bold"``, ``"normal"`` ) or a number ( ``100``, - ``200``, ``300``, ..., ``900`` where ``"normal"`` = ``400`` and ``"bold"`` = ``700`` - ). - height : float - Height of the marks. - href : string - A URL to load upon mouse click. If defined, the mark acts as a hyperlink. - interpolate : :class:`Interpolate` - The line interpolation method to use for line and area marks. One of the following: - - - * ``"linear"`` : piecewise linear segments, as in a polyline. - * ``"linear-closed"`` : close the linear segments to form a polygon. - * ``"step"`` : alternate between horizontal and vertical segments, as in a step - function. - * ``"step-before"`` : alternate between vertical and horizontal segments, as in a - step function. - * ``"step-after"`` : alternate between horizontal and vertical segments, as in a - step function. - * ``"basis"`` : a B-spline, with control point duplication on the ends. - * ``"basis-open"`` : an open B-spline; may not intersect the start or end. - * ``"basis-closed"`` : a closed B-spline, as in a loop. - * ``"cardinal"`` : a Cardinal spline, with control point duplication on the ends. - * ``"cardinal-open"`` : an open Cardinal spline; may not intersect the start or end, - but will intersect other control points. - * ``"cardinal-closed"`` : a closed Cardinal spline, as in a loop. - * ``"bundle"`` : equivalent to basis, except the tension parameter is used to - straighten the spline. - * ``"monotone"`` : cubic interpolation that preserves monotonicity in y. - limit : float - The maximum length of the text mark in pixels. The text value will be automatically - truncated if the rendered size exceeds the limit. - - **Default value:** ``0``, indicating no limit - opacity : float - The overall opacity (value between [0,1]). - - **Default value:** ``0.7`` for non-aggregate plots with ``point``, ``tick``, - ``circle``, or ``square`` marks or layered ``bar`` charts and ``1`` otherwise. - order : anyOf(None, boolean) - For line and trail marks, this ``order`` property can be set to ``null`` or - ``false`` to make the lines use the original order in the data sources. - orient : :class:`Orientation` - The orientation of a non-stacked bar, tick, area, and line charts. - The value is either horizontal (default) or vertical. - - - * For bar, rule and tick, this determines whether the size of the bar and tick - should be applied to x or y dimension. - * For area, this property determines the orient property of the Vega output. - * For line and trail marks, this property determines the sort order of the points in - the line - if ``config.sortLineBy`` is not specified. - For stacked charts, this is always determined by the orientation of the stack; - therefore explicitly specified value will be ignored. 
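-
- (Editor's note, not part of the generated schema docs: these text-mark
- defaults are normally set from the Python API via ``Chart.configure_text``.
- A minimal sketch; the values are illustrative)::
-
-     chart = chart.configure_text(fontSize=11, font='Helvetica Neue', align='left')
-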
- radius : float - Polar coordinate radial offset, in pixels, of the text label from the origin - determined by the ``x`` and ``y`` properties. - shape : string - Shape of the point marks. Supported values include: - - - * plotting shapes: ``"circle"``, ``"square"``, ``"cross"``, ``"diamond"``, - ``"triangle-up"``, ``"triangle-down"``, ``"triangle-right"``, or - ``"triangle-left"``. - * the line symbol ``"stroke"`` - * centered directional shapes ``"arrow"``, ``"wedge"``, or ``"triangle"`` - * a custom `SVG path string - `__ (For correct - sizing, custom shape paths should be defined within a square bounding box with - coordinates ranging from -1 to 1 along both the x and y dimensions.) - - **Default value:** ``"circle"`` - shortTimeLabels : boolean - Whether month names and weekday names should be abbreviated. - size : float - Default size for marks. - - - * For ``point`` / ``circle`` / ``square``, this represents the pixel area of the - marks. For example: in the case of circles, the radius is determined in part by - the square root of the size value. - * For ``bar``, this represents the band size of the bar, in pixels. - * For ``text``, this represents the font size, in pixels. - - **Default value:** ``30`` for point, circle, square marks; ``rangeStep`` - 1 for bar - marks with discrete dimensions; ``5`` for bar marks with continuous dimensions; - ``11`` for text marks. - stroke : :class:`Color` - Default Stroke Color. This has higher precedence than ``config.color`` - - **Default value:** (None) - strokeCap : :class:`StrokeCap` - The stroke cap for line ending style. One of ``"butt"``, ``"round"``, or - ``"square"``. - - **Default value:** ``"square"`` - strokeDash : List(float) - An array of alternating stroke, space lengths for creating dashed or dotted lines. - strokeDashOffset : float - The offset (in pixels) into which to begin drawing with the stroke dash array. - strokeJoin : :class:`StrokeJoin` - The stroke line join method. One of ``"miter"``, ``"round"`` or ``"bevel"``. - - **Default value:** ``"miter"`` - strokeMiterLimit : float - The miter limit at which to bevel a line join. - strokeOpacity : float - The stroke opacity (value between [0,1]). - - **Default value:** ``1`` - strokeWidth : float - The stroke width, in pixels. - tension : float - Depending on the interpolation type, sets the tension parameter (for line and area - marks). - text : string - Placeholder text if the ``text`` channel is not specified - theta : float - Polar coordinate angle, in radians, of the text label from the origin determined by - the ``x`` and ``y`` properties. Values for ``theta`` follow the same convention of - ``arc`` mark ``startAngle`` and ``endAngle`` properties: angles are measured in - radians, with ``0`` indicating "north". - tooltip : anyOf(:class:`Value`, :class:`TooltipContent`, None) - The tooltip text string to show upon mouse hover or an object defining which fields - should the tooltip be derived from. - - - * If ``tooltip`` is ``{"content": "encoding"}``, then all fields from ``encoding`` - will be used. - * If ``tooltip`` is ``{"content": "data"}``, then all fields that appear in the - highlighted data point will be used. - * If set to ``null``, then no tooltip will be used. - width : float - Width of the marks. - x : anyOf(float, enum('width')) - X coordinates of the marks, or width of horizontal ``"bar"`` and ``"area"`` without - specified ``x2`` or ``width``. - - The ``value`` of this channel can be a number or a string ``"width"`` for the width - of the plot. 
- x2 : anyOf(float, enum('width')) - X2 coordinates for ranged ``"area"``, ``"bar"``, ``"rect"``, and ``"rule"``. - - The ``value`` of this channel can be a number or a string ``"width"`` for the width - of the plot. - y : anyOf(float, enum('height')) - Y coordinates of the marks, or height of vertical ``"bar"`` and ``"area"`` without - specified ``y2`` or ``height``. - - The ``value`` of this channel can be a number or a string ``"height"`` for the - height of the plot. - y2 : anyOf(float, enum('width')) - Y2 coordinates for ranged ``"area"``, ``"bar"``, ``"rect"``, and ``"rule"``. - - The ``value`` of this channel can be a number or a string ``"height"`` for the - height of the plot. - """ - _schema = {'$ref': '#/definitions/TextConfig'} - - def __init__(self, align=Undefined, angle=Undefined, baseline=Undefined, color=Undefined, - cornerRadius=Undefined, cursor=Undefined, dir=Undefined, dx=Undefined, dy=Undefined, - ellipsis=Undefined, fill=Undefined, fillOpacity=Undefined, filled=Undefined, - font=Undefined, fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, - height=Undefined, href=Undefined, interpolate=Undefined, limit=Undefined, - opacity=Undefined, order=Undefined, orient=Undefined, radius=Undefined, - shape=Undefined, shortTimeLabels=Undefined, size=Undefined, stroke=Undefined, - strokeCap=Undefined, strokeDash=Undefined, strokeDashOffset=Undefined, - strokeJoin=Undefined, strokeMiterLimit=Undefined, strokeOpacity=Undefined, - strokeWidth=Undefined, tension=Undefined, text=Undefined, theta=Undefined, - tooltip=Undefined, width=Undefined, x=Undefined, x2=Undefined, y=Undefined, - y2=Undefined, **kwds): - super(TextConfig, self).__init__(align=align, angle=angle, baseline=baseline, color=color, - cornerRadius=cornerRadius, cursor=cursor, dir=dir, dx=dx, - dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, - filled=filled, font=font, fontSize=fontSize, - fontStyle=fontStyle, fontWeight=fontWeight, height=height, - href=href, interpolate=interpolate, limit=limit, - opacity=opacity, order=order, orient=orient, radius=radius, - shape=shape, shortTimeLabels=shortTimeLabels, size=size, - stroke=stroke, strokeCap=strokeCap, strokeDash=strokeDash, - strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOpacity=strokeOpacity, - strokeWidth=strokeWidth, tension=tension, text=text, - theta=theta, tooltip=tooltip, width=width, x=x, x2=x2, y=y, - y2=y2, **kwds) - - -class TextFieldDef(VegaLiteSchema): - """TextFieldDef schema wrapper - - Mapping(required=[type]) - - Attributes - ---------- - - type : :class:`StandardType` - The encoded field's type of measurement ( ``"quantitative"``, ``"temporal"``, - ``"ordinal"``, or ``"nominal"`` ). - It can also be a ``"geojson"`` type for encoding `'geoshape' - `__. - - **Note:** - - - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * Data ``type`` describes the semantics of the data rather than the primitive data - types ( ``number``, ``string``, etc.). The same primitive data type can have - different types of measurement. For example, numeric data can represent - quantitative, ordinal, or nominal data. - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. 
- * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (for using a temporal scale) or `"ordinal" (for using - an ordinal scale) `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat", "type": "quantitative"}``. The ``"type"`` of the aggregate output - is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - aggregate : :class:`Aggregate` - Aggregation function for the field - (e.g., ``mean``, ``sum``, ``median``, ``min``, ``max``, ``count`` ). - - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - bin : anyOf(boolean, :class:`BinParams`, enum('binned'), None) - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating that the - data for ``x`` or ``y`` channel are binned before they are imported into Vega-Lite ( - ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value - or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. - - **Notes:** - 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access nested - objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). - If field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). - See more details about escaping in the `field documentation - `__. - 2) ``field`` is not required if ``aggregate`` is ``count``. - format : string - The text formatting pattern for labels of guides (axes, legends, headers) and text - marks. - - - * If the format type is ``"number"`` (e.g., for quantitative fields), this is D3's - `number format pattern `__. - * If the format type is ``"time"`` (e.g., for temporal fields), this is D3's `time - format pattern `__. - - See the `format documentation `__ - for more examples. - - **Default value:** Derived from `numberFormat - `__ config for number - format and from `timeFormat - `__ config for time - format. - formatType : enum('number', 'time') - The format type for labels ( ``"number"`` or ``"time"`` ). - - **Default value:** - - - * ``"time"`` for temporal fields and ordinal and nomimal fields with ``timeUnit``. - * ``"number"`` for quantitative fields as well as ordinal and nomimal fields without - ``timeUnit``. - timeUnit : :class:`TimeUnit` - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field. - or `a temporal field that gets casted as ordinal - `__. 
- - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(string, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - """ - _schema = {'$ref': '#/definitions/TextFieldDef'} - - def __init__(self, type=Undefined, aggregate=Undefined, bin=Undefined, field=Undefined, - format=Undefined, formatType=Undefined, timeUnit=Undefined, title=Undefined, **kwds): - super(TextFieldDef, self).__init__(type=type, aggregate=aggregate, bin=bin, field=field, - format=format, formatType=formatType, timeUnit=timeUnit, - title=title, **kwds) - - -class TextFieldDefWithCondition(VegaLiteSchema): - """TextFieldDefWithCondition schema wrapper - - Mapping(required=[type]) - A FieldDef with Condition :raw-html:`` - - Attributes - ---------- - - type : :class:`StandardType` - The encoded field's type of measurement ( ``"quantitative"``, ``"temporal"``, - ``"ordinal"``, or ``"nominal"`` ). - It can also be a ``"geojson"`` type for encoding `'geoshape' - `__. - - **Note:** - - - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * Data ``type`` describes the semantics of the data rather than the primitive data - types ( ``number``, ``string``, etc.). The same primitive data type can have - different types of measurement. For example, numeric data can represent - quantitative, ordinal, or nominal data. - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (for using a temporal scale) or `"ordinal" (for using - an ordinal scale) `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat", "type": "quantitative"}``. The ``"type"`` of the aggregate output - is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - aggregate : :class:`Aggregate` - Aggregation function for the field - (e.g., ``mean``, ``sum``, ``median``, ``min``, ``max``, ``count`` ). - - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. 
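-
- (Editor's note, not part of the generated schema docs: a text encoding with a
- format pattern is typically written with ``alt.Text``. A minimal sketch,
- assuming ``import altair as alt`` and a quantitative field ``profit``)::
-
-     text = alt.Text('profit:Q', format='$.2f', formatType='number')
-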
- bin : anyOf(boolean, :class:`BinParams`, enum('binned'), None) - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating that the - data for ``x`` or ``y`` channel are binned before they are imported into Vega-Lite ( - ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. - - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. - condition : anyOf(:class:`ConditionalValueDef`, List(:class:`ConditionalValueDef`)) - One or more value definition(s) with `a selection or a test predicate - `__. - - **Note:** A field definition's ``condition`` property can only contain `conditional - value definitions `__ - since Vega-Lite only allows at most one encoded field per encoding channel. - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value - or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. - - **Notes:** - 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access nested - objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). - If field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). - See more details about escaping in the `field documentation - `__. - 2) ``field`` is not required if ``aggregate`` is ``count``. - format : string - The text formatting pattern for labels of guides (axes, legends, headers) and text - marks. - - - * If the format type is ``"number"`` (e.g., for quantitative fields), this is D3's - `number format pattern `__. - * If the format type is ``"time"`` (e.g., for temporal fields), this is D3's `time - format pattern `__. - - See the `format documentation `__ - for more examples. - - **Default value:** Derived from `numberFormat - `__ config for number - format and from `timeFormat - `__ config for time - format. - formatType : enum('number', 'time') - The format type for labels ( ``"number"`` or ``"time"`` ). - - **Default value:** - - - * ``"time"`` for temporal fields and ordinal and nomimal fields with ``timeUnit``. - * ``"number"`` for quantitative fields as well as ordinal and nomimal fields without - ``timeUnit``. - timeUnit : :class:`TimeUnit` - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field. - or `a temporal field that gets casted as ordinal - `__. - - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(string, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. 
- - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - """ - _schema = {'$ref': '#/definitions/TextFieldDefWithCondition'} - - def __init__(self, type=Undefined, aggregate=Undefined, bin=Undefined, condition=Undefined, - field=Undefined, format=Undefined, formatType=Undefined, timeUnit=Undefined, - title=Undefined, **kwds): - super(TextFieldDefWithCondition, self).__init__(type=type, aggregate=aggregate, bin=bin, - condition=condition, field=field, format=format, - formatType=formatType, timeUnit=timeUnit, - title=title, **kwds) - - -class TextValueDefWithCondition(VegaLiteSchema): - """TextValueDefWithCondition schema wrapper - - Mapping(required=[]) - A ValueDef with Condition where either the condition or the value are - optional. - - Attributes - ---------- - - condition : anyOf(:class:`ConditionalTextFieldDef`, :class:`ConditionalValueDef`, - List(:class:`ConditionalValueDef`)) - A field definition or one or more value definition(s) with a selection predicate. - value : :class:`Value` - A constant value in visual domain (e.g., ``"red"`` / "#0099ff" for color, values - between ``0`` to ``1`` for opacity). - """ - _schema = {'$ref': '#/definitions/TextValueDefWithCondition'} - - def __init__(self, condition=Undefined, value=Undefined, **kwds): - super(TextValueDefWithCondition, self).__init__(condition=condition, value=value, **kwds) - - -class TickConfig(VegaLiteSchema): - """TickConfig schema wrapper - - Mapping(required=[]) - - Attributes - ---------- - - align : :class:`Align` - The horizontal alignment of the text. One of ``"left"``, ``"right"``, ``"center"``. - angle : float - The rotation angle of the text, in degrees. - bandSize : float - The width of the ticks. - - **Default value:** 3/4 of rangeStep. - baseline : :class:`TextBaseline` - The vertical alignment of the text. One of ``"top"``, ``"middle"``, ``"bottom"``. - - **Default value:** ``"middle"`` - color : :class:`Color` - Default color. Note that ``fill`` and ``stroke`` have higher precedence than - ``color`` and will override ``color``. - - **Default value:** :raw-html:`` - ``"#4682b4"`` - - **Note:** This property cannot be used in a `style config - `__. - cornerRadius : float - The radius in pixels of rounded rectangle corners. - - **Default value:** ``0`` - cursor : :class:`Cursor` - The mouse cursor used over the mark. Any valid `CSS cursor type - `__ can be used. - dir : :class:`Dir` - The direction of the text. One of ``"ltr"`` (left-to-right) or ``"rtl"`` - (right-to-left). This property determines on which side is truncated in response to - the limit parameter. - - **Default value:** ``"ltr"`` - dx : float - The horizontal offset, in pixels, between the text label and its anchor point. The - offset is applied after rotation by the *angle* property. - dy : float - The vertical offset, in pixels, between the text label and its anchor point. The - offset is applied after rotation by the *angle* property. - ellipsis : string - The ellipsis string for text truncated in response to the limit parameter. - - **Default value:** ``"…"`` - fill : :class:`Color` - Default Fill Color. This has higher precedence than ``config.color`` - - **Default value:** (None) - fillOpacity : float - The fill opacity (value between [0,1]). 
- - **Default value:** ``1`` - filled : boolean - Whether the mark's color should be used as fill color instead of stroke color. - - **Default value:** ``false`` for ``point``, ``line`` and ``rule`` ; otherwise, - ``true``. - - **Note:** This property cannot be used in a `style config - `__. - font : string - The typeface to set the text in (e.g., ``"Helvetica Neue"`` ). - fontSize : float - The font size, in pixels. - fontStyle : :class:`FontStyle` - The font style (e.g., ``"italic"`` ). - fontWeight : :class:`FontWeight` - The font weight. - This can be either a string (e.g ``"bold"``, ``"normal"`` ) or a number ( ``100``, - ``200``, ``300``, ..., ``900`` where ``"normal"`` = ``400`` and ``"bold"`` = ``700`` - ). - height : float - Height of the marks. - href : string - A URL to load upon mouse click. If defined, the mark acts as a hyperlink. - interpolate : :class:`Interpolate` - The line interpolation method to use for line and area marks. One of the following: - - - * ``"linear"`` : piecewise linear segments, as in a polyline. - * ``"linear-closed"`` : close the linear segments to form a polygon. - * ``"step"`` : alternate between horizontal and vertical segments, as in a step - function. - * ``"step-before"`` : alternate between vertical and horizontal segments, as in a - step function. - * ``"step-after"`` : alternate between horizontal and vertical segments, as in a - step function. - * ``"basis"`` : a B-spline, with control point duplication on the ends. - * ``"basis-open"`` : an open B-spline; may not intersect the start or end. - * ``"basis-closed"`` : a closed B-spline, as in a loop. - * ``"cardinal"`` : a Cardinal spline, with control point duplication on the ends. - * ``"cardinal-open"`` : an open Cardinal spline; may not intersect the start or end, - but will intersect other control points. - * ``"cardinal-closed"`` : a closed Cardinal spline, as in a loop. - * ``"bundle"`` : equivalent to basis, except the tension parameter is used to - straighten the spline. - * ``"monotone"`` : cubic interpolation that preserves monotonicity in y. - limit : float - The maximum length of the text mark in pixels. The text value will be automatically - truncated if the rendered size exceeds the limit. - - **Default value:** ``0``, indicating no limit - opacity : float - The overall opacity (value between [0,1]). - - **Default value:** ``0.7`` for non-aggregate plots with ``point``, ``tick``, - ``circle``, or ``square`` marks or layered ``bar`` charts and ``1`` otherwise. - order : anyOf(None, boolean) - For line and trail marks, this ``order`` property can be set to ``null`` or - ``false`` to make the lines use the original order in the data sources. - orient : :class:`Orientation` - The orientation of a non-stacked bar, tick, area, and line charts. - The value is either horizontal (default) or vertical. - - - * For bar, rule and tick, this determines whether the size of the bar and tick - should be applied to x or y dimension. - * For area, this property determines the orient property of the Vega output. - * For line and trail marks, this property determines the sort order of the points in - the line - if ``config.sortLineBy`` is not specified. - For stacked charts, this is always determined by the orientation of the stack; - therefore explicitly specified value will be ignored. - radius : float - Polar coordinate radial offset, in pixels, of the text label from the origin - determined by the ``x`` and ``y`` properties. - shape : string - Shape of the point marks. 
Supported values include: - - - * plotting shapes: ``"circle"``, ``"square"``, ``"cross"``, ``"diamond"``, - ``"triangle-up"``, ``"triangle-down"``, ``"triangle-right"``, or - ``"triangle-left"``. - * the line symbol ``"stroke"`` - * centered directional shapes ``"arrow"``, ``"wedge"``, or ``"triangle"`` - * a custom `SVG path string - `__ (For correct - sizing, custom shape paths should be defined within a square bounding box with - coordinates ranging from -1 to 1 along both the x and y dimensions.) - - **Default value:** ``"circle"`` - size : float - Default size for marks. - - - * For ``point`` / ``circle`` / ``square``, this represents the pixel area of the - marks. For example: in the case of circles, the radius is determined in part by - the square root of the size value. - * For ``bar``, this represents the band size of the bar, in pixels. - * For ``text``, this represents the font size, in pixels. - - **Default value:** ``30`` for point, circle, square marks; ``rangeStep`` - 1 for bar - marks with discrete dimensions; ``5`` for bar marks with continuous dimensions; - ``11`` for text marks. - stroke : :class:`Color` - Default Stroke Color. This has higher precedence than ``config.color`` - - **Default value:** (None) - strokeCap : :class:`StrokeCap` - The stroke cap for line ending style. One of ``"butt"``, ``"round"``, or - ``"square"``. - - **Default value:** ``"square"`` - strokeDash : List(float) - An array of alternating stroke, space lengths for creating dashed or dotted lines. - strokeDashOffset : float - The offset (in pixels) into which to begin drawing with the stroke dash array. - strokeJoin : :class:`StrokeJoin` - The stroke line join method. One of ``"miter"``, ``"round"`` or ``"bevel"``. - - **Default value:** ``"miter"`` - strokeMiterLimit : float - The miter limit at which to bevel a line join. - strokeOpacity : float - The stroke opacity (value between [0,1]). - - **Default value:** ``1`` - strokeWidth : float - The stroke width, in pixels. - tension : float - Depending on the interpolation type, sets the tension parameter (for line and area - marks). - text : string - Placeholder text if the ``text`` channel is not specified - theta : float - Polar coordinate angle, in radians, of the text label from the origin determined by - the ``x`` and ``y`` properties. Values for ``theta`` follow the same convention of - ``arc`` mark ``startAngle`` and ``endAngle`` properties: angles are measured in - radians, with ``0`` indicating "north". - thickness : float - Thickness of the tick mark. - - **Default value:** ``1`` - tooltip : anyOf(:class:`Value`, :class:`TooltipContent`, None) - The tooltip text string to show upon mouse hover or an object defining which fields - should the tooltip be derived from. - - - * If ``tooltip`` is ``{"content": "encoding"}``, then all fields from ``encoding`` - will be used. - * If ``tooltip`` is ``{"content": "data"}``, then all fields that appear in the - highlighted data point will be used. - * If set to ``null``, then no tooltip will be used. - width : float - Width of the marks. - x : anyOf(float, enum('width')) - X coordinates of the marks, or width of horizontal ``"bar"`` and ``"area"`` without - specified ``x2`` or ``width``. - - The ``value`` of this channel can be a number or a string ``"width"`` for the width - of the plot. - x2 : anyOf(float, enum('width')) - X2 coordinates for ranged ``"area"``, ``"bar"``, ``"rect"``, and ``"rule"``. 
- - The ``value`` of this channel can be a number or a string ``"width"`` for the width - of the plot. - y : anyOf(float, enum('height')) - Y coordinates of the marks, or height of vertical ``"bar"`` and ``"area"`` without - specified ``y2`` or ``height``. - - The ``value`` of this channel can be a number or a string ``"height"`` for the - height of the plot. - y2 : anyOf(float, enum('height')) - Y2 coordinates for ranged ``"area"``, ``"bar"``, ``"rect"``, and ``"rule"``. - - The ``value`` of this channel can be a number or a string ``"height"`` for the - height of the plot. - """ - _schema = {'$ref': '#/definitions/TickConfig'} - - def __init__(self, align=Undefined, angle=Undefined, bandSize=Undefined, baseline=Undefined, - color=Undefined, cornerRadius=Undefined, cursor=Undefined, dir=Undefined, dx=Undefined, - dy=Undefined, ellipsis=Undefined, fill=Undefined, fillOpacity=Undefined, - filled=Undefined, font=Undefined, fontSize=Undefined, fontStyle=Undefined, - fontWeight=Undefined, height=Undefined, href=Undefined, interpolate=Undefined, - limit=Undefined, opacity=Undefined, order=Undefined, orient=Undefined, - radius=Undefined, shape=Undefined, size=Undefined, stroke=Undefined, - strokeCap=Undefined, strokeDash=Undefined, strokeDashOffset=Undefined, - strokeJoin=Undefined, strokeMiterLimit=Undefined, strokeOpacity=Undefined, - strokeWidth=Undefined, tension=Undefined, text=Undefined, theta=Undefined, - thickness=Undefined, tooltip=Undefined, width=Undefined, x=Undefined, x2=Undefined, - y=Undefined, y2=Undefined, **kwds): - super(TickConfig, self).__init__(align=align, angle=angle, bandSize=bandSize, baseline=baseline, - color=color, cornerRadius=cornerRadius, cursor=cursor, dir=dir, - dx=dx, dy=dy, ellipsis=ellipsis, fill=fill, - fillOpacity=fillOpacity, filled=filled, font=font, - fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - height=height, href=href, interpolate=interpolate, limit=limit, - opacity=opacity, order=order, orient=orient, radius=radius, - shape=shape, size=size, stroke=stroke, strokeCap=strokeCap, - strokeDash=strokeDash, strokeDashOffset=strokeDashOffset, - strokeJoin=strokeJoin, strokeMiterLimit=strokeMiterLimit, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, - tension=tension, text=text, theta=theta, thickness=thickness, - tooltip=tooltip, width=width, x=x, x2=x2, y=y, y2=y2, **kwds) - - -class TimeUnit(VegaLiteSchema): - """TimeUnit schema wrapper - - anyOf(:class:`SingleTimeUnit`, :class:`MultiTimeUnit`) - """ - _schema = {'$ref': '#/definitions/TimeUnit'} - - def __init__(self, *args, **kwds): - super(TimeUnit, self).__init__(*args, **kwds) - - -class MultiTimeUnit(TimeUnit): - """MultiTimeUnit schema wrapper - - anyOf(:class:`LocalMultiTimeUnit`, :class:`UtcMultiTimeUnit`) - """ - _schema = {'$ref': '#/definitions/MultiTimeUnit'} - - def __init__(self, *args, **kwds): - super(MultiTimeUnit, self).__init__(*args, **kwds) - - -class LocalMultiTimeUnit(MultiTimeUnit): - """LocalMultiTimeUnit schema wrapper - - enum('yearquarter', 'yearquartermonth', 'yearmonth', 'yearmonthdate', 'yearmonthdatehours', - 'yearmonthdatehoursminutes', 'yearmonthdatehoursminutesseconds', 'quartermonth', - 'monthdate', 'monthdatehours', 'hoursminutes', 'hoursminutesseconds', 'minutesseconds', - 'secondsmilliseconds') - """ - _schema = {'$ref': '#/definitions/LocalMultiTimeUnit'} - - def __init__(self, *args): - super(LocalMultiTimeUnit, self).__init__(*args) - - -class SingleTimeUnit(TimeUnit): - """SingleTimeUnit schema wrapper - - 
anyOf(:class:`LocalSingleTimeUnit`, :class:`UtcSingleTimeUnit`) - """ - _schema = {'$ref': '#/definitions/SingleTimeUnit'} - - def __init__(self, *args, **kwds): - super(SingleTimeUnit, self).__init__(*args, **kwds) - - -class LocalSingleTimeUnit(SingleTimeUnit): - """LocalSingleTimeUnit schema wrapper - - enum('year', 'quarter', 'month', 'day', 'date', 'hours', 'minutes', 'seconds', - 'milliseconds') - """ - _schema = {'$ref': '#/definitions/LocalSingleTimeUnit'} - - def __init__(self, *args): - super(LocalSingleTimeUnit, self).__init__(*args) - - -class TitleAnchor(VegaLiteSchema): - """TitleAnchor schema wrapper - - enum(None, 'start', 'middle', 'end') - """ - _schema = {'$ref': '#/definitions/TitleAnchor'} - - def __init__(self, *args): - super(TitleAnchor, self).__init__(*args) - - -class TitleConfig(VegaLiteSchema): - """TitleConfig schema wrapper - - Mapping(required=[]) - - Attributes - ---------- - - align : :class:`Align` - - anchor : :class:`TitleAnchor` - The anchor position for placing the title. One of ``"start"``, ``"middle"``, or - ``"end"``. For example, with an orientation of top these anchor positions map to a - left-, center-, or right-aligned title. - angle : float - Angle in degrees of title text. - baseline : :class:`TextBaseline` - Vertical text baseline for title text. One of ``"top"``, ``"middle"``, ``"bottom"``, - or ``"alphabetic"``. - color : :class:`Color` - Text color for title text. - dx : float - Delta offset for title text x-coordinate. - dy : float - Delta offset for title text y-coordinate. - font : string - Font name for title text. - fontSize : float - Font size in pixels for title text. - - **Default value:** ``10``. - fontStyle : :class:`FontStyle` - Font style for title text. - fontWeight : :class:`FontWeight` - Font weight for title text. - This can be either a string (e.g., ``"bold"``, ``"normal"`` ) or a number ( ``100``, - ``200``, ``300``, ..., ``900`` where ``"normal"`` = ``400`` and ``"bold"`` = ``700`` - ). - frame : :class:`TitleFrame` - The reference frame for the anchor position, one of ``"bounds"`` (to anchor relative - to the full bounding box) or ``"group"`` (to anchor relative to the group width or - height). - limit : float - The maximum allowed length in pixels of title text. - offset : float - The orthogonal offset in pixels by which to displace the title from its position - along the edge of the chart. 
- orient : :class:`TitleOrient` - Default title orientation ( ``"top"``, ``"bottom"``, ``"left"``, or ``"right"`` ) - """ - _schema = {'$ref': '#/definitions/TitleConfig'} - - def __init__(self, align=Undefined, anchor=Undefined, angle=Undefined, baseline=Undefined, - color=Undefined, dx=Undefined, dy=Undefined, font=Undefined, fontSize=Undefined, - fontStyle=Undefined, fontWeight=Undefined, frame=Undefined, limit=Undefined, - offset=Undefined, orient=Undefined, **kwds): - super(TitleConfig, self).__init__(align=align, anchor=anchor, angle=angle, baseline=baseline, - color=color, dx=dx, dy=dy, font=font, fontSize=fontSize, - fontStyle=fontStyle, fontWeight=fontWeight, frame=frame, - limit=limit, offset=offset, orient=orient, **kwds) - - -class TitleFrame(VegaLiteSchema): - """TitleFrame schema wrapper - - enum('bounds', 'group') - """ - _schema = {'$ref': '#/definitions/TitleFrame'} - - def __init__(self, *args): - super(TitleFrame, self).__init__(*args) - - -class TitleOrient(VegaLiteSchema): - """TitleOrient schema wrapper - - enum('none', 'left', 'right', 'top', 'bottom') - """ - _schema = {'$ref': '#/definitions/TitleOrient'} - - def __init__(self, *args): - super(TitleOrient, self).__init__(*args) - - -class TitleParams(VegaLiteSchema): - """TitleParams schema wrapper - - Mapping(required=[text]) - - Attributes - ---------- - - text : string - The title text. - align : :class:`Align` - - anchor : :class:`TitleAnchor` - The anchor position for placing the title. One of ``"start"``, ``"middle"``, or - ``"end"``. For example, with an orientation of top these anchor positions map to a - left-, center-, or right-aligned title. - - **Default value:** ``"middle"`` for `single - `__ and `layered - `__ views. - ``"start"`` for other composite views. - - **Note:** `For now `__, ``anchor`` is - customizable only for `single - `__ and `layered - `__ views. For other composite - views, ``anchor`` is always ``"start"``. - angle : float - Angle in degrees of title text. - baseline : :class:`TextBaseline` - Vertical text baseline for title text. One of ``"top"``, ``"middle"``, ``"bottom"``, - or ``"alphabetic"``. - color : :class:`Color` - Text color for title text. - dx : float - Delta offset for title text x-coordinate. - dy : float - Delta offset for title text y-coordinate. - font : string - Font name for title text. - fontSize : float - Font size in pixels for title text. - - **Default value:** ``10``. - fontStyle : :class:`FontStyle` - Font style for title text. - fontWeight : :class:`FontWeight` - Font weight for title text. - This can be either a string (e.g., ``"bold"``, ``"normal"`` ) or a number ( ``100``, - ``200``, ``300``, ..., ``900`` where ``"normal"`` = ``400`` and ``"bold"`` = ``700`` - ). - frame : :class:`TitleFrame` - The reference frame for the anchor position, one of ``"bounds"`` (to anchor relative - to the full bounding box) or ``"group"`` (to anchor relative to the group width or - height). - limit : float - The maximum allowed length in pixels of title text. - offset : float - The orthogonal offset in pixels by which to displace the title from its position - along the edge of the chart. - orient : :class:`TitleOrient` - Default title orientation ( ``"top"``, ``"bottom"``, ``"left"``, or ``"right"`` ) - style : anyOf(string, List(string)) - A `mark style property `__ - to apply to the title text mark. - - **Default value:** ``"group-title"``. 
- zindex : float - The integer z-index indicating the layering of the title group relative to other - axis, mark and legend groups. - - **Default value:** ``0``. - """ - _schema = {'$ref': '#/definitions/TitleParams'} - - def __init__(self, text=Undefined, align=Undefined, anchor=Undefined, angle=Undefined, - baseline=Undefined, color=Undefined, dx=Undefined, dy=Undefined, font=Undefined, - fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, frame=Undefined, - limit=Undefined, offset=Undefined, orient=Undefined, style=Undefined, zindex=Undefined, - **kwds): - super(TitleParams, self).__init__(text=text, align=align, anchor=anchor, angle=angle, - baseline=baseline, color=color, dx=dx, dy=dy, font=font, - fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - frame=frame, limit=limit, offset=offset, orient=orient, - style=style, zindex=zindex, **kwds) - - -class TooltipContent(VegaLiteSchema): - """TooltipContent schema wrapper - - Mapping(required=[content]) - - Attributes - ---------- - - content : enum('encoding', 'data') - - """ - _schema = {'$ref': '#/definitions/TooltipContent'} - - def __init__(self, content=Undefined, **kwds): - super(TooltipContent, self).__init__(content=content, **kwds) - - -class TopLevelSpec(VegaLiteSchema): - """TopLevelSpec schema wrapper - - anyOf(:class:`TopLevelUnitSpec`, :class:`TopLevelFacetSpec`, :class:`TopLevelLayerSpec`, - :class:`TopLevelRepeatSpec`, :class:`TopLevelConcatSpec`, :class:`TopLevelVConcatSpec`, - :class:`TopLevelHConcatSpec`) - A Vega-Lite top-level specification. - This is the root class for all Vega-Lite specifications. - (The json schema is generated from this type.) - """ - _schema = {'$ref': '#/definitions/TopLevelSpec'} - - def __init__(self, *args, **kwds): - super(TopLevelSpec, self).__init__(*args, **kwds) - - -class TopLevelConcatSpec(TopLevelSpec): - """TopLevelConcatSpec schema wrapper - - Mapping(required=[concat]) - - Attributes - ---------- - - concat : List(:class:`Spec`) - A list of views to be concatenated. - align : anyOf(:class:`LayoutAlign`, :class:`RowColLayoutAlign`) - The alignment to apply to grid rows and columns. - The supported string values are ``"all"``, ``"each"``, and ``"none"``. - - - * For ``"none"``, a flow layout will be used, in which adjacent subviews are simply - placed one after the other. - * For ``"each"``, subviews will be aligned into a clean grid structure, but each row - or column may be of variable size. - * For ``"all"``, subviews will be aligned and each row or column will be sized - identically based on the maximum observed size. String values for this property - will be applied to both grid rows and columns. - - Alternatively, an object value of the form ``{"row": string, "column": string}`` can - be used to supply different alignments for rows and columns. - - **Default value:** ``"all"``. - autosize : anyOf(:class:`AutosizeType`, :class:`AutoSizeParams`) - Sets how the visualization size should be determined. If a string, should be one of - ``"pad"``, ``"fit"`` or ``"none"``. - Object values can additionally specify parameters for content sizing and automatic - resizing. - ``"fit"`` is only supported for single and layered views that don't use - ``rangeStep``. - - **Default value** : ``pad`` - background : string - CSS color property to use as the background of the entire view. - - **Default value:** none (transparent) - bounds : enum('full', 'flush') - The bounds calculation method to use for determining the extent of a sub-plot. 
One - of ``full`` (the default) or ``flush``. - - - * If set to ``full``, the entire calculated bounds (including axes, title, and - legend) will be used. - * If set to ``flush``, only the specified width and height values for the sub-view - will be used. The ``flush`` setting can be useful when attempting to place - sub-plots without axes or legends into a uniform grid structure. - - **Default value:** ``"full"`` - center : anyOf(boolean, :class:`RowColboolean`) - Boolean flag indicating if subviews should be centered relative to their respective - rows or columns. - - An object value of the form ``{"row": boolean, "column": boolean}`` can be used to - supply different centering values for rows and columns. - - **Default value:** ``false`` - columns : float - The number of columns to include in the view composition layout. - - **Default value** : ``undefined`` -- An infinite number of columns (a single row) - will be assumed. This is equivalent to - ``hconcat`` (for ``concat`` ) and to using the ``column`` channel (for ``facet`` and - ``repeat`` ). - - **Note** : - - 1) This property is only for: - - - * the general (wrappable) ``concat`` operator (not ``hconcat`` / ``vconcat`` ) - * the ``facet`` and ``repeat`` operator with one field/repetition definition - (without row/column nesting) - - 2) Setting the ``columns`` to ``1`` is equivalent to ``vconcat`` (for ``concat`` ) - and to using the ``row`` channel (for ``facet`` and ``repeat`` ). - config : :class:`Config` - Vega-Lite configuration object. This property can only be defined at the top-level - of a specification. - data : anyOf(:class:`Data`, None) - An object describing the data source. Set to ``null`` to ignore the parent's data - source. If no data is set, it is derived from the parent. - datasets : :class:`Datasets` - A global data store for named datasets. This is a mapping from names to inline - datasets. - This can be an array of objects or primitive values or a string. Arrays of primitive - values are ingested as objects with a ``data`` property. - description : string - Description of this mark for commenting purpose. - name : string - Name of the visualization for later reference. - padding : :class:`Padding` - The default visualization padding, in pixels, from the edge of the visualization - canvas to the data rectangle. If a number, specifies padding for all sides. - If an object, the value should have the format ``{"left": 5, "top": 5, "right": 5, - "bottom": 5}`` to specify padding for each side of the visualization. - - **Default value** : ``5`` - resolve : :class:`Resolve` - Scale, axis, and legend resolutions for view composition specifications. - spacing : anyOf(float, :class:`RowColnumber`) - The spacing in pixels between sub-views of the composition operator. - An object of the form ``{"row": number, "column": number}`` can be used to set - different spacing values for rows and columns. - - **Default value** : Depends on ``"spacing"`` property of `the view composition - configuration `__ ( - ``20`` by default) - title : anyOf(string, :class:`TitleParams`) - Title for the plot. - transform : List(:class:`Transform`) - An array of data transformations such as filter and new field calculation. - usermeta : Mapping(required=[]) - Optional metadata that will be passed to Vega. - This object is completely ignored by Vega and Vega-Lite and can be used for custom - metadata. - $schema : string - URL to `JSON schema `__ for a Vega-Lite specification. 
- Unless you have a reason to change this, use - ``https://vega.github.io/schema/vega-lite/v3.json``. Setting the ``$schema`` - property allows automatic validation and autocomplete in editors that support JSON - schema. - """ - _schema = {'$ref': '#/definitions/TopLevelConcatSpec'} - - def __init__(self, concat=Undefined, align=Undefined, autosize=Undefined, background=Undefined, - bounds=Undefined, center=Undefined, columns=Undefined, config=Undefined, - data=Undefined, datasets=Undefined, description=Undefined, name=Undefined, - padding=Undefined, resolve=Undefined, spacing=Undefined, title=Undefined, - transform=Undefined, usermeta=Undefined, **kwds): - super(TopLevelConcatSpec, self).__init__(concat=concat, align=align, autosize=autosize, - background=background, bounds=bounds, center=center, - columns=columns, config=config, data=data, - datasets=datasets, description=description, name=name, - padding=padding, resolve=resolve, spacing=spacing, - title=title, transform=transform, usermeta=usermeta, - **kwds) - - -class TopLevelFacetSpec(TopLevelSpec): - """TopLevelFacetSpec schema wrapper - - Mapping(required=[data, facet, spec]) - - Attributes - ---------- - - data : anyOf(:class:`Data`, None) - An object describing the data source. Set to ``null`` to ignore the parent's data - source. If no data is set, it is derived from the parent. - facet : anyOf(:class:`FacetFieldDef`, :class:`FacetMapping`) - Definition for how to facet the data. One of: - 1) `a field definition for faceting the plot by one field - `__ - 2) `An object that maps row and column channels to their field definitions - `__ - spec : anyOf(:class:`LayerSpec`, :class:`FacetedUnitSpec`) - A specification of the view that gets faceted. - align : anyOf(:class:`LayoutAlign`, :class:`RowColLayoutAlign`) - The alignment to apply to grid rows and columns. - The supported string values are ``"all"``, ``"each"``, and ``"none"``. - - - * For ``"none"``, a flow layout will be used, in which adjacent subviews are simply - placed one after the other. - * For ``"each"``, subviews will be aligned into a clean grid structure, but each row - or column may be of variable size. - * For ``"all"``, subviews will be aligned and each row or column will be sized - identically based on the maximum observed size. String values for this property - will be applied to both grid rows and columns. - - Alternatively, an object value of the form ``{"row": string, "column": string}`` can - be used to supply different alignments for rows and columns. - - **Default value:** ``"all"``. - autosize : anyOf(:class:`AutosizeType`, :class:`AutoSizeParams`) - Sets how the visualization size should be determined. If a string, should be one of - ``"pad"``, ``"fit"`` or ``"none"``. - Object values can additionally specify parameters for content sizing and automatic - resizing. - ``"fit"`` is only supported for single and layered views that don't use - ``rangeStep``. - - **Default value** : ``pad`` - background : string - CSS color property to use as the background of the entire view. - - **Default value:** none (transparent) - bounds : enum('full', 'flush') - The bounds calculation method to use for determining the extent of a sub-plot. One - of ``full`` (the default) or ``flush``. - - - * If set to ``full``, the entire calculated bounds (including axes, title, and - legend) will be used. - * If set to ``flush``, only the specified width and height values for the sub-view - will be used. 
The ``flush`` setting can be useful when attempting to place - sub-plots without axes or legends into a uniform grid structure. - - **Default value:** ``"full"`` - center : anyOf(boolean, :class:`RowColboolean`) - Boolean flag indicating if subviews should be centered relative to their respective - rows or columns. - - An object value of the form ``{"row": boolean, "column": boolean}`` can be used to - supply different centering values for rows and columns. - - **Default value:** ``false`` - columns : float - The number of columns to include in the view composition layout. - - **Default value** : ``undefined`` -- An infinite number of columns (a single row) - will be assumed. This is equivalent to - ``hconcat`` (for ``concat`` ) and to using the ``column`` channel (for ``facet`` and - ``repeat`` ). - - **Note** : - - 1) This property is only for: - - - * the general (wrappable) ``concat`` operator (not ``hconcat`` / ``vconcat`` ) - * the ``facet`` and ``repeat`` operator with one field/repetition definition - (without row/column nesting) - - 2) Setting the ``columns`` to ``1`` is equivalent to ``vconcat`` (for ``concat`` ) - and to using the ``row`` channel (for ``facet`` and ``repeat`` ). - config : :class:`Config` - Vega-Lite configuration object. This property can only be defined at the top-level - of a specification. - datasets : :class:`Datasets` - A global data store for named datasets. This is a mapping from names to inline - datasets. - This can be an array of objects or primitive values or a string. Arrays of primitive - values are ingested as objects with a ``data`` property. - description : string - Description of this mark for commenting purpose. - name : string - Name of the visualization for later reference. - padding : :class:`Padding` - The default visualization padding, in pixels, from the edge of the visualization - canvas to the data rectangle. If a number, specifies padding for all sides. - If an object, the value should have the format ``{"left": 5, "top": 5, "right": 5, - "bottom": 5}`` to specify padding for each side of the visualization. - - **Default value** : ``5`` - resolve : :class:`Resolve` - Scale, axis, and legend resolutions for view composition specifications. - spacing : anyOf(float, :class:`RowColnumber`) - The spacing in pixels between sub-views of the composition operator. - An object of the form ``{"row": number, "column": number}`` can be used to set - different spacing values for rows and columns. - - **Default value** : Depends on ``"spacing"`` property of `the view composition - configuration `__ ( - ``20`` by default) - title : anyOf(string, :class:`TitleParams`) - Title for the plot. - transform : List(:class:`Transform`) - An array of data transformations such as filter and new field calculation. - usermeta : Mapping(required=[]) - Optional metadata that will be passed to Vega. - This object is completely ignored by Vega and Vega-Lite and can be used for custom - metadata. - $schema : string - URL to `JSON schema `__ for a Vega-Lite specification. - Unless you have a reason to change this, use - ``https://vega.github.io/schema/vega-lite/v3.json``. Setting the ``$schema`` - property allows automatic validation and autocomplete in editors that support JSON - schema. 
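-
-     A minimal usage sketch (illustrative only, not generated from the schema): in
-     Altair, a top-level facet spec is produced by ``Chart.facet``. The DataFrame
-     ``df`` and its ``group`` column below are assumptions for illustration::
-
-         import altair as alt
-         import pandas as pd
-
-         # hypothetical example data
-         df = pd.DataFrame({'x': [1, 2, 3, 4],
-                            'y': [3, 1, 4, 2],
-                            'group': ['a', 'a', 'b', 'b']})
-
-         # one sub-plot per value of 'group', laid out as columns
-         chart = alt.Chart(df).mark_point().encode(
-             x='x:Q',
-             y='y:Q',
-         ).facet(column='group:N')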
- """ - _schema = {'$ref': '#/definitions/TopLevelFacetSpec'} - - def __init__(self, data=Undefined, facet=Undefined, spec=Undefined, align=Undefined, - autosize=Undefined, background=Undefined, bounds=Undefined, center=Undefined, - columns=Undefined, config=Undefined, datasets=Undefined, description=Undefined, - name=Undefined, padding=Undefined, resolve=Undefined, spacing=Undefined, - title=Undefined, transform=Undefined, usermeta=Undefined, **kwds): - super(TopLevelFacetSpec, self).__init__(data=data, facet=facet, spec=spec, align=align, - autosize=autosize, background=background, bounds=bounds, - center=center, columns=columns, config=config, - datasets=datasets, description=description, name=name, - padding=padding, resolve=resolve, spacing=spacing, - title=title, transform=transform, usermeta=usermeta, - **kwds) - - -class TopLevelHConcatSpec(TopLevelSpec): - """TopLevelHConcatSpec schema wrapper - - Mapping(required=[hconcat]) - - Attributes - ---------- - - hconcat : List(:class:`Spec`) - A list of views to be concatenated and put into a row. - autosize : anyOf(:class:`AutosizeType`, :class:`AutoSizeParams`) - Sets how the visualization size should be determined. If a string, should be one of - ``"pad"``, ``"fit"`` or ``"none"``. - Object values can additionally specify parameters for content sizing and automatic - resizing. - ``"fit"`` is only supported for single and layered views that don't use - ``rangeStep``. - - **Default value** : ``pad`` - background : string - CSS color property to use as the background of the entire view. - - **Default value:** none (transparent) - bounds : enum('full', 'flush') - The bounds calculation method to use for determining the extent of a sub-plot. One - of ``full`` (the default) or ``flush``. - - - * If set to ``full``, the entire calculated bounds (including axes, title, and - legend) will be used. - * If set to ``flush``, only the specified width and height values for the sub-view - will be used. The ``flush`` setting can be useful when attempting to place - sub-plots without axes or legends into a uniform grid structure. - - **Default value:** ``"full"`` - center : boolean - Boolean flag indicating if subviews should be centered relative to their respective - rows or columns. - - **Default value:** ``false`` - config : :class:`Config` - Vega-Lite configuration object. This property can only be defined at the top-level - of a specification. - data : anyOf(:class:`Data`, None) - An object describing the data source. Set to ``null`` to ignore the parent's data - source. If no data is set, it is derived from the parent. - datasets : :class:`Datasets` - A global data store for named datasets. This is a mapping from names to inline - datasets. - This can be an array of objects or primitive values or a string. Arrays of primitive - values are ingested as objects with a ``data`` property. - description : string - Description of this mark for commenting purpose. - name : string - Name of the visualization for later reference. - padding : :class:`Padding` - The default visualization padding, in pixels, from the edge of the visualization - canvas to the data rectangle. If a number, specifies padding for all sides. - If an object, the value should have the format ``{"left": 5, "top": 5, "right": 5, - "bottom": 5}`` to specify padding for each side of the visualization. - - **Default value** : ``5`` - resolve : :class:`Resolve` - Scale, axis, and legend resolutions for view composition specifications. 
- spacing : float - The spacing in pixels between sub-views of the concat operator. - - **Default value** : ``10`` - title : anyOf(string, :class:`TitleParams`) - Title for the plot. - transform : List(:class:`Transform`) - An array of data transformations such as filter and new field calculation. - usermeta : Mapping(required=[]) - Optional metadata that will be passed to Vega. - This object is completely ignored by Vega and Vega-Lite and can be used for custom - metadata. - $schema : string - URL to `JSON schema `__ for a Vega-Lite specification. - Unless you have a reason to change this, use - ``https://vega.github.io/schema/vega-lite/v3.json``. Setting the ``$schema`` - property allows automatic validation and autocomplete in editors that support JSON - schema. - """ - _schema = {'$ref': '#/definitions/TopLevelHConcatSpec'} - - def __init__(self, hconcat=Undefined, autosize=Undefined, background=Undefined, bounds=Undefined, - center=Undefined, config=Undefined, data=Undefined, datasets=Undefined, - description=Undefined, name=Undefined, padding=Undefined, resolve=Undefined, - spacing=Undefined, title=Undefined, transform=Undefined, usermeta=Undefined, **kwds): - super(TopLevelHConcatSpec, self).__init__(hconcat=hconcat, autosize=autosize, - background=background, bounds=bounds, center=center, - config=config, data=data, datasets=datasets, - description=description, name=name, padding=padding, - resolve=resolve, spacing=spacing, title=title, - transform=transform, usermeta=usermeta, **kwds) - - -class TopLevelLayerSpec(TopLevelSpec): - """TopLevelLayerSpec schema wrapper - - Mapping(required=[layer]) - - Attributes - ---------- - - layer : List(anyOf(:class:`LayerSpec`, :class:`UnitSpec`)) - Layer or single view specifications to be layered. - - **Note** : Specifications inside ``layer`` cannot use ``row`` and ``column`` - channels as layering facet specifications is not allowed. Instead, use the `facet - operator `__ and place a layer - inside a facet. - autosize : anyOf(:class:`AutosizeType`, :class:`AutoSizeParams`) - Sets how the visualization size should be determined. If a string, should be one of - ``"pad"``, ``"fit"`` or ``"none"``. - Object values can additionally specify parameters for content sizing and automatic - resizing. - ``"fit"`` is only supported for single and layered views that don't use - ``rangeStep``. - - **Default value** : ``pad`` - background : string - CSS color property to use as the background of the entire view. - - **Default value:** none (transparent) - config : :class:`Config` - Vega-Lite configuration object. This property can only be defined at the top-level - of a specification. - data : anyOf(:class:`Data`, None) - An object describing the data source. Set to ``null`` to ignore the parent's data - source. If no data is set, it is derived from the parent. - datasets : :class:`Datasets` - A global data store for named datasets. This is a mapping from names to inline - datasets. - This can be an array of objects or primitive values or a string. Arrays of primitive - values are ingested as objects with a ``data`` property. - description : string - Description of this mark for commenting purpose. - encoding : :class:`Encoding` - A shared key-value mapping between encoding channels and definition of fields in the - underlying layers. - height : float - The height of a visualization. 
- - **Default value:** - - - * If a view's `autosize - `__ type is ``"fit"`` or - its y-channel has a `continuous scale - `__, the height will - be the value of `config.view.height - `__. - * For y-axis with a band or point scale: if `rangeStep - `__ is a numeric value or - unspecified, the height is `determined by the range step, paddings, and the - cardinality of the field mapped to y-channel - `__. Otherwise, if the - ``rangeStep`` is ``null``, the height will be the value of `config.view.height - `__. - * If no field is mapped to ``y`` channel, the ``height`` will be the value of - ``rangeStep``. - - **Note** : For plots with `row and column channels - `__, this represents the - height of a single view. - - **See also:** The documentation for `width and height - `__ contains more examples. - name : string - Name of the visualization for later reference. - padding : :class:`Padding` - The default visualization padding, in pixels, from the edge of the visualization - canvas to the data rectangle. If a number, specifies padding for all sides. - If an object, the value should have the format ``{"left": 5, "top": 5, "right": 5, - "bottom": 5}`` to specify padding for each side of the visualization. - - **Default value** : ``5`` - projection : :class:`Projection` - An object defining properties of the geographic projection shared by underlying - layers. - resolve : :class:`Resolve` - Scale, axis, and legend resolutions for view composition specifications. - title : anyOf(string, :class:`TitleParams`) - Title for the plot. - transform : List(:class:`Transform`) - An array of data transformations such as filter and new field calculation. - usermeta : Mapping(required=[]) - Optional metadata that will be passed to Vega. - This object is completely ignored by Vega and Vega-Lite and can be used for custom - metadata. - view : :class:`ViewBackground` - An object defining the view background's fill and stroke. - - **Default value:** none (transparent) - width : float - The width of a visualization. - - **Default value:** This will be determined by the following rules: - - - * If a view's `autosize - `__ type is ``"fit"`` or - its x-channel has a `continuous scale - `__, the width will - be the value of `config.view.width - `__. - * For x-axis with a band or point scale: if `rangeStep - `__ is a numeric value or - unspecified, the width is `determined by the range step, paddings, and the - cardinality of the field mapped to x-channel - `__. Otherwise, if the - ``rangeStep`` is ``null``, the width will be the value of `config.view.width - `__. - * If no field is mapped to ``x`` channel, the ``width`` will be the value of - `config.scale.textXRangeStep - `__ for - ``text`` mark and the value of ``rangeStep`` for other marks. - - **Note:** For plots with `row and column channels - `__, this represents the - width of a single view. - - **See also:** The documentation for `width and height - `__ contains more examples. - $schema : string - URL to `JSON schema `__ for a Vega-Lite specification. - Unless you have a reason to change this, use - ``https://vega.github.io/schema/vega-lite/v3.json``. Setting the ``$schema`` - property allows automatic validation and autocomplete in editors that support JSON - schema. 
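-
-     A minimal usage sketch (illustrative only, not generated from the schema): in
-     Altair, a top-level layer spec is produced by ``alt.layer`` or the ``+``
-     operator. The DataFrame ``df`` is an assumption for illustration::
-
-         import altair as alt
-         import pandas as pd
-
-         df = pd.DataFrame({'x': [0, 1, 2, 3, 4], 'y': [2, 5, 3, 6, 4]})
-
-         # the shared encoding lives on the base; each layer adds a mark
-         base = alt.Chart(df).encode(x='x:Q', y='y:Q')
-         layered = base.mark_line() + base.mark_point()  # same as alt.layer(...)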
- """ - _schema = {'$ref': '#/definitions/TopLevelLayerSpec'} - - def __init__(self, layer=Undefined, autosize=Undefined, background=Undefined, config=Undefined, - data=Undefined, datasets=Undefined, description=Undefined, encoding=Undefined, - height=Undefined, name=Undefined, padding=Undefined, projection=Undefined, - resolve=Undefined, title=Undefined, transform=Undefined, usermeta=Undefined, - view=Undefined, width=Undefined, **kwds): - super(TopLevelLayerSpec, self).__init__(layer=layer, autosize=autosize, background=background, - config=config, data=data, datasets=datasets, - description=description, encoding=encoding, - height=height, name=name, padding=padding, - projection=projection, resolve=resolve, title=title, - transform=transform, usermeta=usermeta, view=view, - width=width, **kwds) - - -class TopLevelRepeatSpec(TopLevelSpec): - """TopLevelRepeatSpec schema wrapper - - Mapping(required=[repeat, spec]) - - Attributes - ---------- - - repeat : anyOf(List(string), :class:`RepeatMapping`) - Definition for fields to be repeated. One of: - 1) An array of fields to be repeated. If ``"repeat"`` is an array, the field can be - referred using ``{"repeat": "repeat"}`` - 2) An object that mapped ``"row"`` and/or ``"column"`` to the listed of fields to be - repeated along the particular orientations. The objects ``{"repeat": "row"}`` and - ``{"repeat": "column"}`` can be used to refer to the repeated field respectively. - spec : :class:`Spec` - A specification of the view that gets repeated. - align : anyOf(:class:`LayoutAlign`, :class:`RowColLayoutAlign`) - The alignment to apply to grid rows and columns. - The supported string values are ``"all"``, ``"each"``, and ``"none"``. - - - * For ``"none"``, a flow layout will be used, in which adjacent subviews are simply - placed one after the other. - * For ``"each"``, subviews will be aligned into a clean grid structure, but each row - or column may be of variable size. - * For ``"all"``, subviews will be aligned and each row or column will be sized - identically based on the maximum observed size. String values for this property - will be applied to both grid rows and columns. - - Alternatively, an object value of the form ``{"row": string, "column": string}`` can - be used to supply different alignments for rows and columns. - - **Default value:** ``"all"``. - autosize : anyOf(:class:`AutosizeType`, :class:`AutoSizeParams`) - Sets how the visualization size should be determined. If a string, should be one of - ``"pad"``, ``"fit"`` or ``"none"``. - Object values can additionally specify parameters for content sizing and automatic - resizing. - ``"fit"`` is only supported for single and layered views that don't use - ``rangeStep``. - - **Default value** : ``pad`` - background : string - CSS color property to use as the background of the entire view. - - **Default value:** none (transparent) - bounds : enum('full', 'flush') - The bounds calculation method to use for determining the extent of a sub-plot. One - of ``full`` (the default) or ``flush``. - - - * If set to ``full``, the entire calculated bounds (including axes, title, and - legend) will be used. - * If set to ``flush``, only the specified width and height values for the sub-view - will be used. The ``flush`` setting can be useful when attempting to place - sub-plots without axes or legends into a uniform grid structure. 
- - **Default value:** ``"full"`` - center : anyOf(boolean, :class:`RowColboolean`) - Boolean flag indicating if subviews should be centered relative to their respective - rows or columns. - - An object value of the form ``{"row": boolean, "column": boolean}`` can be used to - supply different centering values for rows and columns. - - **Default value:** ``false`` - columns : float - The number of columns to include in the view composition layout. - - **Default value** : ``undefined`` -- An infinite number of columns (a single row) - will be assumed. This is equivalent to - ``hconcat`` (for ``concat`` ) and to using the ``column`` channel (for ``facet`` and - ``repeat`` ). - - **Note** : - - 1) This property is only for: - - - * the general (wrappable) ``concat`` operator (not ``hconcat`` / ``vconcat`` ) - * the ``facet`` and ``repeat`` operator with one field/repetition definition - (without row/column nesting) - - 2) Setting the ``columns`` to ``1`` is equivalent to ``vconcat`` (for ``concat`` ) - and to using the ``row`` channel (for ``facet`` and ``repeat`` ). - config : :class:`Config` - Vega-Lite configuration object. This property can only be defined at the top-level - of a specification. - data : anyOf(:class:`Data`, None) - An object describing the data source. Set to ``null`` to ignore the parent's data - source. If no data is set, it is derived from the parent. - datasets : :class:`Datasets` - A global data store for named datasets. This is a mapping from names to inline - datasets. - This can be an array of objects or primitive values or a string. Arrays of primitive - values are ingested as objects with a ``data`` property. - description : string - Description of this mark for commenting purpose. - name : string - Name of the visualization for later reference. - padding : :class:`Padding` - The default visualization padding, in pixels, from the edge of the visualization - canvas to the data rectangle. If a number, specifies padding for all sides. - If an object, the value should have the format ``{"left": 5, "top": 5, "right": 5, - "bottom": 5}`` to specify padding for each side of the visualization. - - **Default value** : ``5`` - resolve : :class:`Resolve` - Scale, axis, and legend resolutions for view composition specifications. - spacing : anyOf(float, :class:`RowColnumber`) - The spacing in pixels between sub-views of the composition operator. - An object of the form ``{"row": number, "column": number}`` can be used to set - different spacing values for rows and columns. - - **Default value** : Depends on ``"spacing"`` property of `the view composition - configuration `__ ( - ``20`` by default) - title : anyOf(string, :class:`TitleParams`) - Title for the plot. - transform : List(:class:`Transform`) - An array of data transformations such as filter and new field calculation. - usermeta : Mapping(required=[]) - Optional metadata that will be passed to Vega. - This object is completely ignored by Vega and Vega-Lite and can be used for custom - metadata. - $schema : string - URL to `JSON schema `__ for a Vega-Lite specification. - Unless you have a reason to change this, use - ``https://vega.github.io/schema/vega-lite/v3.json``. Setting the ``$schema`` - property allows automatic validation and autocomplete in editors that support JSON - schema. 
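-
-     A minimal usage sketch (illustrative only, not generated from the schema): in
-     Altair, a top-level repeat spec comes from ``Chart.repeat`` together with
-     ``alt.repeat`` references in the encoding. The DataFrame ``df`` and its
-     columns ``a``, ``b``, ``c`` are assumptions for illustration::
-
-         import altair as alt
-         import pandas as pd
-
-         df = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6], 'c': [7, 8, 9]})
-
-         # each cell of the grid binds a different repeated field to x/y
-         chart = alt.Chart(df).mark_point().encode(
-             x=alt.X(alt.repeat('column'), type='quantitative'),
-             y=alt.Y(alt.repeat('row'), type='quantitative'),
-         ).repeat(row=['a', 'b'], column=['b', 'c'])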
- """ - _schema = {'$ref': '#/definitions/TopLevelRepeatSpec'} - - def __init__(self, repeat=Undefined, spec=Undefined, align=Undefined, autosize=Undefined, - background=Undefined, bounds=Undefined, center=Undefined, columns=Undefined, - config=Undefined, data=Undefined, datasets=Undefined, description=Undefined, - name=Undefined, padding=Undefined, resolve=Undefined, spacing=Undefined, - title=Undefined, transform=Undefined, usermeta=Undefined, **kwds): - super(TopLevelRepeatSpec, self).__init__(repeat=repeat, spec=spec, align=align, - autosize=autosize, background=background, - bounds=bounds, center=center, columns=columns, - config=config, data=data, datasets=datasets, - description=description, name=name, padding=padding, - resolve=resolve, spacing=spacing, title=title, - transform=transform, usermeta=usermeta, **kwds) - - -class TopLevelUnitSpec(TopLevelSpec): - """TopLevelUnitSpec schema wrapper - - Mapping(required=[data, mark]) - - Attributes - ---------- - - data : anyOf(:class:`Data`, None) - An object describing the data source. Set to ``null`` to ignore the parent's data - source. If no data is set, it is derived from the parent. - mark : :class:`AnyMark` - A string describing the mark type (one of ``"bar"``, ``"circle"``, ``"square"``, - ``"tick"``, ``"line"``, - ``"area"``, ``"point"``, ``"rule"``, ``"geoshape"``, and ``"text"`` ) or a `mark - definition object `__. - align : anyOf(:class:`LayoutAlign`, :class:`RowColLayoutAlign`) - The alignment to apply to grid rows and columns. - The supported string values are ``"all"``, ``"each"``, and ``"none"``. - - - * For ``"none"``, a flow layout will be used, in which adjacent subviews are simply - placed one after the other. - * For ``"each"``, subviews will be aligned into a clean grid structure, but each row - or column may be of variable size. - * For ``"all"``, subviews will be aligned and each row or column will be sized - identically based on the maximum observed size. String values for this property - will be applied to both grid rows and columns. - - Alternatively, an object value of the form ``{"row": string, "column": string}`` can - be used to supply different alignments for rows and columns. - - **Default value:** ``"all"``. - autosize : anyOf(:class:`AutosizeType`, :class:`AutoSizeParams`) - Sets how the visualization size should be determined. If a string, should be one of - ``"pad"``, ``"fit"`` or ``"none"``. - Object values can additionally specify parameters for content sizing and automatic - resizing. - ``"fit"`` is only supported for single and layered views that don't use - ``rangeStep``. - - **Default value** : ``pad`` - background : string - CSS color property to use as the background of the entire view. - - **Default value:** none (transparent) - bounds : enum('full', 'flush') - The bounds calculation method to use for determining the extent of a sub-plot. One - of ``full`` (the default) or ``flush``. - - - * If set to ``full``, the entire calculated bounds (including axes, title, and - legend) will be used. - * If set to ``flush``, only the specified width and height values for the sub-view - will be used. The ``flush`` setting can be useful when attempting to place - sub-plots without axes or legends into a uniform grid structure. - - **Default value:** ``"full"`` - center : anyOf(boolean, :class:`RowColboolean`) - Boolean flag indicating if subviews should be centered relative to their respective - rows or columns. 
- - An object value of the form ``{"row": boolean, "column": boolean}`` can be used to - supply different centering values for rows and columns. - - **Default value:** ``false`` - columns : float - The number of columns to include in the view composition layout. - - **Default value** : ``undefined`` -- An infinite number of columns (a single row) - will be assumed. This is equivalent to - ``hconcat`` (for ``concat`` ) and to using the ``column`` channel (for ``facet`` and - ``repeat`` ). - - **Note** : - - 1) This property is only for: - - - * the general (wrappable) ``concat`` operator (not ``hconcat`` / ``vconcat`` ) - * the ``facet`` and ``repeat`` operator with one field/repetition definition - (without row/column nesting) - - 2) Setting the ``columns`` to ``1`` is equivalent to ``vconcat`` (for ``concat`` ) - and to using the ``row`` channel (for ``facet`` and ``repeat`` ). - config : :class:`Config` - Vega-Lite configuration object. This property can only be defined at the top-level - of a specification. - datasets : :class:`Datasets` - A global data store for named datasets. This is a mapping from names to inline - datasets. - This can be an array of objects or primitive values or a string. Arrays of primitive - values are ingested as objects with a ``data`` property. - description : string - Description of this mark for commenting purpose. - encoding : :class:`FacetedEncoding` - A key-value mapping between encoding channels and definition of fields. - height : float - The height of a visualization. - - **Default value:** - - - * If a view's `autosize - `__ type is ``"fit"`` or - its y-channel has a `continuous scale - `__, the height will - be the value of `config.view.height - `__. - * For y-axis with a band or point scale: if `rangeStep - `__ is a numeric value or - unspecified, the height is `determined by the range step, paddings, and the - cardinality of the field mapped to y-channel - `__. Otherwise, if the - ``rangeStep`` is ``null``, the height will be the value of `config.view.height - `__. - * If no field is mapped to ``y`` channel, the ``height`` will be the value of - ``rangeStep``. - - **Note** : For plots with `row and column channels - `__, this represents the - height of a single view. - - **See also:** The documentation for `width and height - `__ contains more examples. - name : string - Name of the visualization for later reference. - padding : :class:`Padding` - The default visualization padding, in pixels, from the edge of the visualization - canvas to the data rectangle. If a number, specifies padding for all sides. - If an object, the value should have the format ``{"left": 5, "top": 5, "right": 5, - "bottom": 5}`` to specify padding for each side of the visualization. - - **Default value** : ``5`` - projection : :class:`Projection` - An object defining properties of geographic projection, which will be applied to - ``shape`` path for ``"geoshape"`` marks - and to ``latitude`` and ``"longitude"`` channels for other marks. - resolve : :class:`Resolve` - Scale, axis, and legend resolutions for view composition specifications. - selection : Mapping(required=[]) - A key-value mapping between selection names and definitions. - spacing : anyOf(float, :class:`RowColnumber`) - The spacing in pixels between sub-views of the composition operator. - An object of the form ``{"row": number, "column": number}`` can be used to set - different spacing values for rows and columns. 
- - **Default value** : Depends on ``"spacing"`` property of `the view composition - configuration `__ ( - ``20`` by default) - title : anyOf(string, :class:`TitleParams`) - Title for the plot. - transform : List(:class:`Transform`) - An array of data transformations such as filter and new field calculation. - usermeta : Mapping(required=[]) - Optional metadata that will be passed to Vega. - This object is completely ignored by Vega and Vega-Lite and can be used for custom - metadata. - view : :class:`ViewBackground` - An object defining the view background's fill and stroke. - - **Default value:** none (transparent) - width : float - The width of a visualization. - - **Default value:** This will be determined by the following rules: - - - * If a view's `autosize - `__ type is ``"fit"`` or - its x-channel has a `continuous scale - `__, the width will - be the value of `config.view.width - `__. - * For x-axis with a band or point scale: if `rangeStep - `__ is a numeric value or - unspecified, the width is `determined by the range step, paddings, and the - cardinality of the field mapped to x-channel - `__. Otherwise, if the - ``rangeStep`` is ``null``, the width will be the value of `config.view.width - `__. - * If no field is mapped to ``x`` channel, the ``width`` will be the value of - `config.scale.textXRangeStep - `__ for - ``text`` mark and the value of ``rangeStep`` for other marks. - - **Note:** For plots with `row and column channels - `__, this represents the - width of a single view. - - **See also:** The documentation for `width and height - `__ contains more examples. - $schema : string - URL to `JSON schema `__ for a Vega-Lite specification. - Unless you have a reason to change this, use - ``https://vega.github.io/schema/vega-lite/v3.json``. Setting the ``$schema`` - property allows automatic validation and autocomplete in editors that support JSON - schema. - """ - _schema = {'$ref': '#/definitions/TopLevelUnitSpec'} - - def __init__(self, data=Undefined, mark=Undefined, align=Undefined, autosize=Undefined, - background=Undefined, bounds=Undefined, center=Undefined, columns=Undefined, - config=Undefined, datasets=Undefined, description=Undefined, encoding=Undefined, - height=Undefined, name=Undefined, padding=Undefined, projection=Undefined, - resolve=Undefined, selection=Undefined, spacing=Undefined, title=Undefined, - transform=Undefined, usermeta=Undefined, view=Undefined, width=Undefined, **kwds): - super(TopLevelUnitSpec, self).__init__(data=data, mark=mark, align=align, autosize=autosize, - background=background, bounds=bounds, center=center, - columns=columns, config=config, datasets=datasets, - description=description, encoding=encoding, - height=height, name=name, padding=padding, - projection=projection, resolve=resolve, - selection=selection, spacing=spacing, title=title, - transform=transform, usermeta=usermeta, view=view, - width=width, **kwds) - - -class TopLevelVConcatSpec(TopLevelSpec): - """TopLevelVConcatSpec schema wrapper - - Mapping(required=[vconcat]) - - Attributes - ---------- - - vconcat : List(:class:`Spec`) - A list of views to be concatenated and put into a column. - autosize : anyOf(:class:`AutosizeType`, :class:`AutoSizeParams`) - Sets how the visualization size should be determined. If a string, should be one of - ``"pad"``, ``"fit"`` or ``"none"``. - Object values can additionally specify parameters for content sizing and automatic - resizing. - ``"fit"`` is only supported for single and layered views that don't use - ``rangeStep``. 
- - **Default value** : ``pad`` - background : string - CSS color property to use as the background of the entire view. - - **Default value:** none (transparent) - bounds : enum('full', 'flush') - The bounds calculation method to use for determining the extent of a sub-plot. One - of ``full`` (the default) or ``flush``. - - - * If set to ``full``, the entire calculated bounds (including axes, title, and - legend) will be used. - * If set to ``flush``, only the specified width and height values for the sub-view - will be used. The ``flush`` setting can be useful when attempting to place - sub-plots without axes or legends into a uniform grid structure. - - **Default value:** ``"full"`` - center : boolean - Boolean flag indicating if subviews should be centered relative to their respective - rows or columns. - - **Default value:** ``false`` - config : :class:`Config` - Vega-Lite configuration object. This property can only be defined at the top-level - of a specification. - data : anyOf(:class:`Data`, None) - An object describing the data source. Set to ``null`` to ignore the parent's data - source. If no data is set, it is derived from the parent. - datasets : :class:`Datasets` - A global data store for named datasets. This is a mapping from names to inline - datasets. - This can be an array of objects or primitive values or a string. Arrays of primitive - values are ingested as objects with a ``data`` property. - description : string - Description of this mark for commenting purpose. - name : string - Name of the visualization for later reference. - padding : :class:`Padding` - The default visualization padding, in pixels, from the edge of the visualization - canvas to the data rectangle. If a number, specifies padding for all sides. - If an object, the value should have the format ``{"left": 5, "top": 5, "right": 5, - "bottom": 5}`` to specify padding for each side of the visualization. - - **Default value** : ``5`` - resolve : :class:`Resolve` - Scale, axis, and legend resolutions for view composition specifications. - spacing : float - The spacing in pixels between sub-views of the concat operator. - - **Default value** : ``10`` - title : anyOf(string, :class:`TitleParams`) - Title for the plot. - transform : List(:class:`Transform`) - An array of data transformations such as filter and new field calculation. - usermeta : Mapping(required=[]) - Optional metadata that will be passed to Vega. - This object is completely ignored by Vega and Vega-Lite and can be used for custom - metadata. - $schema : string - URL to `JSON schema `__ for a Vega-Lite specification. - Unless you have a reason to change this, use - ``https://vega.github.io/schema/vega-lite/v3.json``. Setting the ``$schema`` - property allows automatic validation and autocomplete in editors that support JSON - schema. 
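-
-     A minimal usage sketch (illustrative only, not generated from the schema): in
-     Altair, a top-level vconcat spec is produced by ``alt.vconcat`` or the ``&``
-     operator. The DataFrame ``df`` is an assumption for illustration::
-
-         import altair as alt
-         import pandas as pd
-
-         df = pd.DataFrame({'x': [1, 2, 3], 'y': [3, 1, 2]})
-
-         top = alt.Chart(df).mark_line().encode(x='x:Q', y='y:Q')
-         bottom = alt.Chart(df).mark_bar().encode(x='x:Q', y='y:Q')
-         stacked = top & bottom  # equivalent to alt.vconcat(top, bottom)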
- """ - _schema = {'$ref': '#/definitions/TopLevelVConcatSpec'} - - def __init__(self, vconcat=Undefined, autosize=Undefined, background=Undefined, bounds=Undefined, - center=Undefined, config=Undefined, data=Undefined, datasets=Undefined, - description=Undefined, name=Undefined, padding=Undefined, resolve=Undefined, - spacing=Undefined, title=Undefined, transform=Undefined, usermeta=Undefined, **kwds): - super(TopLevelVConcatSpec, self).__init__(vconcat=vconcat, autosize=autosize, - background=background, bounds=bounds, center=center, - config=config, data=data, datasets=datasets, - description=description, name=name, padding=padding, - resolve=resolve, spacing=spacing, title=title, - transform=transform, usermeta=usermeta, **kwds) - - -class TopoDataFormat(DataFormat): - """TopoDataFormat schema wrapper - - Mapping(required=[]) - - Attributes - ---------- - - feature : string - The name of the TopoJSON object set to convert to a GeoJSON feature collection. - For example, in a map of the world, there may be an object set named - ``"countries"``. - Using the feature property, we can extract this set and generate a GeoJSON feature - object for each country. - mesh : string - The name of the TopoJSON object set to convert to mesh. - Similar to the ``feature`` option, ``mesh`` extracts a named TopoJSON object set. - Unlike the ``feature`` option, the corresponding geo data is returned as a single, - unified mesh instance, not as individual GeoJSON features. - Extracting a mesh is useful for more efficiently drawing borders or other geographic - elements that you do not need to associate with specific regions such as individual - countries, states or counties. - parse : anyOf(:class:`Parse`, None) - If set to ``null``, disable type inference based on the spec and only use type - inference based on the data. - Alternatively, a parsing directive object can be provided for explicit data types. - Each property of the object corresponds to a field name, and the value to the - desired data type (one of ``"number"``, ``"boolean"``, ``"date"``, or null (do not - parse the field)). - For example, ``"parse": {"modified_on": "date"}`` parses the ``modified_on`` field - in each input record a Date value. - - For ``"date"``, we parse data based using Javascript's `Date.parse() - `__. - For Specific date formats can be provided (e.g., ``{foo: "date:'%m%d%Y'"}`` ), using - the `d3-time-format syntax `__. - UTC date format parsing is supported similarly (e.g., ``{foo: "utc:'%m%d%Y'"}`` ). - See more about `UTC time - `__ - type : enum('topojson') - Type of input data: ``"json"``, ``"csv"``, ``"tsv"``, ``"dsv"``. - - **Default value:** The default format type is determined by the extension of the - file URL. - If no extension is detected, ``"json"`` will be used by default. 
- """ - _schema = {'$ref': '#/definitions/TopoDataFormat'} - - def __init__(self, feature=Undefined, mesh=Undefined, parse=Undefined, type=Undefined, **kwds): - super(TopoDataFormat, self).__init__(feature=feature, mesh=mesh, parse=parse, type=type, **kwds) - - -class Transform(VegaLiteSchema): - """Transform schema wrapper - - anyOf(:class:`AggregateTransform`, :class:`BinTransform`, :class:`CalculateTransform`, - :class:`FilterTransform`, :class:`FlattenTransform`, :class:`FoldTransform`, - :class:`ImputeTransform`, :class:`JoinAggregateTransform`, :class:`LookupTransform`, - :class:`TimeUnitTransform`, :class:`SampleTransform`, :class:`StackTransform`, - :class:`WindowTransform`) - """ - _schema = {'$ref': '#/definitions/Transform'} - - def __init__(self, *args, **kwds): - super(Transform, self).__init__(*args, **kwds) - - -class AggregateTransform(Transform): - """AggregateTransform schema wrapper - - Mapping(required=[aggregate]) - - Attributes - ---------- - - aggregate : List(:class:`AggregatedFieldDef`) - Array of objects that define fields to aggregate. - groupby : List(:class:`FieldName`) - The data fields to group by. If not specified, a single group containing all data - objects will be used. - """ - _schema = {'$ref': '#/definitions/AggregateTransform'} - - def __init__(self, aggregate=Undefined, groupby=Undefined, **kwds): - super(AggregateTransform, self).__init__(aggregate=aggregate, groupby=groupby, **kwds) - - -class BinTransform(Transform): - """BinTransform schema wrapper - - Mapping(required=[bin, field, as]) - - Attributes - ---------- - - bin : anyOf(enum(True), :class:`BinParams`) - An object indicating bin properties, or simply ``true`` for using default bin - parameters. - field : :class:`FieldName` - The data field to bin. - as : anyOf(:class:`FieldName`, List(:class:`FieldName`)) - The output fields at which to write the start and end bin values. - """ - _schema = {'$ref': '#/definitions/BinTransform'} - - def __init__(self, bin=Undefined, field=Undefined, **kwds): - super(BinTransform, self).__init__(bin=bin, field=field, **kwds) - - -class CalculateTransform(Transform): - """CalculateTransform schema wrapper - - Mapping(required=[calculate, as]) - - Attributes - ---------- - - calculate : string - A `expression `__ - string. Use the variable ``datum`` to refer to the current data object. - as : :class:`FieldName` - The field for storing the computed formula value. - """ - _schema = {'$ref': '#/definitions/CalculateTransform'} - - def __init__(self, calculate=Undefined, **kwds): - super(CalculateTransform, self).__init__(calculate=calculate, **kwds) - - -class FilterTransform(Transform): - """FilterTransform schema wrapper - - Mapping(required=[filter]) - - Attributes - ---------- - - filter : :class:`LogicalOperandPredicate` - The ``filter`` property must be one of the predicate definitions: - - 1) an `expression `__ - string, - where ``datum`` can be used to refer to the current data object - - 2) one of the field predicates: `equal - `__, - `lt `__, - `lte `__, - `gt `__, - `gte `__, - `range `__, - `oneOf `__, - or `valid `__, - - 3) a `selection predicate - `__ - - 4) a logical operand that combines (1), (2), or (3). 
- """ - _schema = {'$ref': '#/definitions/FilterTransform'} - - def __init__(self, filter=Undefined, **kwds): - super(FilterTransform, self).__init__(filter=filter, **kwds) - - -class FlattenTransform(Transform): - """FlattenTransform schema wrapper - - Mapping(required=[flatten]) - - Attributes - ---------- - - flatten : List(:class:`FieldName`) - An array of one or more data fields containing arrays to flatten. - If multiple fields are specified, their array values should have a parallel - structure, ideally with the same length. - If the lengths of parallel arrays do not match, - the longest array will be used with ``null`` values added for missing entries. - as : List(:class:`FieldName`) - The output field names for extracted array values. - - **Default value:** The field name of the corresponding array field - """ - _schema = {'$ref': '#/definitions/FlattenTransform'} - - def __init__(self, flatten=Undefined, **kwds): - super(FlattenTransform, self).__init__(flatten=flatten, **kwds) - - -class FoldTransform(Transform): - """FoldTransform schema wrapper - - Mapping(required=[fold]) - - Attributes - ---------- - - fold : List(:class:`FieldName`) - An array of data fields indicating the properties to fold. - as : List([:class:`FieldName`, :class:`FieldName`]) - The output field names for the key and value properties produced by the fold - transform. - **Default value:** ``["key", "value"]`` - """ - _schema = {'$ref': '#/definitions/FoldTransform'} - - def __init__(self, fold=Undefined, **kwds): - super(FoldTransform, self).__init__(fold=fold, **kwds) - - -class ImputeTransform(Transform): - """ImputeTransform schema wrapper - - Mapping(required=[impute, key]) - - Attributes - ---------- - - impute : :class:`FieldName` - The data field for which the missing values should be imputed. - key : :class:`FieldName` - A key field that uniquely identifies data objects within a group. - Missing key values (those occurring in the data but not in the current group) will - be imputed. - frame : List(anyOf(None, float)) - A frame specification as a two-element array used to control the window over which - the specified method is applied. The array entries should either be a number - indicating the offset from the current data object, or null to indicate unbounded - rows preceding or following the current data object. For example, the value ``[-5, - 5]`` indicates that the window should include five objects preceding and five - objects following the current object. - - **Default value:** : ``[null, null]`` indicating that the window includes all - objects. - groupby : List(:class:`FieldName`) - An optional array of fields by which to group the values. - Imputation will then be performed on a per-group basis. - keyvals : anyOf(List(Any), :class:`ImputeSequence`) - Defines the key values that should be considered for imputation. - An array of key values or an object defining a `number sequence - `__. - - If provided, this will be used in addition to the key values observed within the - input data. If not provided, the values will be derived from all unique values of - the ``key`` field. For ``impute`` in ``encoding``, the key field is the x-field if - the y-field is imputed, or vice versa. - - If there is no impute grouping, this property *must* be specified. - method : :class:`ImputeMethod` - The imputation method to use for the field value of imputed data objects. - One of ``value``, ``mean``, ``median``, ``max`` or ``min``. 
- - **Default value:** ``"value"`` - value : Any - The field value to use when the imputation ``method`` is ``"value"``. - """ - _schema = {'$ref': '#/definitions/ImputeTransform'} - - def __init__(self, impute=Undefined, key=Undefined, frame=Undefined, groupby=Undefined, - keyvals=Undefined, method=Undefined, value=Undefined, **kwds): - super(ImputeTransform, self).__init__(impute=impute, key=key, frame=frame, groupby=groupby, - keyvals=keyvals, method=method, value=value, **kwds) - - -class JoinAggregateTransform(Transform): - """JoinAggregateTransform schema wrapper - - Mapping(required=[joinaggregate]) - - Attributes - ---------- - - joinaggregate : List(:class:`JoinAggregateFieldDef`) - The definition of the fields in the join aggregate, and what calculations to use. - groupby : List(:class:`FieldName`) - The data fields for partitioning the data objects into separate groups. If - unspecified, all data points will be in a single group. - """ - _schema = {'$ref': '#/definitions/JoinAggregateTransform'} - - def __init__(self, joinaggregate=Undefined, groupby=Undefined, **kwds): - super(JoinAggregateTransform, self).__init__(joinaggregate=joinaggregate, groupby=groupby, - **kwds) - - -class LookupTransform(Transform): - """LookupTransform schema wrapper - - Mapping(required=[lookup, from]) - - Attributes - ---------- - - lookup : :class:`FieldName` - Key in primary data source. - default : string - The default value to use if lookup fails. - - **Default value:** ``null`` - as : anyOf(:class:`FieldName`, List(:class:`FieldName`)) - The field or fields for storing the computed formula value. - If ``from.fields`` is specified, the transform will use the same names for ``as``. - If ``from.fields`` is not specified, ``as`` has to be a string and we put the whole - object into the data under the specified name. - from : :class:`LookupData` - Secondary data reference. - """ - _schema = {'$ref': '#/definitions/LookupTransform'} - - def __init__(self, lookup=Undefined, default=Undefined, **kwds): - super(LookupTransform, self).__init__(lookup=lookup, default=default, **kwds) - - -class SampleTransform(Transform): - """SampleTransform schema wrapper - - Mapping(required=[sample]) - - Attributes - ---------- - - sample : float - The maximum number of data objects to include in the sample. - - **Default value:** ``1000`` - """ - _schema = {'$ref': '#/definitions/SampleTransform'} - - def __init__(self, sample=Undefined, **kwds): - super(SampleTransform, self).__init__(sample=sample, **kwds) - - -class StackTransform(Transform): - """StackTransform schema wrapper - - Mapping(required=[stack, groupby, as]) - - Attributes - ---------- - - groupby : List(:class:`FieldName`) - The data fields to group by. - stack : :class:`FieldName` - The field which is stacked. - offset : enum('zero', 'center', 'normalize') - Mode for stacking marks. - **Default value:** ``"zero"`` - sort : List(:class:`SortField`) - Field that determines the order of leaves in the stacked charts. - as : anyOf(:class:`FieldName`, List(:class:`FieldName`)) - Output field names. This can be either a string or an array of strings with - two elements denoting the name for the fields for stack start and stack end - respectively. - If a single string(eg."val") is provided, the end field will be "val_end". 
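-
-    Example (an illustrative sketch; the field names are hypothetical)::
-
-        # Stack "value" within each "month" group. Because ``as`` is a reserved
-        # word in Python, it has to be passed through **kwds.
-        tx = StackTransform(stack='value', groupby=['month'], offset='zero',
-                            **{'as': 'val'})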
- """ - _schema = {'$ref': '#/definitions/StackTransform'} - - def __init__(self, groupby=Undefined, stack=Undefined, offset=Undefined, sort=Undefined, **kwds): - super(StackTransform, self).__init__(groupby=groupby, stack=stack, offset=offset, sort=sort, - **kwds) - - -class TimeUnitTransform(Transform): - """TimeUnitTransform schema wrapper - - Mapping(required=[timeUnit, field, as]) - - Attributes - ---------- - - field : :class:`FieldName` - The data field to apply time unit. - timeUnit : :class:`TimeUnit` - The timeUnit. - as : :class:`FieldName` - The output field to write the timeUnit value. - """ - _schema = {'$ref': '#/definitions/TimeUnitTransform'} - - def __init__(self, field=Undefined, timeUnit=Undefined, **kwds): - super(TimeUnitTransform, self).__init__(field=field, timeUnit=timeUnit, **kwds) - - -class TypeForShape(VegaLiteSchema): - """TypeForShape schema wrapper - - enum('nominal', 'ordinal', 'geojson') - """ - _schema = {'$ref': '#/definitions/TypeForShape'} - - def __init__(self, *args): - super(TypeForShape, self).__init__(*args) - - -class TypedFieldDef(VegaLiteSchema): - """TypedFieldDef schema wrapper - - Mapping(required=[type]) - Definition object for a data field, its type and transformation of an encoding channel. - - Attributes - ---------- - - type : :class:`StandardType` - The encoded field's type of measurement ( ``"quantitative"``, ``"temporal"``, - ``"ordinal"``, or ``"nominal"`` ). - It can also be a ``"geojson"`` type for encoding `'geoshape' - `__. - - **Note:** - - - * Data values for a temporal field can be either a date-time string (e.g., - ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a - timestamp number (e.g., ``1552199579097`` ). - * Data ``type`` describes the semantics of the data rather than the primitive data - types ( ``number``, ``string``, etc.). The same primitive data type can have - different types of measurement. For example, numeric data can represent - quantitative, ordinal, or nominal data. - * When using with `bin `__, the - ``type`` property can be either ``"quantitative"`` (for using a linear bin scale) - or `"ordinal" (for using an ordinal bin scale) - `__. - * When using with `timeUnit - `__, the ``type`` property - can be either ``"temporal"`` (for using a temporal scale) or `"ordinal" (for using - an ordinal scale) `__. - * When using with `aggregate - `__, the ``type`` property - refers to the post-aggregation data type. For example, we can calculate count - ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct", - "field": "cat", "type": "quantitative"}``. The ``"type"`` of the aggregate output - is ``"quantitative"``. - * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have - ``type`` as they have exactly the same type as their primary channels (e.g., - ``x``, ``y`` ). - - **See also:** `type `__ - documentation. - aggregate : :class:`Aggregate` - Aggregation function for the field - (e.g., ``mean``, ``sum``, ``median``, ``min``, ``max``, ``count`` ). - - **Default value:** ``undefined`` (None) - - **See also:** `aggregate `__ - documentation. - bin : anyOf(boolean, :class:`BinParams`, enum('binned'), None) - A flag for binning a ``quantitative`` field, `an object defining binning parameters - `__, or indicating that the - data for ``x`` or ``y`` channel are binned before they are imported into Vega-Lite ( - ``"binned"`` ). - - - If ``true``, default `binning parameters - `__ will be applied. 
- - If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are - already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end - field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to - binning in Vega-lite. To adjust the axis ticks based on the bin step, you can also - set the axis's `tickMinStep - `__ property. - - **Default value:** ``false`` - - **See also:** `bin `__ - documentation. - field : :class:`Field` - **Required.** A string defining the name of the field from which to pull a data - value - or an object defining iterated values from the `repeat - `__ operator. - - **See also:** `field `__ - documentation. - - **Notes:** - 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access nested - objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). - If field names contain dots or brackets but are not nested, you can use ``\\`` to - escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). - See more details about escaping in the `field documentation - `__. - 2) ``field`` is not required if ``aggregate`` is ``count``. - timeUnit : :class:`TimeUnit` - Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal - field. - or `a temporal field that gets casted as ordinal - `__. - - **Default value:** ``undefined`` (None) - - **See also:** `timeUnit `__ - documentation. - title : anyOf(string, None) - A title for the field. If ``null``, the title will be removed. - - **Default value:** derived from the field's name and transformation function ( - ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function, - the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the - field is binned or has a time unit applied, the applied function is shown in - parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ). - Otherwise, the title is simply the field name. - - **Notes** : - - 1) You can customize the default field title format by providing the `fieldTitle - `__ property in - the `config `__ or `fieldTitle - function via the compile function's options - `__. - - 2) If both field definition's ``title`` and axis, header, or legend ``title`` are - defined, axis/header/legend title will be used. - """ - _schema = {'$ref': '#/definitions/TypedFieldDef'} - - def __init__(self, type=Undefined, aggregate=Undefined, bin=Undefined, field=Undefined, - timeUnit=Undefined, title=Undefined, **kwds): - super(TypedFieldDef, self).__init__(type=type, aggregate=aggregate, bin=bin, field=field, - timeUnit=timeUnit, title=title, **kwds) - - -class UnitSpec(VegaLiteSchema): - """UnitSpec schema wrapper - - Mapping(required=[mark]) - Base interface for a unit (single-view) specification. - - Attributes - ---------- - - mark : :class:`AnyMark` - A string describing the mark type (one of ``"bar"``, ``"circle"``, ``"square"``, - ``"tick"``, ``"line"``, - ``"area"``, ``"point"``, ``"rule"``, ``"geoshape"``, and ``"text"`` ) or a `mark - definition object `__. - data : anyOf(:class:`Data`, None) - An object describing the data source. Set to ``null`` to ignore the parent's data - source. If no data is set, it is derived from the parent. - description : string - Description of this mark for commenting purpose. - encoding : :class:`Encoding` - A key-value mapping between encoding channels and definition of fields. - height : float - The height of a visualization. 
- - **Default value:** - - - * If a view's `autosize - `__ type is ``"fit"`` or - its y-channel has a `continuous scale - `__, the height will - be the value of `config.view.height - `__. - * For y-axis with a band or point scale: if `rangeStep - `__ is a numeric value or - unspecified, the height is `determined by the range step, paddings, and the - cardinality of the field mapped to y-channel - `__. Otherwise, if the - ``rangeStep`` is ``null``, the height will be the value of `config.view.height - `__. - * If no field is mapped to ``y`` channel, the ``height`` will be the value of - ``rangeStep``. - - **Note** : For plots with `row and column channels - `__, this represents the - height of a single view. - - **See also:** The documentation for `width and height - `__ contains more examples. - name : string - Name of the visualization for later reference. - projection : :class:`Projection` - An object defining properties of geographic projection, which will be applied to - ``shape`` path for ``"geoshape"`` marks - and to ``latitude`` and ``"longitude"`` channels for other marks. - selection : Mapping(required=[]) - A key-value mapping between selection names and definitions. - title : anyOf(string, :class:`TitleParams`) - Title for the plot. - transform : List(:class:`Transform`) - An array of data transformations such as filter and new field calculation. - view : :class:`ViewBackground` - An object defining the view background's fill and stroke. - - **Default value:** none (transparent) - width : float - The width of a visualization. - - **Default value:** This will be determined by the following rules: - - - * If a view's `autosize - `__ type is ``"fit"`` or - its x-channel has a `continuous scale - `__, the width will - be the value of `config.view.width - `__. - * For x-axis with a band or point scale: if `rangeStep - `__ is a numeric value or - unspecified, the width is `determined by the range step, paddings, and the - cardinality of the field mapped to x-channel - `__. Otherwise, if the - ``rangeStep`` is ``null``, the width will be the value of `config.view.width - `__. - * If no field is mapped to ``x`` channel, the ``width`` will be the value of - `config.scale.textXRangeStep - `__ for - ``text`` mark and the value of ``rangeStep`` for other marks. - - **Note:** For plots with `row and column channels - `__, this represents the - width of a single view. - - **See also:** The documentation for `width and height - `__ contains more examples. - """ - _schema = {'$ref': '#/definitions/UnitSpec'} - - def __init__(self, mark=Undefined, data=Undefined, description=Undefined, encoding=Undefined, - height=Undefined, name=Undefined, projection=Undefined, selection=Undefined, - title=Undefined, transform=Undefined, view=Undefined, width=Undefined, **kwds): - super(UnitSpec, self).__init__(mark=mark, data=data, description=description, encoding=encoding, - height=height, name=name, projection=projection, - selection=selection, title=title, transform=transform, view=view, - width=width, **kwds) - - -class UrlData(DataSource): - """UrlData schema wrapper - - Mapping(required=[url]) - - Attributes - ---------- - - url : string - An URL from which to load the data set. Use the ``format.type`` property - to ensure the loaded data is correctly parsed. - format : :class:`DataFormat` - An object that specifies the format for parsing the data. - name : string - Provide a placeholder name and bind data at runtime. 
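-
-    Example (an illustrative sketch; the URL is hypothetical)::
-
-        # Load a remote TopoJSON file and convert its "countries" object set.
-        data = UrlData(url='https://example.com/world.topo.json',
-                       format=TopoDataFormat(feature='countries', type='topojson'))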
- """ - _schema = {'$ref': '#/definitions/UrlData'} - - def __init__(self, url=Undefined, format=Undefined, name=Undefined, **kwds): - super(UrlData, self).__init__(url=url, format=format, name=name, **kwds) - - -class UtcMultiTimeUnit(MultiTimeUnit): - """UtcMultiTimeUnit schema wrapper - - enum('utcyearquarter', 'utcyearquartermonth', 'utcyearmonth', 'utcyearmonthdate', - 'utcyearmonthdatehours', 'utcyearmonthdatehoursminutes', - 'utcyearmonthdatehoursminutesseconds', 'utcquartermonth', 'utcmonthdate', - 'utcmonthdatehours', 'utchoursminutes', 'utchoursminutesseconds', 'utcminutesseconds', - 'utcsecondsmilliseconds') - """ - _schema = {'$ref': '#/definitions/UtcMultiTimeUnit'} - - def __init__(self, *args): - super(UtcMultiTimeUnit, self).__init__(*args) - - -class UtcSingleTimeUnit(SingleTimeUnit): - """UtcSingleTimeUnit schema wrapper - - enum('utcyear', 'utcquarter', 'utcmonth', 'utcday', 'utcdate', 'utchours', 'utcminutes', - 'utcseconds', 'utcmilliseconds') - """ - _schema = {'$ref': '#/definitions/UtcSingleTimeUnit'} - - def __init__(self, *args): - super(UtcSingleTimeUnit, self).__init__(*args) - - -class VConcatSpec(Spec): - """VConcatSpec schema wrapper - - Mapping(required=[vconcat]) - Base interface for a vertical concatenation specification. - - Attributes - ---------- - - vconcat : List(:class:`Spec`) - A list of views to be concatenated and put into a column. - bounds : enum('full', 'flush') - The bounds calculation method to use for determining the extent of a sub-plot. One - of ``full`` (the default) or ``flush``. - - - * If set to ``full``, the entire calculated bounds (including axes, title, and - legend) will be used. - * If set to ``flush``, only the specified width and height values for the sub-view - will be used. The ``flush`` setting can be useful when attempting to place - sub-plots without axes or legends into a uniform grid structure. - - **Default value:** ``"full"`` - center : boolean - Boolean flag indicating if subviews should be centered relative to their respective - rows or columns. - - **Default value:** ``false`` - data : anyOf(:class:`Data`, None) - An object describing the data source. Set to ``null`` to ignore the parent's data - source. If no data is set, it is derived from the parent. - description : string - Description of this mark for commenting purpose. - name : string - Name of the visualization for later reference. - resolve : :class:`Resolve` - Scale, axis, and legend resolutions for view composition specifications. - spacing : float - The spacing in pixels between sub-views of the concat operator. - - **Default value** : ``10`` - title : anyOf(string, :class:`TitleParams`) - Title for the plot. - transform : List(:class:`Transform`) - An array of data transformations such as filter and new field calculation. 
- """ - _schema = {'$ref': '#/definitions/VConcatSpec'} - - def __init__(self, vconcat=Undefined, bounds=Undefined, center=Undefined, data=Undefined, - description=Undefined, name=Undefined, resolve=Undefined, spacing=Undefined, - title=Undefined, transform=Undefined, **kwds): - super(VConcatSpec, self).__init__(vconcat=vconcat, bounds=bounds, center=center, data=data, - description=description, name=name, resolve=resolve, - spacing=spacing, title=title, transform=transform, **kwds) - - -class Value(VegaLiteSchema): - """Value schema wrapper - - anyOf(float, string, boolean, None) - """ - _schema = {'$ref': '#/definitions/Value'} - - def __init__(self, *args): - super(Value, self).__init__(*args) - - -class ValueDefWithConditionMarkPropFieldDefTypeForShapestringnull(VegaLiteSchema): - """ValueDefWithConditionMarkPropFieldDefTypeForShapestringnull schema wrapper - - Mapping(required=[]) - A ValueDef with Condition where either the condition or the value are - optional. - - Attributes - ---------- - - condition : anyOf(:class:`ConditionalMarkPropFieldDefTypeForShape`, - :class:`ConditionalStringValueDef`, List(:class:`ConditionalStringValueDef`)) - A field definition or one or more value definition(s) with a selection predicate. - value : anyOf(string, None) - A constant value in visual domain (e.g., ``"red"`` / "#0099ff" for color, values - between ``0`` to ``1`` for opacity). - """ - _schema = {'$ref': '#/definitions/ValueDefWithCondition,(string|null)>'} - - def __init__(self, condition=Undefined, value=Undefined, **kwds): - super(ValueDefWithConditionMarkPropFieldDefTypeForShapestringnull, self).__init__(condition=condition, - value=value, - **kwds) - - -class ValueDefWithConditionMarkPropFieldDefnumber(VegaLiteSchema): - """ValueDefWithConditionMarkPropFieldDefnumber schema wrapper - - Mapping(required=[]) - A ValueDef with Condition where either the condition or the value are - optional. - - Attributes - ---------- - - condition : anyOf(:class:`ConditionalMarkPropFieldDef`, :class:`ConditionalNumberValueDef`, - List(:class:`ConditionalNumberValueDef`)) - A field definition or one or more value definition(s) with a selection predicate. - value : float - A constant value in visual domain (e.g., ``"red"`` / "#0099ff" for color, values - between ``0`` to ``1`` for opacity). - """ - _schema = {'$ref': '#/definitions/ValueDefWithCondition'} - - def __init__(self, condition=Undefined, value=Undefined, **kwds): - super(ValueDefWithConditionMarkPropFieldDefnumber, self).__init__(condition=condition, - value=value, **kwds) - - -class ValueDefWithConditionMarkPropFieldDefstringnull(VegaLiteSchema): - """ValueDefWithConditionMarkPropFieldDefstringnull schema wrapper - - Mapping(required=[]) - A ValueDef with Condition where either the condition or the value are - optional. - - Attributes - ---------- - - condition : anyOf(:class:`ConditionalMarkPropFieldDef`, :class:`ConditionalStringValueDef`, - List(:class:`ConditionalStringValueDef`)) - A field definition or one or more value definition(s) with a selection predicate. - value : anyOf(string, None) - A constant value in visual domain (e.g., ``"red"`` / "#0099ff" for color, values - between ``0`` to ``1`` for opacity). 
- """ - _schema = {'$ref': '#/definitions/ValueDefWithCondition'} - - def __init__(self, condition=Undefined, value=Undefined, **kwds): - super(ValueDefWithConditionMarkPropFieldDefstringnull, self).__init__(condition=condition, - value=value, **kwds) - - -class ValueDefWithConditionTextFieldDefValue(VegaLiteSchema): - """ValueDefWithConditionTextFieldDefValue schema wrapper - - Mapping(required=[]) - A ValueDef with Condition where either the condition or the value are - optional. - - Attributes - ---------- - - condition : anyOf(:class:`ConditionalTextFieldDef`, :class:`ConditionalValueDef`, - List(:class:`ConditionalValueDef`)) - A field definition or one or more value definition(s) with a selection predicate. - value : :class:`Value` - A constant value in visual domain (e.g., ``"red"`` / "#0099ff" for color, values - between ``0`` to ``1`` for opacity). - """ - _schema = {'$ref': '#/definitions/ValueDefWithCondition'} - - def __init__(self, condition=Undefined, value=Undefined, **kwds): - super(ValueDefWithConditionTextFieldDefValue, self).__init__(condition=condition, value=value, - **kwds) - - -class ViewBackground(VegaLiteSchema): - """ViewBackground schema wrapper - - Mapping(required=[]) - - Attributes - ---------- - - cornerRadius : float - The radius in pixels of rounded rectangle corners. - - **Default value:** ``0`` - fill : anyOf(:class:`Color`, None) - The fill color. - - **Default value:** ``undefined`` - fillOpacity : float - The fill opacity (value between [0,1]). - - **Default value:** ``1`` - opacity : float - The overall opacity (value between [0,1]). - - **Default value:** ``0.7`` for non-aggregate plots with ``point``, ``tick``, - ``circle``, or ``square`` marks or layered ``bar`` charts and ``1`` otherwise. - stroke : anyOf(:class:`Color`, None) - The stroke color. - - **Default value:** ``"#ddd"`` - strokeCap : :class:`StrokeCap` - The stroke cap for line ending style. One of ``"butt"``, ``"round"``, or - ``"square"``. - - **Default value:** ``"square"`` - strokeDash : List(float) - An array of alternating stroke, space lengths for creating dashed or dotted lines. - strokeDashOffset : float - The offset (in pixels) into which to begin drawing with the stroke dash array. - strokeJoin : :class:`StrokeJoin` - The stroke line join method. One of ``"miter"``, ``"round"`` or ``"bevel"``. - - **Default value:** ``"miter"`` - strokeMiterLimit : float - The miter limit at which to bevel a line join. - strokeOpacity : float - The stroke opacity (value between [0,1]). - - **Default value:** ``1`` - strokeWidth : float - The stroke width, in pixels. - style : anyOf(string, List(string)) - A string or array of strings indicating the name of custom styles to apply to the - view background. A style is a named collection of mark property defaults defined - within the `style configuration - `__. If style is an - array, later styles will override earlier styles. - - **Default value:** ``"cell"`` - **Note:** Any specified view background properties will augment the default style. 
- """ - _schema = {'$ref': '#/definitions/ViewBackground'} - - def __init__(self, cornerRadius=Undefined, fill=Undefined, fillOpacity=Undefined, opacity=Undefined, - stroke=Undefined, strokeCap=Undefined, strokeDash=Undefined, - strokeDashOffset=Undefined, strokeJoin=Undefined, strokeMiterLimit=Undefined, - strokeOpacity=Undefined, strokeWidth=Undefined, style=Undefined, **kwds): - super(ViewBackground, self).__init__(cornerRadius=cornerRadius, fill=fill, - fillOpacity=fillOpacity, opacity=opacity, stroke=stroke, - strokeCap=strokeCap, strokeDash=strokeDash, - strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, - style=style, **kwds) - - -class ViewConfig(VegaLiteSchema): - """ViewConfig schema wrapper - - Mapping(required=[]) - - Attributes - ---------- - - clip : boolean - Whether the view should be clipped. - cornerRadius : float - The radius in pixels of rounded rectangle corners. - - **Default value:** ``0`` - fill : anyOf(:class:`Color`, None) - The fill color. - - **Default value:** ``undefined`` - fillOpacity : float - The fill opacity (value between [0,1]). - - **Default value:** ``1`` - height : float - The default height of the single plot or each plot in a trellis plot when the - visualization has a continuous (non-ordinal) y-scale with ``rangeStep`` = ``null``. - - **Default value:** ``200`` - opacity : float - The overall opacity (value between [0,1]). - - **Default value:** ``0.7`` for non-aggregate plots with ``point``, ``tick``, - ``circle``, or ``square`` marks or layered ``bar`` charts and ``1`` otherwise. - stroke : anyOf(:class:`Color`, None) - The stroke color. - - **Default value:** ``"#ddd"`` - strokeCap : :class:`StrokeCap` - The stroke cap for line ending style. One of ``"butt"``, ``"round"``, or - ``"square"``. - - **Default value:** ``"square"`` - strokeDash : List(float) - An array of alternating stroke, space lengths for creating dashed or dotted lines. - strokeDashOffset : float - The offset (in pixels) into which to begin drawing with the stroke dash array. - strokeJoin : :class:`StrokeJoin` - The stroke line join method. One of ``"miter"``, ``"round"`` or ``"bevel"``. - - **Default value:** ``"miter"`` - strokeMiterLimit : float - The miter limit at which to bevel a line join. - strokeOpacity : float - The stroke opacity (value between [0,1]). - - **Default value:** ``1`` - strokeWidth : float - The stroke width, in pixels. - width : float - The default width of the single plot or each plot in a trellis plot when the - visualization has a continuous (non-ordinal) x-scale or ordinal x-scale with - ``rangeStep`` = ``null``. 
-
-        **Default value:** ``200``
-    """
-    _schema = {'$ref': '#/definitions/ViewConfig'}
-
-    def __init__(self, clip=Undefined, cornerRadius=Undefined, fill=Undefined, fillOpacity=Undefined,
-                 height=Undefined, opacity=Undefined, stroke=Undefined, strokeCap=Undefined,
-                 strokeDash=Undefined, strokeDashOffset=Undefined, strokeJoin=Undefined,
-                 strokeMiterLimit=Undefined, strokeOpacity=Undefined, strokeWidth=Undefined,
-                 width=Undefined, **kwds):
-        super(ViewConfig, self).__init__(clip=clip, cornerRadius=cornerRadius, fill=fill,
-                                         fillOpacity=fillOpacity, height=height, opacity=opacity,
-                                         stroke=stroke, strokeCap=strokeCap, strokeDash=strokeDash,
-                                         strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin,
-                                         strokeMiterLimit=strokeMiterLimit, strokeOpacity=strokeOpacity,
-                                         strokeWidth=strokeWidth, width=width, **kwds)
-
-
-class WindowFieldDef(VegaLiteSchema):
-    """WindowFieldDef schema wrapper
-
-    Mapping(required=[op, as])
-
-    Attributes
-    ----------
-
-    op : anyOf(:class:`AggregateOp`, :class:`WindowOnlyOp`)
-        The window or aggregation operation to apply within a window (e.g., ``rank``,
-        ``lead``, ``sum``, ``average`` or ``count`` ). See the list of all supported
-        operations `here `__.
-    field : :class:`FieldName`
-        The data field for which to compute the aggregate or window function. This can be
-        omitted for window functions that do not operate over a field such as ``count``,
-        ``rank``, ``dense_rank``.
-    param : float
-        Parameter values for the window functions. Parameter values can be omitted for
-        operations that do not accept a parameter.
-
-        See the list of all supported operations and their parameters `here
-        `__.
-    as : :class:`FieldName`
-        The output name for the window operation.
-    """
-    _schema = {'$ref': '#/definitions/WindowFieldDef'}
-
-    def __init__(self, op=Undefined, field=Undefined, param=Undefined, **kwds):
-        super(WindowFieldDef, self).__init__(op=op, field=field, param=param, **kwds)
-
-
-class WindowOnlyOp(VegaLiteSchema):
-    """WindowOnlyOp schema wrapper
-
-    enum('row_number', 'rank', 'dense_rank', 'percent_rank', 'cume_dist', 'ntile', 'lag',
-    'lead', 'first_value', 'last_value', 'nth_value')
-    """
-    _schema = {'$ref': '#/definitions/WindowOnlyOp'}
-
-    def __init__(self, *args):
-        super(WindowOnlyOp, self).__init__(*args)
-
-
-class WindowTransform(Transform):
-    """WindowTransform schema wrapper
-
-    Mapping(required=[window])
-
-    Attributes
-    ----------
-
-    window : List(:class:`WindowFieldDef`)
-        The definition of the fields in the window, and what calculations to use.
-    frame : List(anyOf(None, float))
-        A frame specification as a two-element array indicating how the sliding window
-        should proceed. The array entries should either be a number indicating the offset
-        from the current data object, or null to indicate unbounded rows preceding or
-        following the current data object. The default value is ``[null, 0]``, indicating
-        that the sliding window includes the current object and all preceding objects. The
-        value ``[-5, 5]`` indicates that the window should include five objects preceding
-        and five objects following the current object. Finally, ``[null, null]`` indicates
-        that the window frame should always include all data objects. If you use this frame
-        and want to assign the same value to all objects, you can use the simpler `join
-        aggregate transform `__.
-        The only operators affected are the aggregation operations and the ``first_value``,
-        ``last_value``, and ``nth_value`` window operations. The other window operations are
-        not affected by this.
- - **Default value:** : ``[null, 0]`` (includes the current object and all preceding - objects) - groupby : List(:class:`FieldName`) - The data fields for partitioning the data objects into separate windows. If - unspecified, all data points will be in a single window. - ignorePeers : boolean - Indicates if the sliding window frame should ignore peer values (data that are - considered identical by the sort criteria). The default is false, causing the window - frame to expand to include all peer values. If set to true, the window frame will be - defined by offset values only. This setting only affects those operations that - depend on the window frame, namely aggregation operations and the first_value, - last_value, and nth_value window operations. - - **Default value:** ``false`` - sort : List(:class:`SortField`) - A sort field definition for sorting data objects within a window. If two data - objects are considered equal by the comparator, they are considered “peer” values of - equal rank. If sort is not specified, the order is undefined: data objects are - processed in the order they are observed and none are considered peers (the - ignorePeers parameter is ignored and treated as if set to ``true`` ). - """ - _schema = {'$ref': '#/definitions/WindowTransform'} - - def __init__(self, window=Undefined, frame=Undefined, groupby=Undefined, ignorePeers=Undefined, - sort=Undefined, **kwds): - super(WindowTransform, self).__init__(window=window, frame=frame, groupby=groupby, - ignorePeers=ignorePeers, sort=sort, **kwds) - - -class XValueDef(VegaLiteSchema): - """XValueDef schema wrapper - - Mapping(required=[value]) - Definition object for a constant value of an encoding channel. - - Attributes - ---------- - - value : anyOf(float, enum('width')) - A constant value in visual domain (e.g., ``"red"`` / "#0099ff" for color, values - between ``0`` to ``1`` for opacity). - """ - _schema = {'$ref': '#/definitions/XValueDef'} - - def __init__(self, value=Undefined, **kwds): - super(XValueDef, self).__init__(value=value, **kwds) - - -class YValueDef(VegaLiteSchema): - """YValueDef schema wrapper - - Mapping(required=[value]) - Definition object for a constant value of an encoding channel. - - Attributes - ---------- - - value : anyOf(float, enum('height')) - A constant value in visual domain (e.g., ``"red"`` / "#0099ff" for color, values - between ``0`` to ``1`` for opacity). - """ - _schema = {'$ref': '#/definitions/YValueDef'} - - def __init__(self, value=Undefined, **kwds): - super(YValueDef, self).__init__(value=value, **kwds) - diff --git a/spaces/asafAdge/Detic/tools/download_cc.py b/spaces/asafAdge/Detic/tools/download_cc.py deleted file mode 100644 index 3c43690a3ca407c3553686d9eb51db9c1834f156..0000000000000000000000000000000000000000 --- a/spaces/asafAdge/Detic/tools/download_cc.py +++ /dev/null @@ -1,47 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
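-# Note (added commentary, not part of the original script): this tool reads a
-# Conceptual Captions TSV of (caption, image-URL) pairs, downloads each image
-# with wget, recovers its height/width with PIL, and writes a COCO-style
-# image-info JSON that reuses the category list from an LVIS annotation file.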
-import os
-import json
-import argparse
-from PIL import Image
-import numpy as np
-
-if __name__ == '__main__':
-    parser = argparse.ArgumentParser()
-    parser.add_argument('--ann', default='datasets/cc3m/Train_GCC-training.tsv')
-    parser.add_argument('--save_image_path', default='datasets/cc3m/training/')
-    parser.add_argument('--cat_info', default='datasets/lvis/lvis_v1_val.json')
-    parser.add_argument('--out_path', default='datasets/cc3m/train_image_info.json')
-    parser.add_argument('--not_download_image', action='store_true')
-    args = parser.parse_args()
-    categories = json.load(open(args.cat_info, 'r'))['categories']
-    images = []
-    if not os.path.exists(args.save_image_path):
-        os.makedirs(args.save_image_path)
-    f = open(args.ann)
-    for i, line in enumerate(f):
-        cap, path = line[:-1].split('\t')
-        print(i, cap, path)
-        if not args.not_download_image:
-            os.system(
-                'wget {} -O {}/{}.jpg'.format(
-                    path, args.save_image_path, i + 1))
-        try:
-            img = Image.open(
-                open('{}/{}.jpg'.format(args.save_image_path, i + 1), "rb"))
-            img = np.asarray(img.convert("RGB"))
-            h, w = img.shape[:2]
-        except Exception:  # skip images that failed to download or decode
-            continue
-        image_info = {
-            'id': i + 1,
-            'file_name': '{}.jpg'.format(i + 1),
-            'height': h,
-            'width': w,
-            'captions': [cap],
-        }
-        images.append(image_info)
-    data = {'categories': categories, 'images': images, 'annotations': []}
-    for k, v in data.items():
-        print(k, len(v))
-    print('Saving to', args.out_path)
-    json.dump(data, open(args.out_path, 'w'))
diff --git a/spaces/asd123Xiao/kafuu_chino_sovits4.0/inference/infer_tool_grad.py b/spaces/asd123Xiao/kafuu_chino_sovits4.0/inference/infer_tool_grad.py
deleted file mode 100644
index b75af49c08e2e724839828bc419792ed580809bb..0000000000000000000000000000000000000000
--- a/spaces/asd123Xiao/kafuu_chino_sovits4.0/inference/infer_tool_grad.py
+++ /dev/null
@@ -1,160 +0,0 @@
-import hashlib
-import json
-import logging
-import os
-import time
-from pathlib import Path
-import io
-import librosa
-import maad
-import numpy as np
-from inference import slicer
-import parselmouth
-import soundfile
-import torch
-import torchaudio
-
-from hubert import hubert_model
-import utils
-from models import SynthesizerTrn
-logging.getLogger('numba').setLevel(logging.WARNING)
-logging.getLogger('matplotlib').setLevel(logging.WARNING)
-
-def resize2d_f0(x, target_len):
-    source = np.array(x)
-    source[source < 0.001] = np.nan
-    target = np.interp(np.arange(0, len(source) * target_len, len(source)) / target_len, np.arange(0, len(source)),
-                       source)
-    res = np.nan_to_num(target)
-    return res
-
-def get_f0(x, p_len, f0_up_key=0):
-
-    time_step = 160 / 16000 * 1000
-    f0_min = 50
-    f0_max = 1100
-    f0_mel_min = 1127 * np.log(1 + f0_min / 700)
-    f0_mel_max = 1127 * np.log(1 + f0_max / 700)
-
-    f0 = parselmouth.Sound(x, 16000).to_pitch_ac(
-        time_step=time_step / 1000, voicing_threshold=0.6,
-        pitch_floor=f0_min, pitch_ceiling=f0_max).selected_array['frequency']
-
-    pad_size = (p_len - len(f0) + 1) // 2
-    if (pad_size > 0 or p_len - len(f0) - pad_size > 0):
-        f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode='constant')
-
-    f0 *= pow(2, f0_up_key / 12)
-    f0_mel = 1127 * np.log(1 + f0 / 700)
-    f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (f0_mel_max - f0_mel_min) + 1
-    f0_mel[f0_mel <= 1] = 1
-    f0_mel[f0_mel > 255] = 255
-    f0_coarse = np.rint(f0_mel).astype(int)  # np.int was removed in NumPy 1.24; use the builtin int
-    return f0_coarse, f0
-
-def clean_pitch(input_pitch):
-    num_nan = np.sum(input_pitch == 1)
-    if num_nan / len(input_pitch) > 0.9:
-        input_pitch[input_pitch != 1] = 1
-    return
input_pitch - - -def plt_pitch(input_pitch): - input_pitch = input_pitch.astype(float) - input_pitch[input_pitch == 1] = np.nan - return input_pitch - - -def f0_to_pitch(ff): - f0_pitch = 69 + 12 * np.log2(ff / 440) - return f0_pitch - - -def fill_a_to_b(a, b): - if len(a) < len(b): - for _ in range(0, len(b) - len(a)): - a.append(a[0]) - - -def mkdir(paths: list): - for path in paths: - if not os.path.exists(path): - os.mkdir(path) - - -class VitsSvc(object): - def __init__(self): - self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - self.SVCVITS = None - self.hps = None - self.speakers = None - self.hubert_soft = utils.get_hubert_model() - - def set_device(self, device): - self.device = torch.device(device) - self.hubert_soft.to(self.device) - if self.SVCVITS != None: - self.SVCVITS.to(self.device) - - def loadCheckpoint(self, path): - self.hps = utils.get_hparams_from_file(f"checkpoints/{path}/config.json") - self.SVCVITS = SynthesizerTrn( - self.hps.data.filter_length // 2 + 1, - self.hps.train.segment_size // self.hps.data.hop_length, - **self.hps.model) - _ = utils.load_checkpoint(f"checkpoints/{path}/model.pth", self.SVCVITS, None) - _ = self.SVCVITS.eval().to(self.device) - self.speakers = self.hps.spk - - def get_units(self, source, sr): - source = source.unsqueeze(0).to(self.device) - with torch.inference_mode(): - units = self.hubert_soft.units(source) - return units - - - def get_unit_pitch(self, in_path, tran): - source, sr = torchaudio.load(in_path) - source = torchaudio.functional.resample(source, sr, 16000) - if len(source.shape) == 2 and source.shape[1] >= 2: - source = torch.mean(source, dim=0).unsqueeze(0) - soft = self.get_units(source, sr).squeeze(0).cpu().numpy() - f0_coarse, f0 = get_f0(source.cpu().numpy()[0], soft.shape[0]*2, tran) - return soft, f0 - - def infer(self, speaker_id, tran, raw_path): - speaker_id = self.speakers[speaker_id] - sid = torch.LongTensor([int(speaker_id)]).to(self.device).unsqueeze(0) - soft, pitch = self.get_unit_pitch(raw_path, tran) - f0 = torch.FloatTensor(clean_pitch(pitch)).unsqueeze(0).to(self.device) - stn_tst = torch.FloatTensor(soft) - with torch.no_grad(): - x_tst = stn_tst.unsqueeze(0).to(self.device) - x_tst = torch.repeat_interleave(x_tst, repeats=2, dim=1).transpose(1, 2) - audio = self.SVCVITS.infer(x_tst, f0=f0, g=sid)[0,0].data.float() - return audio, audio.shape[-1] - - def inference(self,srcaudio,chara,tran,slice_db): - sampling_rate, audio = srcaudio - audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - if sampling_rate != 16000: - audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) - soundfile.write("tmpwav.wav", audio, 16000, format="wav") - chunks = slicer.cut("tmpwav.wav", db_thresh=slice_db) - audio_data, audio_sr = slicer.chunks2audio("tmpwav.wav", chunks) - audio = [] - for (slice_tag, data) in audio_data: - length = int(np.ceil(len(data) / audio_sr * self.hps.data.sampling_rate)) - raw_path = io.BytesIO() - soundfile.write(raw_path, data, audio_sr, format="wav") - raw_path.seek(0) - if slice_tag: - _audio = np.zeros(length) - else: - out_audio, out_sr = self.infer(chara, tran, raw_path) - _audio = out_audio.cpu().numpy() - audio.extend(list(_audio)) - audio = (np.array(audio) * 32768.0).astype('int16') - return (self.hps.data.sampling_rate,audio) diff --git a/spaces/attention-refocusing/Attention-refocusing/gligen/ldm/models/diffusion/ddim.py 
b/spaces/attention-refocusing/Attention-refocusing/gligen/ldm/models/diffusion/ddim.py deleted file mode 100644 index 7db86661e94319b54bec15bf521097bb7b7faf87..0000000000000000000000000000000000000000 --- a/spaces/attention-refocusing/Attention-refocusing/gligen/ldm/models/diffusion/ddim.py +++ /dev/null @@ -1,134 +0,0 @@ -import torch -import numpy as np -from tqdm import tqdm -from functools import partial - -from ldm.modules.diffusionmodules.util import make_ddim_sampling_parameters, make_ddim_timesteps, noise_like - - -class DDIMSampler(object): - def __init__(self, diffusion, model, schedule="linear", alpha_generator_func=None, set_alpha_scale=None): - super().__init__() - self.diffusion = diffusion - self.model = model - self.device = diffusion.betas.device - self.ddpm_num_timesteps = diffusion.num_timesteps - self.schedule = schedule - self.alpha_generator_func = alpha_generator_func - self.set_alpha_scale = set_alpha_scale - - - def register_buffer(self, name, attr): - if type(attr) == torch.Tensor: - attr = attr.to(self.device) - setattr(self, name, attr) - - - def make_schedule(self, ddim_num_steps, ddim_discretize="uniform", ddim_eta=0.): - self.ddim_timesteps = make_ddim_timesteps(ddim_discr_method=ddim_discretize, num_ddim_timesteps=ddim_num_steps, - num_ddpm_timesteps=self.ddpm_num_timesteps,verbose=False) - alphas_cumprod = self.diffusion.alphas_cumprod - assert alphas_cumprod.shape[0] == self.ddpm_num_timesteps, 'alphas have to be defined for each timestep' - to_torch = lambda x: x.clone().detach().to(torch.float32).to(self.device) - - self.register_buffer('betas', to_torch(self.diffusion.betas)) - self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod)) - self.register_buffer('alphas_cumprod_prev', to_torch(self.diffusion.alphas_cumprod_prev)) - - # calculations for diffusion q(x_t | x_{t-1}) and others - self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod.cpu()))) - self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod.cpu()))) - self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod.cpu()))) - self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu()))) - self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu() - 1))) - - # ddim sampling parameters - ddim_sigmas, ddim_alphas, ddim_alphas_prev = make_ddim_sampling_parameters(alphacums=alphas_cumprod.cpu(), - ddim_timesteps=self.ddim_timesteps, - eta=ddim_eta,verbose=False) - self.register_buffer('ddim_sigmas', ddim_sigmas) - self.register_buffer('ddim_alphas', ddim_alphas) - self.register_buffer('ddim_alphas_prev', ddim_alphas_prev) - self.register_buffer('ddim_sqrt_one_minus_alphas', np.sqrt(1. 
- ddim_alphas)) - sigmas_for_original_sampling_steps = ddim_eta * torch.sqrt( - (1 - self.alphas_cumprod_prev) / (1 - self.alphas_cumprod) * ( - 1 - self.alphas_cumprod / self.alphas_cumprod_prev)) - self.register_buffer('ddim_sigmas_for_original_num_steps', sigmas_for_original_sampling_steps) - - - @torch.no_grad() - def sample(self, S, shape, input, uc=None, guidance_scale=1, mask=None, x0=None): - self.make_schedule(ddim_num_steps=S) - return self.ddim_sampling(shape, input, uc, guidance_scale, mask=mask, x0=x0) - - - @torch.no_grad() - def ddim_sampling(self, shape, input, uc, guidance_scale=1, mask=None, x0=None): - b = shape[0] - - img = input["x"] - if img == None: - img = torch.randn(shape, device=self.device) - input["x"] = img - - - time_range = np.flip(self.ddim_timesteps) - total_steps = self.ddim_timesteps.shape[0] - - #iterator = tqdm(time_range, desc='DDIM Sampler', total=total_steps) - iterator = time_range - - if self.alpha_generator_func != None: - alphas = self.alpha_generator_func(len(iterator)) - - - for i, step in enumerate(iterator): - - # set alpha - if self.alpha_generator_func != None: - self.set_alpha_scale(self.model, alphas[i]) - if alphas[i] == 0: - self.model.restore_first_conv_from_SD() - - # run - index = total_steps - i - 1 - input["timesteps"] = torch.full((b,), step, device=self.device, dtype=torch.long) - - if mask is not None: - assert x0 is not None - img_orig = self.diffusion.q_sample( x0, input["timesteps"] ) - img = img_orig * mask + (1. - mask) * img - input["x"] = img - - img, pred_x0 = self.p_sample_ddim(input, index=index, uc=uc, guidance_scale=guidance_scale) - input["x"] = img - - return img - - - @torch.no_grad() - def p_sample_ddim(self, input, index, uc=None, guidance_scale=1): - - - e_t = self.model(input) - if uc is not None and guidance_scale != 1: - unconditional_input = dict(x=input["x"], timesteps=input["timesteps"], context=uc, inpainting_extra_input=input["inpainting_extra_input"], grounding_extra_input=input['grounding_extra_input']) - e_t_uncond = self.model( unconditional_input ) - e_t = e_t_uncond + guidance_scale * (e_t - e_t_uncond) - - # select parameters corresponding to the currently considered timestep - b = input["x"].shape[0] - a_t = torch.full((b, 1, 1, 1), self.ddim_alphas[index], device=self.device) - a_prev = torch.full((b, 1, 1, 1), self.ddim_alphas_prev[index], device=self.device) - sigma_t = torch.full((b, 1, 1, 1), self.ddim_sigmas[index], device=self.device) - sqrt_one_minus_at = torch.full((b, 1, 1, 1), self.ddim_sqrt_one_minus_alphas[index],device=self.device) - - # current prediction for x_0 - pred_x0 = (input["x"] - sqrt_one_minus_at * e_t) / a_t.sqrt() - - # direction pointing to x_t - dir_xt = (1. - a_prev - sigma_t**2).sqrt() * e_t - noise = sigma_t * torch.randn_like( input["x"] ) - x_prev = a_prev.sqrt() * pred_x0 + dir_xt + noise - - return x_prev, pred_x0 diff --git a/spaces/aus10powell/TwitterAccounts/templates/charts/handle_sentiment_breakdown.html b/spaces/aus10powell/TwitterAccounts/templates/charts/handle_sentiment_breakdown.html deleted file mode 100644 index 4d3af1375e6655f124aaba24615ff7c73bdf37b5..0000000000000000000000000000000000000000 --- a/spaces/aus10powell/TwitterAccounts/templates/charts/handle_sentiment_breakdown.html +++ /dev/null @@ -1,35 +0,0 @@ - - - - - - - - - -
      - - - \ No newline at end of file diff --git a/spaces/awacke1/AskMeAnythingSemanticSearch/README.md b/spaces/awacke1/AskMeAnythingSemanticSearch/README.md deleted file mode 100644 index 6736bc14fb5465a37c8104a6412d32e56684e1e0..0000000000000000000000000000000000000000 --- a/spaces/awacke1/AskMeAnythingSemanticSearch/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: AskMeAnythingSemanticSearch -emoji: 🐢 -colorFrom: indigo -colorTo: gray -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/awacke1/Docker-PEFT-ParamEfficiency/README.md b/spaces/awacke1/Docker-PEFT-ParamEfficiency/README.md deleted file mode 100644 index f10f424e221caaaabebf6573705e0e14f56553ee..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Docker-PEFT-ParamEfficiency/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Docker PEFT ParamEfficiency -emoji: 🏃 -colorFrom: indigo -colorTo: purple -sdk: docker -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/awacke1/REBEL-Knowledge-Graph-Generator/app.py b/spaces/awacke1/REBEL-Knowledge-Graph-Generator/app.py deleted file mode 100644 index f59707a0c5072cbc00fe6a63810f2aab678a8bb3..0000000000000000000000000000000000000000 --- a/spaces/awacke1/REBEL-Knowledge-Graph-Generator/app.py +++ /dev/null @@ -1,230 +0,0 @@ -from logging import disable -from pkg_resources import EggMetadata -import streamlit as st -import streamlit.components.v1 as components -import networkx as nx -import matplotlib.pyplot as plt -from pyvis.network import Network -from streamlit.state.session_state import SessionState -from streamlit.type_util import Key -import rebel -import wikipedia -from utils import clip_text -from datetime import datetime as dt -import os - -MAX_TOPICS = 3 - -wiki_state_variables = { - 'has_run_wiki':False, - 'wiki_suggestions': [], - 'wiki_text' : [], - 'nodes':[], - "topics":[], - "html_wiki":"" -} - -free_text_state_variables = { - 'has_run_free':False, - "html_free":"" - -} - -BUTTON_COLUMS = 4 - -def wiki_init_state_variables(): - for k in free_text_state_variables.keys(): - if k in st.session_state: - del st.session_state[k] - - for k, v in wiki_state_variables.items(): - if k not in st.session_state: - st.session_state[k] = v - -def wiki_generate_graph(): - st.session_state["GRAPH_FILENAME"] = str(dt.now().timestamp()*1000) + ".html" - - if 'wiki_text' not in st.session_state: - return - if len(st.session_state['wiki_text']) == 0: - st.error("please enter a topic and select a wiki page first") - return - with st.spinner(text="Generating graph..."): - texts = st.session_state['wiki_text'] - st.session_state['nodes'] = [] - nodes = rebel.generate_knowledge_graph(texts, st.session_state["GRAPH_FILENAME"]) - HtmlFile = open(st.session_state["GRAPH_FILENAME"], 'r', encoding='utf-8') - source_code = HtmlFile.read() - st.session_state["html_wiki"] = source_code - os.remove(st.session_state["GRAPH_FILENAME"]) - for n in nodes: - n = n.lower() - if n not in st.session_state['topics']: - possible_topics = wikipedia.search(n, results = 2) - st.session_state['nodes'].extend(possible_topics) - st.session_state['nodes'] = list(set(st.session_state['nodes'])) - st.session_state['has_run_wiki'] = True - st.success('Done!') - -def wiki_show_suggestion(): - st.session_state['wiki_suggestions'] = [] - with 
st.spinner(text="fetching wiki topics..."): - if st.session_state['input_method'] == "wikipedia": - text = st.session_state.text - if (text is not None) and (text != ""): - subjects = text.split(",")[:MAX_TOPICS] - for subj in subjects: - st.session_state['wiki_suggestions'] += wikipedia.search(subj, results = 3) - -def wiki_show_text(page_title): - with st.spinner(text="fetching wiki page..."): - try: - page = wikipedia.page(title=page_title, auto_suggest=False) - st.session_state['wiki_text'].append(clip_text(page.summary)) - st.session_state['topics'].append(page_title.lower()) - st.session_state['wiki_suggestions'].remove(page_title) - - except wikipedia.DisambiguationError as e: - with st.spinner(text="Woops, ambigious term, recalculating options..."): - st.session_state['wiki_suggestions'].remove(page_title) - temp = st.session_state['wiki_suggestions'] + e.options[:3] - st.session_state['wiki_suggestions'] = list(set(temp)) - except wikipedia.WikipediaException: - st.session_state['wiki_suggestions'].remove(page_title) - -def wiki_add_text(term): - if len(st.session_state['wiki_text']) > MAX_TOPICS: - return - try: - page = wikipedia.page(title=term, auto_suggest=False) - extra_text = clip_text(page.summary) - - st.session_state['wiki_text'].append(extra_text) - st.session_state['topics'].append(term.lower()) - st.session_state['nodes'].remove(term) - - except wikipedia.DisambiguationError as e: - print(e) - with st.spinner(text="Woops, ambigious term, recalculating options..."): - st.session_state['nodes'].remove(term) - temp = st.session_state['nodes'] + e.options[:3] - st.session_state['nodes'] = list(set(temp)) - except wikipedia.WikipediaException as e: - print(e) - st.session_state['nodes'].remove(term) - -def wiki_reset_session(): - for k in wiki_state_variables: - del st.session_state[k] - -def free_reset_session(): - for k in free_text_state_variables: - del st.session_state[k] - -def free_text_generate(): - st.session_state["GRAPH_FILENAME"] = str(dt.now().timestamp()*1000) + ".html" - text = st.session_state['free_text'][0:100] - rebel.generate_knowledge_graph([text], st.session_state["GRAPH_FILENAME"]) - HtmlFile = open(st.session_state["GRAPH_FILENAME"], 'r', encoding='utf-8') - source_code = HtmlFile.read() - st.session_state["html_free"] = source_code - os.remove(st.session_state["GRAPH_FILENAME"]) - st.session_state['has_run_free'] = True - -def free_text_layout(): - st.text_area("Free text", key="free_text", height=5, value="Tardigrades, known colloquially as water bears or moss piglets, are a phylum of eight-legged segmented micro-animals.") - st.button("Generate", on_click=free_text_generate, key="free_text_generate") - -def free_test_init_state_variables(): - for k in wiki_state_variables.keys(): - if k in st.session_state: - del st.session_state[k] - - for k, v in free_text_state_variables.items(): - if k not in st.session_state: - st.session_state[k] = v - -st.title('RE:Belle') -st.markdown( -""" -### Building Beautiful Knowledge Graphs With REBEL -""") -st.selectbox( - 'input method', - ('wikipedia', 'free text'), key="input_method") - - -def show_wiki_hub_page(): - # st.sidebar.button("Reset", on_click=wiki_reset_session, key="reset_key") - - cols = st.columns([8, 1]) - with cols[0]: - st.text_input("wikipedia search term", on_change=wiki_show_suggestion, key="text", value="graphs, are, awesome") - with cols[1]: - st.text('') - st.text('') - st.button("Search", on_click=wiki_show_suggestion, key="show_suggestion_key") - - if 
len(st.session_state['wiki_suggestions']) != 0: - num_buttons = len(st.session_state['wiki_suggestions']) - num_cols = num_buttons if 0 < num_buttons < BUTTON_COLUMS else BUTTON_COLUMS - columns = st.columns([1] * num_cols ) - for q in range(1 + num_buttons//num_cols): - for i, (c, s) in enumerate(zip(columns, st.session_state['wiki_suggestions'][q*num_cols: (q+1)*num_cols])): - with c: - st.button(s, on_click=wiki_show_text, args=(s,), key=str(i)+s+"wiki_suggestion") - - if len(st.session_state['wiki_text']) != 0: - for i, t in enumerate(st.session_state['wiki_text']): - new_expander = st.expander(label=t[:30] + "...", expanded=(i==0)) - with new_expander: - st.markdown(t) - - if len(st.session_state['wiki_text']) > 0: - st.button("Generate", on_click=wiki_generate_graph, key="gen_graph") - - if st.session_state['has_run_wiki']: - - components.html(st.session_state["html_wiki"], width=720, height=600) - num_buttons = len(st.session_state["nodes"]) - num_cols = num_buttons if 0 < num_buttons < BUTTON_COLUMS else BUTTON_COLUMS - columns = st.columns([1] * num_cols + [1]) - - for q in range(1 + num_buttons//num_cols): - for i, (c, s) in enumerate(zip(columns, st.session_state["nodes"][q*num_cols: (q+1)*num_cols])): - with c: - st.button(s, on_click=wiki_add_text, args=(s,), key=str(i)+s) - -def show_free_text_hub_page(): - free_text_layout() - if st.session_state['has_run_free']: - components.html(st.session_state["html_free"], width=720, height=600) - -if st.session_state['input_method'] == "wikipedia": - wiki_init_state_variables() - show_wiki_hub_page() -else: - free_test_init_state_variables() - show_free_text_hub_page() - - - -# st.sidebar.markdown( -""" -## What This Is And Why We Built it - -This space shows how a transformer network can be used to convert *human* text into a computer-queryable format: a **knowledge graph**. Knowledge graphs are graphs where each node (or *vertex* if you're fancy) represent a concept/person/thing and each edge the link between those concepts. If you'd like to know more, you can read [this blogpost](https://www.ml6.eu/knowhow/knowledge-graphs-an-introduction-and-business-applications). - -Knowledge graphs aren't just cool to look at, they are an extremely versatile way of storing data, and are used in machine learning to perform tasks like fraud detection. You can read more about the applications of knowledge graphs in ML in [this blogpost](https://blog.ml6.eu/how-are-knowledge-graphs-and-machine-learning-related-ff6f5c1760b5). - -There is one problem though: building knowledge graphs from scratch is a time-consuming and tedious task, so it would be a lot easier if we could leverage machine learning to **create** them from existing texts. This demo shows how a model named **REBEL** has been trained to do just that: it reads summaries from Wikipedia (or any other text you input), and generates a graph containing the information it distills from the text. -""" -) - -# st.sidebar.markdown( -""" -*Credits for the REBEL model go out to Pere-Lluís Huguet Cabot and Roberto Navigli. 
-The code can be found [here](https://github.com/Babelscape/rebel), -and the original paper [here](https://github.com/Babelscape/rebel/blob/main/docs/EMNLP_2021_REBEL__Camera_Ready_.pdf)* -""" -) \ No newline at end of file diff --git a/spaces/awaiss/vits-models/attentions.py b/spaces/awaiss/vits-models/attentions.py deleted file mode 100644 index 86bc73b5fe98cc7b443e9078553920346c996707..0000000000000000000000000000000000000000 --- a/spaces/awaiss/vits-models/attentions.py +++ /dev/null @@ -1,300 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -from modules import LayerNorm - - -class Encoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init)) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, 
dtype=x.dtype) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert t_s == t_t, "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert t_s == t_t, "Local attention is only available for self-attention." 
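- # Band mask for local attention: triu(-block_length).tril(block_length) keeps ones only in a
- # band of width 2*block_length + 1 around the diagonal; scores outside the band are filled
- # with -1e4 below, so each position attends only to its block_length nearest neighbours.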
- block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s) - output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings) - output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]])) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]])) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]])) - x_flat = x.view([batch, heads, length**2 + length*(length -1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. 
- Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/WebGL.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/WebGL.js deleted file mode 100644 index adf94502e6a176530f1101f4e8220a8cb302cf61..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/examples/js/WebGL.js +++ /dev/null @@ -1,94 +0,0 @@ -/** - * @author alteredq / http://alteredqualia.com/ - * @author mr.doob / http://mrdoob.com/ - */ - -var WEBGL = { - - isWebGLAvailable: function () { - - try { - - var canvas = document.createElement( 'canvas' ); - return !! ( window.WebGLRenderingContext && ( canvas.getContext( 'webgl' ) || canvas.getContext( 'experimental-webgl' ) ) ); - - } catch ( e ) { - - return false; - - } - - }, - - isWebGL2Available: function () { - - try { - - var canvas = document.createElement( 'canvas' ); - return !! 
( window.WebGL2RenderingContext && canvas.getContext( 'webgl2' ) ); - - } catch ( e ) { - - return false; - - } - - }, - - getWebGLErrorMessage: function () { - - return this.getErrorMessage( 1 ); - - }, - - getWebGL2ErrorMessage: function () { - - return this.getErrorMessage( 2 ); - - }, - - getErrorMessage: function ( version ) { - - var names = { - 1: 'WebGL', - 2: 'WebGL 2' - }; - - var contexts = { - 1: window.WebGLRenderingContext, - 2: window.WebGL2RenderingContext - }; - - var message = 'Your $0 does not seem to support $1'; - - var element = document.createElement( 'div' ); - element.id = 'webglmessage'; - element.style.fontFamily = 'monospace'; - element.style.fontSize = '13px'; - element.style.fontWeight = 'normal'; - element.style.textAlign = 'center'; - element.style.background = '#fff'; - element.style.color = '#000'; - element.style.padding = '1.5em'; - element.style.width = '400px'; - element.style.margin = '5em auto 0'; - - if ( contexts[ version ] ) { - - message = message.replace( '$0', 'graphics card' ); - - } else { - - message = message.replace( '$0', 'browser' ); - - } - - message = message.replace( '$1', names[ version ] ); - - element.innerHTML = message; - - return element; - - } - -}; diff --git a/spaces/banana-projects/web3d/node_modules/three/src/extras/curves/CatmullRomCurve3.d.ts b/spaces/banana-projects/web3d/node_modules/three/src/extras/curves/CatmullRomCurve3.d.ts deleted file mode 100644 index 686077d71e3b2162738c867b0001d89286bfbd49..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/extras/curves/CatmullRomCurve3.d.ts +++ /dev/null @@ -1,46 +0,0 @@ -import { Vector3 } from './../../math/Vector3'; -import { Curve } from './../core/Curve'; - -// Extras / Curves ///////////////////////////////////////////////////////////////////// -export namespace CurveUtils { - export function tangentQuadraticBezier( - t: number, - p0: number, - p1: number, - p2: number - ): number; - export function tangentCubicBezier( - t: number, - p0: number, - p1: number, - p2: number, - p3: number - ): number; - export function tangentSpline( - t: number, - p0: number, - p1: number, - p2: number, - p3: number - ): number; - export function interpolate( - p0: number, - p1: number, - p2: number, - p3: number, - t: number - ): number; -} - -export class CatmullRomCurve3 extends Curve { - constructor( - points?: Vector3[], - closed?: boolean, - curveType?: string, - tension?: number - ); - - points: Vector3[]; - - getPoint(t: number): Vector3; -} diff --git a/spaces/bigbio/dataset-explore/app.py b/spaces/bigbio/dataset-explore/app.py deleted file mode 100644 index 1f2033342077e4f4468e0f541e8d8f0b2677451b..0000000000000000000000000000000000000000 --- a/spaces/bigbio/dataset-explore/app.py +++ /dev/null @@ -1,299 +0,0 @@ -""" -BigBIO Dataset Explorer Demo -""" - -from collections import Counter -from collections import defaultdict -import string - -from datasets import load_dataset -from loguru import logger -import numpy as np -import pandas as pd -import plotly.express as px -import spacy -from spacy import displacy -import streamlit as st - -from bigbio.dataloader import BigBioConfigHelpers -from bigbio.hf_maps import BATCH_MAPPERS_TEXT_FROM_SCHEMA -from sklearn.feature_extraction.text import CountVectorizer - - -st.set_page_config(layout="wide") - - -IBM_COLORS = [ - "#648fff", - "#dc267f", - "#ffb000", - "#fe6100", - "#785ef0", - "#000000", - "#ffffff", -] - - -def get_html(html: str): - """Convert HTML so it can be rendered.""" - 
WRAPPER = """
      {}
      """ - # Newlines seem to mess with the rendering - html = html.replace("\n", " ") - return WRAPPER.format(html) - - -@st.cache() -def load_conhelps(): - conhelps = BigBioConfigHelpers() - logger.info(conhelps) - conhelps = conhelps.filtered(lambda x: not x.is_large) - conhelps = conhelps.filtered(lambda x: x.is_bigbio_schema) - conhelps = conhelps.filtered(lambda x: not x.is_local) - return conhelps - - -def update_axis_font(fig): - fig.update_layout( - xaxis = dict(title_font = dict(size=20)), - yaxis = dict(title_font = dict(size=20)), - ) - return fig - - -def draw_histogram(hist_data, col_name, histnorm=None, nbins=25, xmax=None, loc=st): - fig = px.histogram( - hist_data, - x=col_name, - color="split", - color_discrete_sequence=IBM_COLORS, - marginal="box", # or violin, rug - barmode="group", - hover_data=hist_data.columns, - histnorm=histnorm, - nbins=nbins, - range_x=(0, xmax) if xmax else None, - ) - fig = update_axis_font(fig) - loc.plotly_chart(fig, use_container_width=True) - - -def draw_bar(bar_data, x, y, loc=st): - fig = px.bar( - bar_data, - x=x, - y=y, - color="split", - color_discrete_sequence=IBM_COLORS, - barmode="group", - hover_data=bar_data.columns, - ) - fig = update_axis_font(fig) - loc.plotly_chart(fig, use_container_width=True) - - -def parse_metrics(metadata, loc): - for split, meta in metadata.items(): - for key, val in meta.__dict__.items(): - if isinstance(val, int): - loc.metric(label=f"{split}-{key}", value=val) - - -def parse_counters(metadata): - meta = metadata["train"] # using the training counter to fetch the names - counters = [] - for k, v in meta.__dict__.items(): - if "counter" in k and len(v) > 0: - counters.append(k) - return counters - - -# generate the df for histogram -def parse_label_counter(metadata, counter_type): - hist_data = [] - for split, m in metadata.items(): - metadata_counter = getattr(m, counter_type) - for k, v in metadata_counter.items(): - row = {} - row["labels"] = k - row[counter_type] = v - row["split"] = split - hist_data.append(row) - return pd.DataFrame(hist_data) - - - - -# load BigBioConfigHelpers -#================================== -logger.info("about to call load_conhelps") -conhelps = load_conhelps() -logger.info("exiting call load_conhelps") -config_name_to_conhelp = {ch.config.name: ch for ch in conhelps} -ds_display_names = sorted(list(set([ch.display_name for ch in conhelps]))) -ds_display_name_to_config_names = defaultdict(list) -for ch in conhelps: - ds_display_name_to_config_names[ch.display_name].append(ch.config.name) - - -# dataset selection -#================================== - -st.sidebar.title("Dataset Selection") -ds_display_name = st.sidebar.selectbox("dataset name", ds_display_names, index=0) - -config_names = ds_display_name_to_config_names[ds_display_name] -config_name = st.sidebar.selectbox("config name", config_names) -conhelp = config_name_to_conhelp[config_name] - - -st.header(f"Dataset stats for {ds_display_name}") - - -@st.cache() -def load_data(conhelp): - metadata = conhelp.get_metadata() - dsd = conhelp.load_dataset() - dsd = dsd.map( - BATCH_MAPPERS_TEXT_FROM_SCHEMA[conhelp.bigbio_schema_caps.lower()], - batched=True) - - return dsd, metadata - -@st.cache() -def count_vectorize(dsd): - cv = CountVectorizer() - xcvs = {} - dfs_tok_per_samp = [] - for split, ds in dsd.items(): - xcv = cv.fit_transform(ds['text']) - token_counts = np.asarray(xcv.sum(axis=1)).flatten() - df = pd.DataFrame(token_counts, columns=["tokens per sample"]) - df["split"] = split - dfs_tok_per_samp.append(df) - 
xcvs[split] = xcv - df_tok_per_samp = pd.concat(dfs_tok_per_samp) - return xcvs, df_tok_per_samp - - -dsd_load_state = st.info(f"Loading {ds_display_name} - {config_name} ...") -dsd, metadata = load_data(conhelp) -dsd_load_state.empty() - -cv_load_state = st.info(f"Count Vectorizing {ds_display_name} - {config_name} ...") -xcvs, df_tok_per_samp = count_vectorize(dsd) -cv_load_state.empty() - - -st.sidebar.subheader(f"BigBIO Schema = {conhelp.bigbio_schema_caps}") - -st.sidebar.subheader("Tasks Supported by Dataset") -tasks = conhelp.tasks -tasks = [string.capwords(task.replace("_", " ")) for task in tasks] -st.sidebar.markdown( - """ - {} - """.format( - "\n".join([ - f"- {task}" for task in tasks - ])) -) - -st.sidebar.subheader("Languages") -langs = conhelp.languages -st.sidebar.markdown( - """ - {} - """.format("\n".join([f"- {lang}" for lang in langs])) -) - -st.sidebar.subheader("Home Page") -st.sidebar.write(conhelp.homepage) - -st.sidebar.subheader("Description") -st.sidebar.write(conhelp.description) - -st.sidebar.subheader("Citation") -st.sidebar.markdown(f"""\ -``` -{conhelp.citation} -```` -""" - ) -st.sidebar.subheader("Counts") -parse_metrics(metadata, st.sidebar) - - - -# dataframe display -#if "train" in dsd.keys(): -# st.subheader("Sample Preview") -# df = pd.DataFrame.from_dict(dsd["train"]) -# st.write(df.head(10)) - - - -# draw token distribution -st.subheader("Sample Length Distribution") -max_xmax = int(df_tok_per_samp["tokens per sample"].max()) -xmax = st.slider("xmax", min_value=0, max_value=max_xmax, value=max_xmax) -histnorms = ['percent', 'probability', 'density', 'probability density', None] -histnorm = st.selectbox("histnorm", histnorms) -draw_histogram(df_tok_per_samp, "tokens per sample", histnorm=histnorm, xmax=xmax, loc=st) - - - -st.subheader("Counter Distributions") -counters = parse_counters(metadata) -counter_type = st.selectbox("counter_type", counters) -label_df = parse_label_counter(metadata, counter_type) -label_max = int(label_df[counter_type].max() - 1) -label_min = int(label_df[counter_type].min()) -filter_value = st.slider("minimum cutoff", label_min, label_max) -label_df = label_df[label_df[counter_type] >= filter_value] -# draw bar chart for counter -draw_bar(label_df, "labels", counter_type, st) - - -st.subheader("Sample Explorer") -split = st.selectbox("split", list(dsd.keys())) -sample_index = st.number_input( - "sample index", - min_value=0, - max_value=len(dsd[split])-1, - value=0, -) - -sample = dsd[split][sample_index] - - -if conhelp.bigbio_schema_caps == "KB": - nlp = spacy.blank("en") - text = sample["text"] - doc = nlp(text) - spans = [] - for bb_ent in sample["entities"]: - span = doc.char_span( - bb_ent["offsets"][0][0], - bb_ent["offsets"][0][1], - label=bb_ent["type"], - ) - spans.append(span) - doc.spans["sc"] = spans - html = displacy.render( - doc, - style="span", - options={ - "colors": { - et: clr for et,clr in zip( - metadata[split].entities_type_counter.keys(), - IBM_COLORS*10 - ) - } - }, - ) - style = "" - st.write(f"{style}{get_html(html)}", unsafe_allow_html=True) - - -st.write(sample) \ No newline at end of file diff --git a/spaces/bikemright/overweight-AI/index.html b/spaces/bikemright/overweight-AI/index.html deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/bioriAsaeru/text-to-voice/Avanutri Full Crack.rar.md b/spaces/bioriAsaeru/text-to-voice/Avanutri Full Crack.rar.md deleted file mode 100644 index 
ff99a9e8ab015d241c24a7ec920b9f9e36e36119..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Avanutri Full Crack.rar.md +++ /dev/null @@ -1,120 +0,0 @@ -

Avanutri Full Crack.rar: What It Is and How to Use It

Avanutri is nutrition software that lets you perform nutritional assessments, calculate energy expenditure, build balanced diets, generate reports and charts, and much more. It is a very useful tool for nutrition professionals who want to offer a quality service to their clients.

avanutri full crack.rar

Download ⚙⚙⚙ https://urloso.com/2uyPnl

To use Avanutri you need to buy a license, which has a monthly or annual cost. However, there is a way to get this software for free: using avanutri full crack.rar.

Avanutri full crack.rar is a program that lets you activate Avanutri without having to buy a license or register. That way you can use all of the software's features and modes without any problem or restriction.

In this article we explain what avanutri full crack.rar is, how to download it, and how to use it to enjoy Avanutri without limits.

What avanutri full crack.rar is

Avanutri full crack.rar is a program that activates Avanutri without a purchased license or registration. It is a compressed file in ZIP or RAR format that contains the crack program and the instructions for using it.

The crack program modifies Avanutri's code so that the software can be used without restrictions. It runs on your computer and lets you select the version of Avanutri you want to activate; it then performs the process automatically and lets you open and use Avanutri without any problem.

Avanutri full crack.rar is an illegal program, not authorized by the company that develops Avanutri. You should therefore take some precautions when using it, such as not updating Avanutri, not connecting to the internet, and not sharing or abusing the crack. That way you will avoid legal problems, computer viruses, and damage to your computer or your conscience.

How to download avanutri full crack.rar

To download avanutri full crack.rar, first search for it on the internet. Many websites offer this program, but not all of them are safe or trustworthy, so we recommend using an antivirus and an ad blocker before visiting any page.

Once you have found a website that seems reliable, download the avanutri full crack.rar file. It is normally a compressed file in ZIP or RAR format that you will have to extract on your computer. Inside it you will find the crack program and the instructions for using it.

How to use avanutri full crack.rar

To use avanutri full crack.rar, first install Avanutri on your computer. You can download it from the official Avanutri page or from any other website that offers it. Remember that you do not have to buy it or register it.

Once Avanutri is installed, run the avanutri full crack.rar program. It will ask you to select the version of Avanutri you want to activate and will then do the rest automatically. In some cases you will have to copy and paste the crack file into the folder where Avanutri was installed.

When the program has finished, you can open Avanutri and use it without any problem. You will not have to enter any code or activation key: the software will be completely unlocked, and you will have access to all its features and modes.

What advantages avanutri full crack.rar has

Using avanutri full crack.rar has many advantages, among them the following:

• You save money: you do not have to pay for a license to use Avanutri; a single program gives you access.
• You save time: you do not have to wait for Avanutri to be activated with a key or code.
• You save space: you do not have to install each version of Avanutri separately; you can keep them all in a single folder.
• You have more options: you can choose between different versions of Avanutri according to your needs and preferences.
• You have more fun: you can use Avanutri without restrictions or limitations and enjoy all its features and modes.

What precautions to take when using avanutri full crack.rar

Although using avanutri full crack.rar has many advantages, you should also take some precautions, since it is an illegal program not authorized by the company. These are some of them:

• Do not update Avanutri: if you update it from the official page or from the software itself, the crack may stop working or be flagged as a virus.
• Do not connect to the internet: if you go online while using the crack, information may be sent to the company or the software may be blocked.
• Do not share the crack: if you share the crack file with other people or upload it to the internet, you may expose yourself to legal problems or computer viruses.
• Do not abuse the crack: if you use it too often or for too long, you may damage your computer or your conscience.

Conclusion

Avanutri full crack.rar is a program that lets you use Avanutri without paying for a license or registering. It is a quick and easy way to access one of the best nutrition software packages on the market.

To use it, you only have to download it from the internet, run it on your computer, and select the version of Avanutri you want to activate. You will then be able to use all the software's features and modes without any problem or restriction.

Keep in mind, however, that it is an illegal, unauthorized program, so take the precautions described above: do not update Avanutri, connect to the internet, share the crack, or abuse it. That way you will avoid legal problems, computer viruses, and damage to your computer or your conscience.

What Avanutri is and what it is for

As noted above, Avanutri is nutrition software aimed at nutrition professionals. Specifically, it lets you perform the following functions:

• Nutritional assessment: you can measure your clients' weight, height, body mass index, body fat percentage, basal metabolism, and other parameters, and analyze body composition and nutritional status.
• Energy expenditure calculation: you can estimate total energy expenditure and energy expenditure from physical activity, as well as energy balance and caloric requirements.
• Diet planning: you can create personalized meal plans according to each client's goals, preferences, allergies, intolerances, and pathologies, and adjust portions, calories, macronutrients, and micronutrients.
• Reports and charts: you can generate detailed reports and comparative charts of your clients' progress, and print them or send them by e-mail.
• Other functions: you can access a database of more than 10,000 foods and recipes, consult food composition tables and nutritional recommendations, build your own database of clients and foods, and much more.

What benefits using Avanutri has

Using Avanutri benefits both you and your clients:

• Better service: more professional, complete, and personalized care, with more precise assessments, more balanced diets, and clearer, more attractive reports.
• Higher productivity: assessments, energy expenditure calculations, diet planning, and reporting all become quick and simple, saving you time and effort.
• Broader knowledge: you can consult the latest scientific developments, food composition tables, and nutritional recommendations, plus the built-in database of more than 10,000 foods and recipes.
• Client loyalty: a more professional service and more effective, motivating follow-up improve your clients' satisfaction and trust.

How to buy a license to use Avanutri

If you want to use Avanutri legally, you need to buy a license with a monthly or annual cost. That gives you access to all the software's features and modes without any problem or restriction. Just follow these steps:

1. Go to the official Avanutri website and create a free account, or log in with an existing one.
2. Choose the plan that suits you best, based on the number of licenses you want and how long you want to use them.
3. Pay for the plan with a credit card, PayPal, or another method. You will receive an activation code or key for the software.
4. Download and install Avanutri on your computer, open it, and enter the activation code or key when prompted.
5. Enjoy Avanutri without any problem or restriction, with access to all its features and modes as well as updates, additional content, and technical support.

Conclusion

Avanutri is nutrition software for assessments, energy expenditure calculation, diet planning, and reporting. Used legally, it requires a paid license; avanutri full crack.rar is an illegal workaround that unlocks the software but carries the risks described above.

We hope this article has been useful and has helped you understand how avanutri full crack.rar works. If you have any questions or comments, feel free to leave us a message. And if you liked this article, share it with your friends and family. Thank you for reading!

3cee63e6c2
      -
      -
      \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/Enfile Gars La Tete Dans Le Vagin.md b/spaces/bioriAsaeru/text-to-voice/Enfile Gars La Tete Dans Le Vagin.md deleted file mode 100644 index 5c11a9709e2f4a878a290226193f19042d756407..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Enfile Gars La Tete Dans Le Vagin.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Enfile Gars La Tete Dans Le Vagin


      DOWNLOADhttps://urloso.com/2uyRKP



      - - aaccfb2cb3
      -
      -
      -

      diff --git a/spaces/bioriAsaeru/text-to-voice/HD Online Player (filem Rock 2005 Full Movie Download).md b/spaces/bioriAsaeru/text-to-voice/HD Online Player (filem Rock 2005 Full Movie Download).md deleted file mode 100644 index e56014f6b67c58ea70d93023bf9dfeb4404b15d0..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/HD Online Player (filem Rock 2005 Full Movie Download).md +++ /dev/null @@ -1,6 +0,0 @@ -

      HD Online Player (filem rock 2005 full movie download)


      Download Zip 🗹 https://urloso.com/2uyPqo



      - - 4d29de3e1b
      -
      -
      -

      diff --git a/spaces/bioriAsaeru/text-to-voice/Kick Movie ((NEW)) Download Free Utorrent Movies.md b/spaces/bioriAsaeru/text-to-voice/Kick Movie ((NEW)) Download Free Utorrent Movies.md deleted file mode 100644 index 11c328d069335c34e92380df1e9d269a6e26df0f..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Kick Movie ((NEW)) Download Free Utorrent Movies.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Kick Movie Download Free Utorrent Movies


      Download Ziphttps://urloso.com/2uyPur



      -
-You can download heavy PC games, HD movies, and software; this is all about uTorrent ... HD movie 1080p download. You can download the Kick 1080p movie from here. CLICK HERE TO DOWNLOAD. Share ... 1fdad05405
      -
      -
      -

      diff --git a/spaces/bofenghuang/whisper-demo-german/run_demo_hf_multiple_models.py b/spaces/bofenghuang/whisper-demo-german/run_demo_hf_multiple_models.py deleted file mode 100644 index 39f4d9fe22e0af2f77b16d6f9326ebcf2acd46e2..0000000000000000000000000000000000000000 --- a/spaces/bofenghuang/whisper-demo-german/run_demo_hf_multiple_models.py +++ /dev/null @@ -1,160 +0,0 @@ -import logging -import warnings - -import gradio as gr -import pytube as pt -import torch -from huggingface_hub import model_info -from transformers import pipeline -from transformers.utils.logging import disable_progress_bar - -warnings.filterwarnings("ignore") -disable_progress_bar() - -DEFAULT_MODEL_NAME = "bofenghuang/whisper-large-v2-cv11-german" -# make sure no OOM -MODEL_NAMES = [ - "bofenghuang/whisper-medium-cv11-german", - "bofenghuang/whisper-large-v2-cv11-german", -] -LANG = "de" -CHUNK_LENGTH_S = 30 -MAX_NEW_TOKENS = 225 - -logging.basicConfig( - format="%(asctime)s [%(levelname)s] [%(name)s] %(message)s", - datefmt="%Y-%m-%dT%H:%M:%SZ", -) -logger = logging.getLogger(__name__) -logger.setLevel(logging.DEBUG) - -device = 0 if torch.cuda.is_available() else "cpu" -logger.info(f"Model will be loaded on device `{device}`") - -cached_models = {} - - -def print_cuda_memory_info(): - used_mem, tot_mem = torch.cuda.mem_get_info() - logger.info(f"CUDA memory info - Free: {used_mem / 1024 ** 3:.2f} Gb, used: {(tot_mem - used_mem) / 1024 ** 3:.2f} Gb, total: {tot_mem / 1024 ** 3:.2f} Gb") - - -def print_memory_info(): - # todo - if device == "cpu": - pass - else: - print_cuda_memory_info() - - -def maybe_load_cached_pipeline(model_name): - pipe = cached_models.get(model_name) - if pipe is None: - # load pipeline - # todo: set decoding option for pipeline - pipe = pipeline( - task="automatic-speech-recognition", - model=model_name, - chunk_length_s=CHUNK_LENGTH_S, - device=device, - ) - # set forced_decoder_ids - pipe.model.config.forced_decoder_ids = pipe.tokenizer.get_decoder_prompt_ids(language=LANG, task="transcribe") - # limit genneration max length - pipe.model.config.max_length = MAX_NEW_TOKENS + 1 - - logger.info(f"`{model_name}` pipeline has been initialized") - print_memory_info() - - cached_models[model_name] = pipe - return pipe - - -def transcribe(microphone, file_upload, model_name): - warn_output = "" - if (microphone is not None) and (file_upload is not None): - warn_output = ( - "WARNING: You've uploaded an audio file and used the microphone. " - "The recorded file from the microphone will be used and the uploaded audio will be discarded.\n" - ) - - elif (microphone is None) and (file_upload is None): - return "ERROR: You have to either use the microphone or upload an audio file" - - file = microphone if microphone is not None else file_upload - - pipe = maybe_load_cached_pipeline(model_name) - text = pipe(file)["text"] - - logger.info(f"Transcription by `{model_name}`: {text}") - - return warn_output + text - - -def _return_yt_html_embed(yt_url): - video_id = yt_url.split("?v=")[-1] - HTML_str = ( - f'
      ' - "
      " - ) - return HTML_str - - -def yt_transcribe(yt_url, model_name): - yt = pt.YouTube(yt_url) - html_embed_str = _return_yt_html_embed(yt_url) - stream = yt.streams.filter(only_audio=True)[0] - stream.download(filename="audio.mp3") - - pipe = maybe_load_cached_pipeline(model_name) - text = pipe("audio.mp3")["text"] - - logger.info(f"Transcription: {text}") - - return html_embed_str, text - - -# load default model -maybe_load_cached_pipeline(DEFAULT_MODEL_NAME) - -demo = gr.Blocks() - -mf_transcribe = gr.Interface( - fn=transcribe, - inputs=[ - gr.inputs.Audio(source="microphone", type="filepath", optional=True, label="Record"), - gr.inputs.Audio(source="upload", type="filepath", optional=True, label="Upload File"), - gr.inputs.Dropdown(choices=MODEL_NAMES, default=DEFAULT_MODEL_NAME, label="Whisper Model"), - ], - # outputs="text", - outputs=gr.outputs.Textbox(label="Transcription"), - layout="horizontal", - theme="huggingface", - title="Whisper German Demo 🇩🇪 : Transcribe Audio", - description="Transcribe long-form microphone or audio inputs with the click of a button!", - allow_flagging="never", -) - -yt_transcribe = gr.Interface( - fn=yt_transcribe, - inputs=[ - gr.inputs.Textbox(lines=1, placeholder="Paste the URL to a YouTube video here", label="YouTube URL"), - gr.inputs.Dropdown(choices=MODEL_NAMES, default=DEFAULT_MODEL_NAME, label="Whisper Model"), - ], - # outputs=["html", "text"], - outputs=[ - gr.outputs.HTML(label="YouTube Page"), - gr.outputs.Textbox(label="Transcription"), - ], - layout="horizontal", - theme="huggingface", - title="Whisper German Demo 🇩🇪 : Transcribe YouTube", - description="Transcribe long-form YouTube videos with the click of a button!", - allow_flagging="never", -) - -with demo: - gr.TabbedInterface([mf_transcribe, yt_transcribe], ["Transcribe Audio", "Transcribe YouTube"]) - -# demo.launch(server_name="0.0.0.0", debug=True, share=True) -demo.launch(enable_queue=True) diff --git a/spaces/bradarrML/stablediffusion-infinity/js/keyboard.js b/spaces/bradarrML/stablediffusion-infinity/js/keyboard.js deleted file mode 100644 index cf9757878c3c12b6b178a129860c12ca2b68b5be..0000000000000000000000000000000000000000 --- a/spaces/bradarrML/stablediffusion-infinity/js/keyboard.js +++ /dev/null @@ -1,37 +0,0 @@ - -window.my_setup_keyboard=setInterval(function(){ - let app=document.querySelector("gradio-app"); - app=app.shadowRoot??app; - let frame=app.querySelector("#sdinfframe").contentWindow; - console.log("Check iframe..."); - if(frame.setup_shortcut) - { - frame.setup_shortcut(json); - clearInterval(window.my_setup_keyboard); - } -}, 1000); -var config=JSON.parse(json); -var key_map={}; -Object.keys(config.shortcut).forEach(k=>{ - key_map[config.shortcut[k]]=k; -}); -document.addEventListener("keydown", e => { - if(e.target.tagName!="INPUT"&&e.target.tagName!="GRADIO-APP"&&e.target.tagName!="TEXTAREA") - { - let key=e.key; - if(e.ctrlKey) - { - key="Ctrl+"+e.key; - if(key in key_map) - { - e.preventDefault(); - } - } - let app=document.querySelector("gradio-app"); - app=app.shadowRoot??app; - let frame=app.querySelector("#sdinfframe").contentDocument; - frame.dispatchEvent( - new KeyboardEvent("keydown", {key: e.key, ctrlKey: e.ctrlKey}) - ); - } -}) \ No newline at end of file diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/datasets/prepare_cocofied_lvis.py b/spaces/brjathu/HMR2.0/vendor/detectron2/datasets/prepare_cocofied_lvis.py deleted file mode 100644 index 245c88482a9e2405e5a912b5c560aed78a614a13..0000000000000000000000000000000000000000 --- 
a/spaces/brjathu/HMR2.0/vendor/detectron2/datasets/prepare_cocofied_lvis.py +++ /dev/null @@ -1,176 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. - -import copy -import json -import os -from collections import defaultdict - -# This mapping is extracted from the official LVIS mapping: -# https://github.com/lvis-dataset/lvis-api/blob/master/data/coco_to_synset.json -COCO_SYNSET_CATEGORIES = [ - {"synset": "person.n.01", "coco_cat_id": 1}, - {"synset": "bicycle.n.01", "coco_cat_id": 2}, - {"synset": "car.n.01", "coco_cat_id": 3}, - {"synset": "motorcycle.n.01", "coco_cat_id": 4}, - {"synset": "airplane.n.01", "coco_cat_id": 5}, - {"synset": "bus.n.01", "coco_cat_id": 6}, - {"synset": "train.n.01", "coco_cat_id": 7}, - {"synset": "truck.n.01", "coco_cat_id": 8}, - {"synset": "boat.n.01", "coco_cat_id": 9}, - {"synset": "traffic_light.n.01", "coco_cat_id": 10}, - {"synset": "fireplug.n.01", "coco_cat_id": 11}, - {"synset": "stop_sign.n.01", "coco_cat_id": 13}, - {"synset": "parking_meter.n.01", "coco_cat_id": 14}, - {"synset": "bench.n.01", "coco_cat_id": 15}, - {"synset": "bird.n.01", "coco_cat_id": 16}, - {"synset": "cat.n.01", "coco_cat_id": 17}, - {"synset": "dog.n.01", "coco_cat_id": 18}, - {"synset": "horse.n.01", "coco_cat_id": 19}, - {"synset": "sheep.n.01", "coco_cat_id": 20}, - {"synset": "beef.n.01", "coco_cat_id": 21}, - {"synset": "elephant.n.01", "coco_cat_id": 22}, - {"synset": "bear.n.01", "coco_cat_id": 23}, - {"synset": "zebra.n.01", "coco_cat_id": 24}, - {"synset": "giraffe.n.01", "coco_cat_id": 25}, - {"synset": "backpack.n.01", "coco_cat_id": 27}, - {"synset": "umbrella.n.01", "coco_cat_id": 28}, - {"synset": "bag.n.04", "coco_cat_id": 31}, - {"synset": "necktie.n.01", "coco_cat_id": 32}, - {"synset": "bag.n.06", "coco_cat_id": 33}, - {"synset": "frisbee.n.01", "coco_cat_id": 34}, - {"synset": "ski.n.01", "coco_cat_id": 35}, - {"synset": "snowboard.n.01", "coco_cat_id": 36}, - {"synset": "ball.n.06", "coco_cat_id": 37}, - {"synset": "kite.n.03", "coco_cat_id": 38}, - {"synset": "baseball_bat.n.01", "coco_cat_id": 39}, - {"synset": "baseball_glove.n.01", "coco_cat_id": 40}, - {"synset": "skateboard.n.01", "coco_cat_id": 41}, - {"synset": "surfboard.n.01", "coco_cat_id": 42}, - {"synset": "tennis_racket.n.01", "coco_cat_id": 43}, - {"synset": "bottle.n.01", "coco_cat_id": 44}, - {"synset": "wineglass.n.01", "coco_cat_id": 46}, - {"synset": "cup.n.01", "coco_cat_id": 47}, - {"synset": "fork.n.01", "coco_cat_id": 48}, - {"synset": "knife.n.01", "coco_cat_id": 49}, - {"synset": "spoon.n.01", "coco_cat_id": 50}, - {"synset": "bowl.n.03", "coco_cat_id": 51}, - {"synset": "banana.n.02", "coco_cat_id": 52}, - {"synset": "apple.n.01", "coco_cat_id": 53}, - {"synset": "sandwich.n.01", "coco_cat_id": 54}, - {"synset": "orange.n.01", "coco_cat_id": 55}, - {"synset": "broccoli.n.01", "coco_cat_id": 56}, - {"synset": "carrot.n.01", "coco_cat_id": 57}, - {"synset": "frank.n.02", "coco_cat_id": 58}, - {"synset": "pizza.n.01", "coco_cat_id": 59}, - {"synset": "doughnut.n.02", "coco_cat_id": 60}, - {"synset": "cake.n.03", "coco_cat_id": 61}, - {"synset": "chair.n.01", "coco_cat_id": 62}, - {"synset": "sofa.n.01", "coco_cat_id": 63}, - {"synset": "pot.n.04", "coco_cat_id": 64}, - {"synset": "bed.n.01", "coco_cat_id": 65}, - {"synset": "dining_table.n.01", "coco_cat_id": 67}, - {"synset": "toilet.n.02", "coco_cat_id": 70}, - {"synset": "television_receiver.n.01", "coco_cat_id": 72}, - {"synset": "laptop.n.01", "coco_cat_id": 73}, - 
{"synset": "mouse.n.04", "coco_cat_id": 74}, - {"synset": "remote_control.n.01", "coco_cat_id": 75}, - {"synset": "computer_keyboard.n.01", "coco_cat_id": 76}, - {"synset": "cellular_telephone.n.01", "coco_cat_id": 77}, - {"synset": "microwave.n.02", "coco_cat_id": 78}, - {"synset": "oven.n.01", "coco_cat_id": 79}, - {"synset": "toaster.n.02", "coco_cat_id": 80}, - {"synset": "sink.n.01", "coco_cat_id": 81}, - {"synset": "electric_refrigerator.n.01", "coco_cat_id": 82}, - {"synset": "book.n.01", "coco_cat_id": 84}, - {"synset": "clock.n.01", "coco_cat_id": 85}, - {"synset": "vase.n.01", "coco_cat_id": 86}, - {"synset": "scissors.n.01", "coco_cat_id": 87}, - {"synset": "teddy.n.01", "coco_cat_id": 88}, - {"synset": "hand_blower.n.01", "coco_cat_id": 89}, - {"synset": "toothbrush.n.01", "coco_cat_id": 90}, -] - - -def cocofy_lvis(input_filename, output_filename): - """ - Filter LVIS instance segmentation annotations to remove all categories that are not included in - COCO. The new json files can be used to evaluate COCO AP using `lvis-api`. The category ids in - the output json are the incontiguous COCO dataset ids. - - Args: - input_filename (str): path to the LVIS json file. - output_filename (str): path to the COCOfied json file. - """ - - with open(input_filename, "r") as f: - lvis_json = json.load(f) - - lvis_annos = lvis_json.pop("annotations") - cocofied_lvis = copy.deepcopy(lvis_json) - lvis_json["annotations"] = lvis_annos - - # Mapping from lvis cat id to coco cat id via synset - lvis_cat_id_to_synset = {cat["id"]: cat["synset"] for cat in lvis_json["categories"]} - synset_to_coco_cat_id = {x["synset"]: x["coco_cat_id"] for x in COCO_SYNSET_CATEGORIES} - # Synsets that we will keep in the dataset - synsets_to_keep = set(synset_to_coco_cat_id.keys()) - coco_cat_id_with_instances = defaultdict(int) - - new_annos = [] - ann_id = 1 - for ann in lvis_annos: - lvis_cat_id = ann["category_id"] - synset = lvis_cat_id_to_synset[lvis_cat_id] - if synset not in synsets_to_keep: - continue - coco_cat_id = synset_to_coco_cat_id[synset] - new_ann = copy.deepcopy(ann) - new_ann["category_id"] = coco_cat_id - new_ann["id"] = ann_id - ann_id += 1 - new_annos.append(new_ann) - coco_cat_id_with_instances[coco_cat_id] += 1 - cocofied_lvis["annotations"] = new_annos - - for image in cocofied_lvis["images"]: - for key in ["not_exhaustive_category_ids", "neg_category_ids"]: - new_category_list = [] - for lvis_cat_id in image[key]: - synset = lvis_cat_id_to_synset[lvis_cat_id] - if synset not in synsets_to_keep: - continue - coco_cat_id = synset_to_coco_cat_id[synset] - new_category_list.append(coco_cat_id) - coco_cat_id_with_instances[coco_cat_id] += 1 - image[key] = new_category_list - - coco_cat_id_with_instances = set(coco_cat_id_with_instances.keys()) - - new_categories = [] - for cat in lvis_json["categories"]: - synset = cat["synset"] - if synset not in synsets_to_keep: - continue - coco_cat_id = synset_to_coco_cat_id[synset] - if coco_cat_id not in coco_cat_id_with_instances: - continue - new_cat = copy.deepcopy(cat) - new_cat["id"] = coco_cat_id - new_categories.append(new_cat) - cocofied_lvis["categories"] = new_categories - - with open(output_filename, "w") as f: - json.dump(cocofied_lvis, f) - print("{} is COCOfied and stored in {}.".format(input_filename, output_filename)) - - -if __name__ == "__main__": - dataset_dir = os.path.join(os.getenv("DETECTRON2_DATASETS", "datasets"), "lvis") - for s in ["lvis_v0.5_train", "lvis_v0.5_val"]: - print("Start COCOfing {}.".format(s)) - cocofy_lvis( - 
os.path.join(dataset_dir, "{}.json".format(s)), - os.path.join(dataset_dir, "{}_cocofied.json".format(s)), - ) diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/data/datasets/builtin.py b/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/data/datasets/builtin.py deleted file mode 100644 index c3a68aa833f12f0fa324a269c36190f21b8a75bd..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/data/datasets/builtin.py +++ /dev/null @@ -1,259 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. - - -""" -This file registers pre-defined datasets at hard-coded paths, and their metadata. - -We hard-code metadata for common datasets. This will enable: -1. Consistency check when loading the datasets -2. Use models on these standard datasets directly and run demos, - without having to download the dataset annotations - -We hard-code some paths to the dataset that's assumed to -exist in "./datasets/". - -Users SHOULD NOT use this file to create new dataset / metadata for new dataset. -To add new dataset, refer to the tutorial "docs/DATASETS.md". -""" - -import os - -from detectron2.data import DatasetCatalog, MetadataCatalog - -from .builtin_meta import ADE20K_SEM_SEG_CATEGORIES, _get_builtin_metadata -from .cityscapes import load_cityscapes_instances, load_cityscapes_semantic -from .cityscapes_panoptic import register_all_cityscapes_panoptic -from .coco import load_sem_seg, register_coco_instances -from .coco_panoptic import register_coco_panoptic, register_coco_panoptic_separated -from .lvis import get_lvis_instances_meta, register_lvis_instances -from .pascal_voc import register_pascal_voc - -# ==== Predefined datasets and splits for COCO ========== - -_PREDEFINED_SPLITS_COCO = {} -_PREDEFINED_SPLITS_COCO["coco"] = { - "coco_2014_train": ("coco/train2014", "coco/annotations/instances_train2014.json"), - "coco_2014_val": ("coco/val2014", "coco/annotations/instances_val2014.json"), - "coco_2014_minival": ("coco/val2014", "coco/annotations/instances_minival2014.json"), - "coco_2014_valminusminival": ( - "coco/val2014", - "coco/annotations/instances_valminusminival2014.json", - ), - "coco_2017_train": ("coco/train2017", "coco/annotations/instances_train2017.json"), - "coco_2017_val": ("coco/val2017", "coco/annotations/instances_val2017.json"), - "coco_2017_test": ("coco/test2017", "coco/annotations/image_info_test2017.json"), - "coco_2017_test-dev": ("coco/test2017", "coco/annotations/image_info_test-dev2017.json"), - "coco_2017_val_100": ("coco/val2017", "coco/annotations/instances_val2017_100.json"), -} - -_PREDEFINED_SPLITS_COCO["coco_person"] = { - "keypoints_coco_2014_train": ( - "coco/train2014", - "coco/annotations/person_keypoints_train2014.json", - ), - "keypoints_coco_2014_val": ("coco/val2014", "coco/annotations/person_keypoints_val2014.json"), - "keypoints_coco_2014_minival": ( - "coco/val2014", - "coco/annotations/person_keypoints_minival2014.json", - ), - "keypoints_coco_2014_valminusminival": ( - "coco/val2014", - "coco/annotations/person_keypoints_valminusminival2014.json", - ), - "keypoints_coco_2017_train": ( - "coco/train2017", - "coco/annotations/person_keypoints_train2017.json", - ), - "keypoints_coco_2017_val": ("coco/val2017", "coco/annotations/person_keypoints_val2017.json"), - "keypoints_coco_2017_val_100": ( - "coco/val2017", - "coco/annotations/person_keypoints_val2017_100.json", - ), -} - - -_PREDEFINED_SPLITS_COCO_PANOPTIC = { - "coco_2017_train_panoptic": ( - # This is the original 
panoptic annotation directory - "coco/panoptic_train2017", - "coco/annotations/panoptic_train2017.json", - # This directory contains semantic annotations that are - # converted from panoptic annotations. - # It is used by PanopticFPN. - # You can use the script at detectron2/datasets/prepare_panoptic_fpn.py - # to create these directories. - "coco/panoptic_stuff_train2017", - ), - "coco_2017_val_panoptic": ( - "coco/panoptic_val2017", - "coco/annotations/panoptic_val2017.json", - "coco/panoptic_stuff_val2017", - ), - "coco_2017_val_100_panoptic": ( - "coco/panoptic_val2017_100", - "coco/annotations/panoptic_val2017_100.json", - "coco/panoptic_stuff_val2017_100", - ), -} - - -def register_all_coco(root): - for dataset_name, splits_per_dataset in _PREDEFINED_SPLITS_COCO.items(): - for key, (image_root, json_file) in splits_per_dataset.items(): - # Assume pre-defined datasets live in `./datasets`. - register_coco_instances( - key, - _get_builtin_metadata(dataset_name), - os.path.join(root, json_file) if "://" not in json_file else json_file, - os.path.join(root, image_root), - ) - - for ( - prefix, - (panoptic_root, panoptic_json, semantic_root), - ) in _PREDEFINED_SPLITS_COCO_PANOPTIC.items(): - prefix_instances = prefix[: -len("_panoptic")] - instances_meta = MetadataCatalog.get(prefix_instances) - image_root, instances_json = instances_meta.image_root, instances_meta.json_file - # The "separated" version of COCO panoptic segmentation dataset, - # e.g. used by Panoptic FPN - register_coco_panoptic_separated( - prefix, - _get_builtin_metadata("coco_panoptic_separated"), - image_root, - os.path.join(root, panoptic_root), - os.path.join(root, panoptic_json), - os.path.join(root, semantic_root), - instances_json, - ) - # The "standard" version of COCO panoptic segmentation dataset, - # e.g. 
used by Panoptic-DeepLab - register_coco_panoptic( - prefix, - _get_builtin_metadata("coco_panoptic_standard"), - image_root, - os.path.join(root, panoptic_root), - os.path.join(root, panoptic_json), - instances_json, - ) - - -# ==== Predefined datasets and splits for LVIS ========== - - -_PREDEFINED_SPLITS_LVIS = { - "lvis_v1": { - "lvis_v1_train": ("coco/", "lvis/lvis_v1_train.json"), - "lvis_v1_val": ("coco/", "lvis/lvis_v1_val.json"), - "lvis_v1_test_dev": ("coco/", "lvis/lvis_v1_image_info_test_dev.json"), - "lvis_v1_test_challenge": ("coco/", "lvis/lvis_v1_image_info_test_challenge.json"), - }, - "lvis_v0.5": { - "lvis_v0.5_train": ("coco/", "lvis/lvis_v0.5_train.json"), - "lvis_v0.5_val": ("coco/", "lvis/lvis_v0.5_val.json"), - "lvis_v0.5_val_rand_100": ("coco/", "lvis/lvis_v0.5_val_rand_100.json"), - "lvis_v0.5_test": ("coco/", "lvis/lvis_v0.5_image_info_test.json"), - }, - "lvis_v0.5_cocofied": { - "lvis_v0.5_train_cocofied": ("coco/", "lvis/lvis_v0.5_train_cocofied.json"), - "lvis_v0.5_val_cocofied": ("coco/", "lvis/lvis_v0.5_val_cocofied.json"), - }, -} - - -def register_all_lvis(root): - for dataset_name, splits_per_dataset in _PREDEFINED_SPLITS_LVIS.items(): - for key, (image_root, json_file) in splits_per_dataset.items(): - register_lvis_instances( - key, - get_lvis_instances_meta(dataset_name), - os.path.join(root, json_file) if "://" not in json_file else json_file, - os.path.join(root, image_root), - ) - - -# ==== Predefined splits for raw cityscapes images =========== -_RAW_CITYSCAPES_SPLITS = { - "cityscapes_fine_{task}_train": ("cityscapes/leftImg8bit/train/", "cityscapes/gtFine/train/"), - "cityscapes_fine_{task}_val": ("cityscapes/leftImg8bit/val/", "cityscapes/gtFine/val/"), - "cityscapes_fine_{task}_test": ("cityscapes/leftImg8bit/test/", "cityscapes/gtFine/test/"), -} - - -def register_all_cityscapes(root): - for key, (image_dir, gt_dir) in _RAW_CITYSCAPES_SPLITS.items(): - meta = _get_builtin_metadata("cityscapes") - image_dir = os.path.join(root, image_dir) - gt_dir = os.path.join(root, gt_dir) - - inst_key = key.format(task="instance_seg") - DatasetCatalog.register( - inst_key, - lambda x=image_dir, y=gt_dir: load_cityscapes_instances( - x, y, from_json=True, to_polygons=True - ), - ) - MetadataCatalog.get(inst_key).set( - image_dir=image_dir, gt_dir=gt_dir, evaluator_type="cityscapes_instance", **meta - ) - - sem_key = key.format(task="sem_seg") - DatasetCatalog.register( - sem_key, lambda x=image_dir, y=gt_dir: load_cityscapes_semantic(x, y) - ) - MetadataCatalog.get(sem_key).set( - image_dir=image_dir, - gt_dir=gt_dir, - evaluator_type="cityscapes_sem_seg", - ignore_label=255, - **meta, - ) - - -# ==== Predefined splits for PASCAL VOC =========== -def register_all_pascal_voc(root): - SPLITS = [ - ("voc_2007_trainval", "VOC2007", "trainval"), - ("voc_2007_train", "VOC2007", "train"), - ("voc_2007_val", "VOC2007", "val"), - ("voc_2007_test", "VOC2007", "test"), - ("voc_2012_trainval", "VOC2012", "trainval"), - ("voc_2012_train", "VOC2012", "train"), - ("voc_2012_val", "VOC2012", "val"), - ] - for name, dirname, split in SPLITS: - year = 2007 if "2007" in name else 2012 - register_pascal_voc(name, os.path.join(root, dirname), split, year) - MetadataCatalog.get(name).evaluator_type = "pascal_voc" - - -def register_all_ade20k(root): - root = os.path.join(root, "ADEChallengeData2016") - for name, dirname in [("train", "training"), ("val", "validation")]: - image_dir = os.path.join(root, "images", dirname) - gt_dir = os.path.join(root, "annotations_detectron2", 
dirname) - name = f"ade20k_sem_seg_{name}" - DatasetCatalog.register( - name, lambda x=image_dir, y=gt_dir: load_sem_seg(y, x, gt_ext="png", image_ext="jpg") - ) - MetadataCatalog.get(name).set( - stuff_classes=ADE20K_SEM_SEG_CATEGORIES[:], - image_root=image_dir, - sem_seg_root=gt_dir, - evaluator_type="sem_seg", - ignore_label=255, - ) - - -# True for open source; -# Internally at fb, we register them elsewhere -if __name__.endswith(".builtin"): - # Assume pre-defined datasets live in `./datasets`. - _root = os.path.expanduser(os.getenv("DETECTRON2_DATASETS", "datasets")) - register_all_coco(_root) - register_all_lvis(_root) - register_all_cityscapes(_root) - register_all_cityscapes_panoptic(_root) - register_all_pascal_voc(_root) - register_all_ade20k(_root) diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/docs/tutorials/evaluation.md b/spaces/brjathu/HMR2.0/vendor/detectron2/docs/tutorials/evaluation.md deleted file mode 100644 index 2ef94faa38cae1c5f4e49eed4887ebbcd147513c..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/docs/tutorials/evaluation.md +++ /dev/null @@ -1,68 +0,0 @@ - -# Evaluation - -Evaluation is a process that takes a number of inputs/outputs pairs and aggregate them. -You can always [use the model](./models.md) directly and just parse its inputs/outputs manually to perform -evaluation. -Alternatively, evaluation is implemented in detectron2 using the [DatasetEvaluator](../modules/evaluation.html#detectron2.evaluation.DatasetEvaluator) -interface. - -Detectron2 includes a few `DatasetEvaluator` that computes metrics using standard dataset-specific -APIs (e.g., COCO, LVIS). -You can also implement your own `DatasetEvaluator` that performs some other jobs -using the inputs/outputs pairs. -For example, to count how many instances are detected on the validation set: - -```python -class Counter(DatasetEvaluator): - def reset(self): - self.count = 0 - def process(self, inputs, outputs): - for output in outputs: - self.count += len(output["instances"]) - def evaluate(self): - # save self.count somewhere, or print it, or return it. - return {"count": self.count} -``` - -## Use evaluators - -To evaluate using the methods of evaluators manually: -```python -def get_all_inputs_outputs(): - for data in data_loader: - yield data, model(data) - -evaluator.reset() -for inputs, outputs in get_all_inputs_outputs(): - evaluator.process(inputs, outputs) -eval_results = evaluator.evaluate() -``` - -Evaluators can also be used with [inference_on_dataset](../modules/evaluation.html#detectron2.evaluation.inference_on_dataset). -For example, - -```python -eval_results = inference_on_dataset( - model, - data_loader, - DatasetEvaluators([COCOEvaluator(...), Counter()])) -``` -This will execute `model` on all inputs from `data_loader`, and call evaluator to process them. - -Compared to running the evaluation manually using the model, the benefit of this function is that -evaluators can be merged together using [DatasetEvaluators](../modules/evaluation.html#detectron2.evaluation.DatasetEvaluators), -and all the evaluation can finish in one forward pass over the dataset. -This function also provides accurate speed benchmarks for the given model and dataset. - -## Evaluators for custom dataset - -Many evaluators in detectron2 are made for specific datasets, -in order to obtain scores using each dataset's official API. 
-In addition to that, two evaluators are able to evaluate any generic dataset -that follows detectron2's [standard dataset format](./datasets.md), so they -can be used to evaluate custom datasets: - -* [COCOEvaluator](../modules/evaluation.html#detectron2.evaluation.COCOEvaluator) is able to evaluate AP (Average Precision) for box detection, - instance segmentation, keypoint detection on any custom dataset. -* [SemSegEvaluator](../modules/evaluation.html#detectron2.evaluation.SemSegEvaluator) is able to evaluate semantic segmentation metrics on any custom dataset. diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/evaluation/testing.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/evaluation/testing.py deleted file mode 100644 index 9e5ae625bb0593fc20739dd3ea549157e4df4f3d..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/evaluation/testing.py +++ /dev/null @@ -1,85 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import logging -import numpy as np -import pprint -import sys -from collections.abc import Mapping - - -def print_csv_format(results): - """ - Print main metrics in a format similar to Detectron, - so that they are easy to copypaste into a spreadsheet. - - Args: - results (OrderedDict[dict]): task_name -> {metric -> score} - unordered dict can also be printed, but in arbitrary order - """ - assert isinstance(results, Mapping) or not len(results), results - logger = logging.getLogger(__name__) - for task, res in results.items(): - if isinstance(res, Mapping): - # Don't print "AP-category" metrics since they are usually not tracked. - important_res = [(k, v) for k, v in res.items() if "-" not in k] - logger.info("copypaste: Task: {}".format(task)) - logger.info("copypaste: " + ",".join([k[0] for k in important_res])) - logger.info("copypaste: " + ",".join(["{0:.4f}".format(k[1]) for k in important_res])) - else: - logger.info(f"copypaste: {task}={res}") - - -def verify_results(cfg, results): - """ - Args: - results (OrderedDict[dict]): task_name -> {metric -> score} - - Returns: - bool: whether the verification succeeds or not - """ - expected_results = cfg.TEST.EXPECTED_RESULTS - if not len(expected_results): - return True - - ok = True - for task, metric, expected, tolerance in expected_results: - actual = results[task].get(metric, None) - if actual is None: - ok = False - continue - if not np.isfinite(actual): - ok = False - continue - diff = abs(actual - expected) - if diff > tolerance: - ok = False - - logger = logging.getLogger(__name__) - if not ok: - logger.error("Result verification failed!") - logger.error("Expected Results: " + str(expected_results)) - logger.error("Actual Results: " + pprint.pformat(results)) - - sys.exit(1) - else: - logger.info("Results verification passed.") - return ok - - -def flatten_results_dict(results): - """ - Expand a hierarchical dict of scalars into a flat dict of scalars. - If results[k1][k2][k3] = v, the returned dict will have the entry - {"k1/k2/k3": v}. 
- - Args: - results (dict): - """ - r = {} - for k, v in results.items(): - if isinstance(v, Mapping): - v = flatten_results_dict(v) - for kk, vv in v.items(): - r[k + "/" + kk] = vv - else: - r[k] = v - return r diff --git a/spaces/chendl/compositional_test/multimodal/YOLOX/exps/default/yolox_nano.py b/spaces/chendl/compositional_test/multimodal/YOLOX/exps/default/yolox_nano.py deleted file mode 100644 index 8955dd2a7748c900cab7dca11adf877cd2cf5abd..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/multimodal/YOLOX/exps/default/yolox_nano.py +++ /dev/null @@ -1,48 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding:utf-8 -*- -# Copyright (c) Megvii, Inc. and its affiliates. - -import os - -import torch.nn as nn - -from yolox.exp import Exp as MyExp - - -class Exp(MyExp): - def __init__(self): - super(Exp, self).__init__() - self.depth = 0.33 - self.width = 0.25 - self.input_size = (416, 416) - self.random_size = (10, 20) - self.mosaic_scale = (0.5, 1.5) - self.test_size = (416, 416) - self.mosaic_prob = 0.5 - self.enable_mixup = False - self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0] - - def get_model(self, sublinear=False): - - def init_yolo(M): - for m in M.modules(): - if isinstance(m, nn.BatchNorm2d): - m.eps = 1e-3 - m.momentum = 0.03 - if "model" not in self.__dict__: - from yolox.models import YOLOX, YOLOPAFPN, YOLOXHead - in_channels = [256, 512, 1024] - # NANO model use depthwise = True, which is main difference. - backbone = YOLOPAFPN( - self.depth, self.width, in_channels=in_channels, - act=self.act, depthwise=True, - ) - head = YOLOXHead( - self.num_classes, self.width, in_channels=in_channels, - act=self.act, depthwise=True - ) - self.model = YOLOX(backbone, head) - - self.model.apply(init_yolo) - self.model.head.initialize_biases(1e-2) - return self.model diff --git a/spaces/chendl/compositional_test/transformers/docker/transformers-pytorch-deepspeed-nightly-gpu/Dockerfile b/spaces/chendl/compositional_test/transformers/docker/transformers-pytorch-deepspeed-nightly-gpu/Dockerfile deleted file mode 100644 index fcb599ddc232d6a8b4d8369ff8a4c0f3c1e2c89a..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/docker/transformers-pytorch-deepspeed-nightly-gpu/Dockerfile +++ /dev/null @@ -1,57 +0,0 @@ -# https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes/rel_22-08.html#rel_22-08 -FROM nvcr.io/nvidia/pytorch:22.08-py3 -LABEL maintainer="Hugging Face" - -ARG DEBIAN_FRONTEND=noninteractive - -# Example: `cu102`, `cu113`, etc. -ARG CUDA='cu117' - -RUN apt -y update -RUN apt install -y libaio-dev -RUN python3 -m pip install --no-cache-dir --upgrade pip - -ARG REF=main -RUN git clone https://github.com/huggingface/transformers && cd transformers && git checkout $REF - -# Install **nightly** release PyTorch (flag `--pre`) -# (PyTorch must be installed before pre-compiling any DeepSpeed c++/cuda ops.) 
-# (https://www.deepspeed.ai/tutorials/advanced-install/#pre-install-deepspeed-ops) -RUN python3 -m pip install --no-cache-dir -U --pre torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/nightly/$CUDA - -RUN python3 -m pip install --no-cache-dir ./transformers[deepspeed-testing] - -# Uninstall `torch-tensorrt` and `apex` shipped with the base image -RUN python3 -m pip uninstall -y torch-tensorrt apex - -# Pre-build **nightly** release of DeepSpeed, so it would be ready for testing (otherwise, the 1st deepspeed test will timeout) -RUN python3 -m pip uninstall -y deepspeed -# This has to be run inside the GPU VMs running the tests. (So far, it fails here due to GPU checks during compilation.) -# Issue: https://github.com/microsoft/DeepSpeed/issues/2010 -# RUN git clone https://github.com/microsoft/DeepSpeed && cd DeepSpeed && rm -rf build && \ -# DS_BUILD_CPU_ADAM=1 DS_BUILD_FUSED_ADAM=1 DS_BUILD_AIO=1 DS_BUILD_UTILS=1 python3 -m pip install . --global-option="build_ext" --global-option="-j8" --no-cache -v --disable-pip-version-check 2>&1 - -## For `torchdynamo` tests -## (see https://github.com/huggingface/transformers/pull/17765) -#RUN git clone https://github.com/pytorch/functorch -#RUN python3 -m pip install --no-cache-dir ./functorch[aot] -#RUN cd functorch && python3 setup.py develop -# -#RUN git clone https://github.com/pytorch/torchdynamo -#RUN python3 -m pip install -r ./torchdynamo/requirements.txt -#RUN cd torchdynamo && python3 setup.py develop -# -## install TensorRT -#RUN python3 -m pip install --no-cache-dir -U nvidia-pyindex -#RUN python3 -m pip install --no-cache-dir -U nvidia-tensorrt==8.2.4.2 -# -## install torch_tensorrt (fx path) -#RUN git clone https://github.com/pytorch/TensorRT.git -#RUN cd TensorRT/py && python3 setup.py install --fx-only - -# When installing in editable mode, `transformers` is not recognized as a package. -# this line must be added in order for python to be aware of transformers. -RUN cd transformers && python3 setup.py develop - -# Disable for now as deepspeed is not installed above. To be enabled once the issue is fixed. 
-# RUN python3 -c "from deepspeed.launcher.runner import main" diff --git a/spaces/chendl/compositional_test/transformers/examples/legacy/pytorch-lightning/run_glue.py b/spaces/chendl/compositional_test/transformers/examples/legacy/pytorch-lightning/run_glue.py deleted file mode 100644 index 5f22e2fc7a131186b98a8126600b884cfa626d41..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/examples/legacy/pytorch-lightning/run_glue.py +++ /dev/null @@ -1,201 +0,0 @@ -import argparse -import glob -import logging -import os -import time -from argparse import Namespace - -import numpy as np -import torch -from lightning_base import BaseTransformer, add_generic_args, generic_train -from torch.utils.data import DataLoader, TensorDataset - -from transformers import glue_compute_metrics as compute_metrics -from transformers import glue_convert_examples_to_features as convert_examples_to_features -from transformers import glue_output_modes, glue_tasks_num_labels -from transformers import glue_processors as processors - - -logger = logging.getLogger(__name__) - - -class GLUETransformer(BaseTransformer): - mode = "sequence-classification" - - def __init__(self, hparams): - if type(hparams) == dict: - hparams = Namespace(**hparams) - hparams.glue_output_mode = glue_output_modes[hparams.task] - num_labels = glue_tasks_num_labels[hparams.task] - - super().__init__(hparams, num_labels, self.mode) - - def forward(self, **inputs): - return self.model(**inputs) - - def training_step(self, batch, batch_idx): - inputs = {"input_ids": batch[0], "attention_mask": batch[1], "labels": batch[3]} - - if self.config.model_type not in ["distilbert", "bart"]: - inputs["token_type_ids"] = batch[2] if self.config.model_type in ["bert", "xlnet", "albert"] else None - - outputs = self(**inputs) - loss = outputs[0] - - lr_scheduler = self.trainer.lr_schedulers[0]["scheduler"] - tensorboard_logs = {"loss": loss, "rate": lr_scheduler.get_last_lr()[-1]} - return {"loss": loss, "log": tensorboard_logs} - - def prepare_data(self): - "Called to initialize data. Use the call to construct features" - args = self.hparams - processor = processors[args.task]() - self.labels = processor.get_labels() - - for mode in ["train", "dev"]: - cached_features_file = self._feature_file(mode) - if os.path.exists(cached_features_file) and not args.overwrite_cache: - logger.info("Loading features from cached file %s", cached_features_file) - else: - logger.info("Creating features from dataset file at %s", args.data_dir) - examples = ( - processor.get_dev_examples(args.data_dir) - if mode == "dev" - else processor.get_train_examples(args.data_dir) - ) - features = convert_examples_to_features( - examples, - self.tokenizer, - max_length=args.max_seq_length, - label_list=self.labels, - output_mode=args.glue_output_mode, - ) - logger.info("Saving features into cached file %s", cached_features_file) - torch.save(features, cached_features_file) - - def get_dataloader(self, mode: str, batch_size: int, shuffle: bool = False) -> DataLoader: - "Load datasets. Called after prepare data." 
- - # We test on dev set to compare to benchmarks without having to submit to GLUE server - mode = "dev" if mode == "test" else mode - - cached_features_file = self._feature_file(mode) - logger.info("Loading features from cached file %s", cached_features_file) - features = torch.load(cached_features_file) - all_input_ids = torch.tensor([f.input_ids for f in features], dtype=torch.long) - all_attention_mask = torch.tensor([f.attention_mask for f in features], dtype=torch.long) - all_token_type_ids = torch.tensor([f.token_type_ids for f in features], dtype=torch.long) - if self.hparams.glue_output_mode == "classification": - all_labels = torch.tensor([f.label for f in features], dtype=torch.long) - elif self.hparams.glue_output_mode == "regression": - all_labels = torch.tensor([f.label for f in features], dtype=torch.float) - - return DataLoader( - TensorDataset(all_input_ids, all_attention_mask, all_token_type_ids, all_labels), - batch_size=batch_size, - shuffle=shuffle, - ) - - def validation_step(self, batch, batch_idx): - inputs = {"input_ids": batch[0], "attention_mask": batch[1], "labels": batch[3]} - - if self.config.model_type not in ["distilbert", "bart"]: - inputs["token_type_ids"] = batch[2] if self.config.model_type in ["bert", "xlnet", "albert"] else None - - outputs = self(**inputs) - tmp_eval_loss, logits = outputs[:2] - preds = logits.detach().cpu().numpy() - out_label_ids = inputs["labels"].detach().cpu().numpy() - - return {"val_loss": tmp_eval_loss.detach().cpu(), "pred": preds, "target": out_label_ids} - - def _eval_end(self, outputs) -> tuple: - val_loss_mean = torch.stack([x["val_loss"] for x in outputs]).mean().detach().cpu().item() - preds = np.concatenate([x["pred"] for x in outputs], axis=0) - - if self.hparams.glue_output_mode == "classification": - preds = np.argmax(preds, axis=1) - elif self.hparams.glue_output_mode == "regression": - preds = np.squeeze(preds) - - out_label_ids = np.concatenate([x["target"] for x in outputs], axis=0) - out_label_list = [[] for _ in range(out_label_ids.shape[0])] - preds_list = [[] for _ in range(out_label_ids.shape[0])] - - results = {**{"val_loss": val_loss_mean}, **compute_metrics(self.hparams.task, preds, out_label_ids)} - - ret = dict(results.items()) - ret["log"] = results - return ret, preds_list, out_label_list - - def validation_epoch_end(self, outputs: list) -> dict: - ret, preds, targets = self._eval_end(outputs) - logs = ret["log"] - return {"val_loss": logs["val_loss"], "log": logs, "progress_bar": logs} - - def test_epoch_end(self, outputs) -> dict: - ret, predictions, targets = self._eval_end(outputs) - logs = ret["log"] - # `val_loss` is the key returned by `self._eval_end()` but actually refers to `test_loss` - return {"avg_test_loss": logs["val_loss"], "log": logs, "progress_bar": logs} - - @staticmethod - def add_model_specific_args(parser, root_dir): - BaseTransformer.add_model_specific_args(parser, root_dir) - parser.add_argument( - "--max_seq_length", - default=128, - type=int, - help=( - "The maximum total input sequence length after tokenization. Sequences longer " - "than this will be truncated, sequences shorter will be padded." 
- ), - ) - - parser.add_argument( - "--task", - default="", - type=str, - required=True, - help="The GLUE task to run", - ) - parser.add_argument( - "--gpus", - default=0, - type=int, - help="The number of GPUs allocated for this, it is by default 0 meaning none", - ) - - parser.add_argument( - "--overwrite_cache", action="store_true", help="Overwrite the cached training and evaluation sets" - ) - - return parser - - -def main(): - parser = argparse.ArgumentParser() - add_generic_args(parser, os.getcwd()) - parser = GLUETransformer.add_model_specific_args(parser, os.getcwd()) - args = parser.parse_args() - - # If output_dir not provided, a folder will be generated in pwd - if args.output_dir is None: - args.output_dir = os.path.join( - "./results", - f"{args.task}_{time.strftime('%Y%m%d_%H%M%S')}", - ) - os.makedirs(args.output_dir) - - model = GLUETransformer(args) - trainer = generic_train(model, args) - - # Optionally, predict on dev set and write to output_dir - if args.do_predict: - checkpoints = sorted(glob.glob(os.path.join(args.output_dir, "checkpoint-epoch=*.ckpt"), recursive=True)) - model = model.load_from_checkpoint(checkpoints[-1]) - return trainer.test(model) - - -if __name__ == "__main__": - main() diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/anyio/_core/__init__.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/anyio/_core/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/clickhouse_connect/cc_sqlalchemy/ddl/tableengine.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/clickhouse_connect/cc_sqlalchemy/ddl/tableengine.py deleted file mode 100644 index 598e2e5adb0227ed29049e90add6a4b606c03c54..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/clickhouse_connect/cc_sqlalchemy/ddl/tableengine.py +++ /dev/null @@ -1,247 +0,0 @@ -import logging -from typing import Type, Sequence, Optional, Dict - -from sqlalchemy.exc import ArgumentError, SQLAlchemyError -from sqlalchemy.sql.base import SchemaEventTarget -from sqlalchemy.sql.visitors import Visitable - -logger = logging.getLogger(__name__) - -engine_map: Dict[str, Type['TableEngine']] = {} - - -def tuple_expr(expr_name, value): - """ - Create a table parameter with a tuple or list correctly formatted - :param expr_name: parameter - :param value: string or tuple of strings to format - :return: formatted parameter string - """ - if value is None: - return '' - v = f'{expr_name.strip()}' - if isinstance(value, (tuple, list)): - return f" {v} ({','.join(value)})" - return f'{v} {value}' - - -class TableEngine(SchemaEventTarget, Visitable): - """ - SqlAlchemy Schema element to support ClickHouse table engines. 
At the moment provides no real - functionality other than the CREATE TABLE argument string - """ - arg_names = () - quoted_args = set() - optional_args = set() - eng_params = () - - def __init_subclass__(cls, **kwargs): - engine_map[cls.__name__] = cls - - def __init__(self, kwargs): - # pylint: disable=no-value-for-parameter - Visitable.__init__(self) - self.name = self.__class__.__name__ - te_name = f'{self.name} Table Engine' - engine_args = [] - for arg_name in self.arg_names: - v = kwargs.pop(arg_name, None) - if v is None: - if arg_name in self.optional_args: - continue - raise ValueError(f'Required engine parameter {arg_name} not provided for {te_name}') - if arg_name in self.quoted_args: - engine_args.append(f"'{v}'") - else: - engine_args.append(v) - if engine_args: - self.arg_str = f'({", ".join(engine_args)})' - params = [] - for param_name in self.eng_params: - v = kwargs.pop(param_name, None) - if v is not None: - params.append(tuple_expr(param_name.upper().replace('_', ' '), v)) - - self.full_engine = 'Engine ' + self.name - if engine_args: - self.full_engine += f'({", ".join(engine_args)})' - if params: - self.full_engine += ' ' + ' '.join(params) - - def compile(self): - return self.full_engine - - def check_primary_keys(self, primary_keys: Sequence): - raise SQLAlchemyError(f'Table Engine {self.name} does not support primary keys') - - def _set_parent(self, parent, **_kwargs): - parent.engine = self - - -class Memory(TableEngine): - pass - - -class Log(TableEngine): - pass - - -class StripeLog(TableEngine): - pass - - -class TinyLog(TableEngine): - pass - - -class Null(TableEngine): - pass - - -class Set(TableEngine): - pass - - -class Dictionary(TableEngine): - arg_names = ['dictionary'] - - # pylint: disable=unused-argument - def __init__(self, dictionary: str = None): - super().__init__(locals()) - - -class Merge(TableEngine): - arg_names = ['db_name, tables_regexp'] - - # pylint: disable=unused-argument - def __init__(self, db_name: str = None, tables_regexp: str = None): - super().__init__(locals()) - - -class File(TableEngine): - arg_names = ['fmt'] - - # pylint: disable=unused-argument - def __init__(self, fmt: str = None): - super().__init__(locals()) - - -class Distributed(TableEngine): - arg_names = ['cluster', 'database', 'table', 'sharding_key', 'policy_name'] - optional_args = {'sharding_key', 'policy_name'} - - # pylint: disable=unused-argument - def __init__(self, cluster: str = None, database: str = None, table=None, - sharding_key: str = None, policy_name: str = None): - super().__init__(locals()) - - -class MergeTree(TableEngine): - eng_params = ['order_by', 'partition_key', 'primary_key', 'sample_by'] - - # pylint: disable=unused-argument - def __init__(self, order_by: str = None, primary_key: str = None, - partition_by: str = None, sample_by: str = None): - if not order_by and not primary_key: - raise ArgumentError(None, 'Either PRIMARY KEY or ORDER BY must be specified') - super().__init__(locals()) - - -class SummingMergeTree(MergeTree): - pass - - -class AggregatingMergeTree(MergeTree): - pass - - -class ReplacingMergeTree(TableEngine): - arg_names = ['ver'] - optional_args = set(arg_names) - eng_params = MergeTree.eng_params - - # pylint: disable=unused-argument - def __init__(self, ver: str = None, order_by: str = None, primary_key: str = None, - partition_by: str = None, sample_by: str = None): - if not order_by and not primary_key: - raise ArgumentError(None, 'Either PRIMARY KEY or ORDER BY must be specified') - super().__init__(locals()) - - 
-class CollapsingMergeTree(TableEngine): - arg_names = ['sign'] - eng_params = MergeTree.eng_params - - # pylint: disable=unused-argument - def __init__(self, sign: str = None, order_by: str = None, primary_key: str = None, - partition_by: str = None, sample_by: str = None): - if not order_by and not primary_key: - raise ArgumentError(None, 'Either PRIMARY KEY or ORDER BY must be specified') - super().__init__(locals()) - - -class VersionedCollapsingMergeTree(TableEngine): - arg_names = ['sign', 'version'] - eng_params = MergeTree.eng_params - - # pylint: disable=unused-argument - def __init__(self, sign: str = None, version: str = None, order_by: str = None, primary_key: str = None, - partition_by: str = None, sample_by: str = None): - if not order_by and not primary_key: - raise ArgumentError(None, 'Either PRIMARY KEY or ORDER BY must be specified') - super().__init__(locals()) - - -class GraphiteMergeTree(TableEngine): - arg_names = ['config_section'] - eng_params = MergeTree.eng_params - - # pylint: disable=unused-argument - def __init__(self, config_section: str = None, version: str = None, order_by: str = None, primary_key: str = None, - partition_by: str = None, sample_by: str = None): - if not order_by and not primary_key: - raise ArgumentError(None, 'Either PRIMARY KEY or ORDER BY must be specified') - super().__init__(locals()) - - -class ReplicatedMergeTree(TableEngine): - arg_names = ['zk_path', 'replica'] - quoted_args = set(arg_names) - optional_args = quoted_args - eng_params = MergeTree.eng_params - - # pylint: disable=unused-argument - def __init__(self, order_by: str = None, primary_key: str = None, partition_by: str = None, sample_by: str = None, - zk_path: str = None, replica: str = None): - if not order_by and not primary_key: - raise ArgumentError(None, 'Either PRIMARY KEY or ORDER BY must be specified') - super().__init__(locals()) - - -class ReplicatedAggregatingMergeTree(ReplicatedMergeTree): - pass - - -class ReplicatedSummingMergeTree(ReplicatedMergeTree): - pass - - -def build_engine(full_engine: str) -> Optional[TableEngine]: - """ - Factory function to create TableEngine class from ClickHouse full_engine expression - :param full_engine - :return: TableEngine DDL element - """ - if not full_engine: - return None - name = full_engine.split(' ')[0].split('(')[0] - try: - engine_cls = engine_map[name] - except KeyError: - if not name.startswith('System'): - logger.warning('Engine %s not found', name) - return None - engine = engine_cls.__new__(engine_cls) - engine.name = name - engine.full_engine = full_engine - return engine diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/Form-3812b7f1.css b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/Form-3812b7f1.css deleted file mode 100644 index 772d43d65ae1a3157ab24e69b7ecb88a3649b4fe..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/Form-3812b7f1.css +++ /dev/null @@ -1 +0,0 @@ -div.svelte-sfqy0y{display:flex;flex-direction:inherit;flex-wrap:wrap;gap:var(--form-gap-width);box-shadow:var(--block-shadow);border:var(--block-border-width) solid var(--border-color-primary);border-radius:var(--block-radius);background:var(--border-color-primary);overflow-y:hidden}div.svelte-sfqy0y .block{box-shadow:none!important;border-width:0px!important;border-radius:0!important}.hidden.svelte-sfqy0y{display:none} diff --git 
a/spaces/cihyFjudo/fairness-paper-search/Casio Ctk-710 Driver Download For Mac The Best Way to Enjoy Your Music.md b/spaces/cihyFjudo/fairness-paper-search/Casio Ctk-710 Driver Download For Mac The Best Way to Enjoy Your Music.md deleted file mode 100644 index 425591871a15cefd3d43f61e2654f87bd01ad193..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Casio Ctk-710 Driver Download For Mac The Best Way to Enjoy Your Music.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Casio Ctk-710 Driver Download For Mac


Download File: https://tinurli.com/2uwhLC



      -
      - aaccfb2cb3
      -
      -
      -

      diff --git a/spaces/cihyFjudo/fairness-paper-search/Color Climax Master Film Lolita Orgasml.md b/spaces/cihyFjudo/fairness-paper-search/Color Climax Master Film Lolita Orgasml.md deleted file mode 100644 index fee3d2ea6c2e1f0f78bf79ccba3698a537a9272b..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Color Climax Master Film Lolita Orgasml.md +++ /dev/null @@ -1,6 +0,0 @@ -


      diff --git a/spaces/cihyFjudo/fairness-paper-search/Indian Film Badal Full Movie Download A 2000 Hindi Action Film Directed by Raj Kanwar.md b/spaces/cihyFjudo/fairness-paper-search/Indian Film Badal Full Movie Download A 2000 Hindi Action Film Directed by Raj Kanwar.md deleted file mode 100644 index d3d96e534e3f767c0e33ac28020d07fb68682b9a..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Indian Film Badal Full Movie Download A 2000 Hindi Action Film Directed by Raj Kanwar.md +++ /dev/null @@ -1,17 +0,0 @@ -
      -

      Users can download Hindi New Movie HD through The moviesverse pirated website. New Movie is uploaded on this website within a day as soon as it is released. Moviesversecom 2023 illegally uploads movies on its website without any legal permission, and it is against the government law. But a popular website for the latest movie free download movieverse for the citizens of India is Moviesverse.net.

      -

      Movies Verse 2023 is a free public torrent website that has become a popular website for people in India to download Bollywood and Hollywood movies. Apart from Movieverse .com Bollywood Download, you can download Movieverse Hollywood Movie, Moviesverse .com Punjabi Movies, Tamil Dubbed Movies, Telugu Movies and Malayalam Movies on this website. Movieverse 2023 is one of the popular torrent websites in Asia.

      -

      Indian Film Badal Full Movie Download


Download Zip: https://tinurli.com/2uwjmA



      -

      However, through Movieverse, pirated content is illegally uploaded on their website. People use Moviesverse.me 2023 for latest leaked movies free download. The government banned the pirated uploading of this website without permission. So we always recommend our users to always use the platform approved by the concerned authorities to watch their favorite movies.

      -

      It is a popular torrent website and people search this website with different names like Moviesverse .net, Moviesverse cc, Moviesverse desiflix, Moviesverse apk. Users can also watch live streaming by downloading the Movieverse app. With Moviesverse 300MB download, you can download movies in various formats here.

      -

      Public torrent websites like Moviesverse 2023 make Hollywood and Bollywood movies available to download for free. Right now most of the people are searching on Movieverse by writing Latest Hollywood Hindi Dubbed Movie Download. However, apart from Hollywood Movies, here you can also download South Indian Hindi Dubbed Movie, Movieverse Bollywood Movies, Tamil Movie, Kannada Movie and Malayalam Movies.

      -

      You can join Movieverse Telegram channel for Movieverse English Movies Download. Because they provide links to download latest HD movies in high quality through their Telegram channel. But since it is an illegal pirated website, such websites should not be used for free movie downloads.

      -

      People use this website for Moviesverse Hollywood Hindi Dubbed Movie Free Download. Now most of the people are searching Blak Adam Movie Download Moviesverse on the internet. Movies Verse 2022 provides the users with the facility of Dual Audio Movies Download. However, both Moviesflix and Moviesverse provide similar movies. But since it is an illegal website, we suggest our users to use legal platforms like Amazon prime, Zee5, Hotstar, Netflix, Hungama Play, etc. to download movies.

      -

      Movieverse 2023 has now become a popular website among movie lovers. Apart from Hindi Movies, Moviesverse Korean Series, English Movies, South Indian Hindi Dubbed Movie Download, Hindi Dubbed Dual Audio Movies, etc. are available for download on Moviesverse. Moviesversecom website has many categories to download movies.

      -

      Moviesverse NL, Moviesverse com, Moviesverse in, Moviesverse net, Moviesverse pro.com, Moviesverse cc, etc. are pirated movie downloading websites. From these websites you can download Hindi movies and web series for free.

      -

MX Player is an application offering free customizable TV shows, movies, and web shows, with movies and free web series available in different languages. You can also enjoy online music on this application. Hollywood, Bollywood, Tamil, Telugu, Punjabi, Gujarati, and Hindi TV programs, web series, and movies are all available on this application, and you can watch or download them completely free of cost.

      -

      -

Voot is a remarkable application for watching and downloading movies. You can watch live TV programs, news, youth shows, and movies on it completely free of cost. The application is available in different genres and local languages, and it has a huge collection of movies which can be viewed online. TV programs can also be downloaded for offline viewing.

      -

According to film writer Shyam Goel, it was inspired by a real incident that happened in Punjab where a woman's forehead was branded with the message that her husband was a traitor. Her husband, a soldier, was accused of treachery, and she was thrown out of her village. The story was already made as a movie in Tamil in 1989 titled Thaai Naadu, starring Sathyaraj in dual roles.

      aaccfb2cb3
      -
      -
      \ No newline at end of file diff --git a/spaces/cleanmaster/akagi-sovits3/utils.py b/spaces/cleanmaster/akagi-sovits3/utils.py deleted file mode 100644 index 3733a75111dc89cefa333b34933ae01623550ea7..0000000000000000000000000000000000000000 --- a/spaces/cleanmaster/akagi-sovits3/utils.py +++ /dev/null @@ -1,338 +0,0 @@ -import os -import glob -import sys -import argparse -import logging -import json -import subprocess - -import librosa -import numpy as np -import torchaudio -from scipy.io.wavfile import read -import torch -import torchvision -from torch.nn import functional as F -from commons import sequence_mask -from hubert import hubert_model -MATPLOTLIB_FLAG = False - -logging.basicConfig(stream=sys.stdout, level=logging.DEBUG) -logger = logging - -f0_bin = 256 -f0_max = 1100.0 -f0_min = 50.0 -f0_mel_min = 1127 * np.log(1 + f0_min / 700) -f0_mel_max = 1127 * np.log(1 + f0_max / 700) - -def f0_to_coarse(f0): - is_torch = isinstance(f0, torch.Tensor) - f0_mel = 1127 * (1 + f0 / 700).log() if is_torch else 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * (f0_bin - 2) / (f0_mel_max - f0_mel_min) + 1 - - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > f0_bin - 1] = f0_bin - 1 - f0_coarse = (f0_mel + 0.5).long() if is_torch else np.rint(f0_mel).astype(np.int) - assert f0_coarse.max() <= 255 and f0_coarse.min() >= 1, (f0_coarse.max(), f0_coarse.min()) - return f0_coarse - - -def get_hubert_model(rank=None): - - hubert_soft = hubert_model.hubert_soft("hubert/hubert-soft-0d54a1f4.pt") - if rank is not None: - hubert_soft = hubert_soft.cuda(rank) - return hubert_soft - -def get_hubert_content(hmodel, y=None, path=None): - if path is not None: - source, sr = torchaudio.load(path) - source = torchaudio.functional.resample(source, sr, 16000) - if len(source.shape) == 2 and source.shape[1] >= 2: - source = torch.mean(source, dim=0).unsqueeze(0) - else: - source = y - source = source.unsqueeze(0) - with torch.inference_mode(): - units = hmodel.units(source) - return units.transpose(1,2) - - -def get_content(cmodel, y): - with torch.no_grad(): - c = cmodel.extract_features(y.squeeze(1))[0] - c = c.transpose(1, 2) - return c - - - -def transform(mel, height): # 68-92 - #r = np.random.random() - #rate = r * 0.3 + 0.85 # 0.85-1.15 - #height = int(mel.size(-2) * rate) - tgt = torchvision.transforms.functional.resize(mel, (height, mel.size(-1))) - if height >= mel.size(-2): - return tgt[:, :mel.size(-2), :] - else: - silence = tgt[:,-1:,:].repeat(1,mel.size(-2)-height,1) - silence += torch.randn_like(silence) / 10 - return torch.cat((tgt, silence), 1) - - -def stretch(mel, width): # 0.5-2 - return torchvision.transforms.functional.resize(mel, (mel.size(-2), width)) - - -def load_checkpoint(checkpoint_path, model, optimizer=None): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = torch.load(checkpoint_path, map_location='cpu') - iteration = checkpoint_dict['iteration'] - learning_rate = checkpoint_dict['learning_rate'] - if iteration is None: - iteration = 1 - if learning_rate is None: - learning_rate = 0.0002 - if optimizer is not None and checkpoint_dict['optimizer'] is not None: - optimizer.load_state_dict(checkpoint_dict['optimizer']) - saved_state_dict = checkpoint_dict['model'] - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict= {} - for k, v in state_dict.items(): - try: - new_state_dict[k] = saved_state_dict[k] - except: - logger.info("%s is not in the checkpoint" % k) - 
new_state_dict[k] = v - if hasattr(model, 'module'): - model.module.load_state_dict(new_state_dict) - else: - model.load_state_dict(new_state_dict) - logger.info("Loaded checkpoint '{}' (iteration {})" .format( - checkpoint_path, iteration)) - return model, optimizer, learning_rate, iteration - - -def save_checkpoint(model, optimizer, learning_rate, iteration, checkpoint_path): - # ckptname = checkpoint_path.split(os.sep)[-1] - # newest_step = int(ckptname.split(".")[0].split("_")[1]) - # val_steps = 2000 - # last_ckptname = checkpoint_path.replace(str(newest_step), str(newest_step - val_steps*3)) - # if newest_step >= val_steps*3: - # os.system(f"rm {last_ckptname}") - logger.info("Saving model and optimizer state at iteration {} to {}".format( - iteration, checkpoint_path)) - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - torch.save({'model': state_dict, - 'iteration': iteration, - 'optimizer': optimizer.state_dict(), - 'learning_rate': learning_rate}, checkpoint_path) - - -def summarize(writer, global_step, scalars={}, histograms={}, images={}, audios={}, audio_sampling_rate=22050): - for k, v in scalars.items(): - writer.add_scalar(k, v, global_step) - for k, v in histograms.items(): - writer.add_histogram(k, v, global_step) - for k, v in images.items(): - writer.add_image(k, v, global_step, dataformats='HWC') - for k, v in audios.items(): - writer.add_audio(k, v, global_step, audio_sampling_rate) - - -def latest_checkpoint_path(dir_path, regex="G_*.pth"): - f_list = glob.glob(os.path.join(dir_path, regex)) - f_list.sort(key=lambda f: int("".join(filter(str.isdigit, f)))) - x = f_list[-1] - print(x) - return x - - -def plot_spectrogram_to_numpy(spectrogram): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10,2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - plt.xlabel("Frames") - plt.ylabel("Channels") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def plot_alignment_to_numpy(alignment, info=None): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(6, 4)) - im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower', - interpolation='none') - fig.colorbar(im, ax=ax) - xlabel = 'Decoder timestep' - if info is not None: - xlabel += '\n\n' + info - plt.xlabel(xlabel) - plt.ylabel('Encoder timestep') - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def load_wav_to_torch(full_path): - sampling_rate, data = read(full_path) - return torch.FloatTensor(data.astype(np.float32)), sampling_rate - - -def load_filepaths_and_text(filename, split="|"): - with open(filename, encoding='utf-8') as f: - filepaths_and_text = [line.strip().split(split) 
for line in f] - return filepaths_and_text - - -def get_hparams(init=True): - parser = argparse.ArgumentParser() - parser.add_argument('-c', '--config', type=str, default="./configs/base.json", - help='JSON file for configuration') - parser.add_argument('-m', '--model', type=str, required=True, - help='Model name') - - args = parser.parse_args() - model_dir = os.path.join("./logs", args.model) - - if not os.path.exists(model_dir): - os.makedirs(model_dir) - - config_path = args.config - config_save_path = os.path.join(model_dir, "config.json") - if init: - with open(config_path, "r") as f: - data = f.read() - with open(config_save_path, "w") as f: - f.write(data) - else: - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_dir(model_dir): - config_save_path = os.path.join(model_dir, "config.json") - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams =HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_file(config_path): - with open(config_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams =HParams(**config) - return hparams - - -def check_git_hash(model_dir): - source_dir = os.path.dirname(os.path.realpath(__file__)) - if not os.path.exists(os.path.join(source_dir, ".git")): - logger.warn("{} is not a git repository, therefore hash value comparison will be ignored.".format( - source_dir - )) - return - - cur_hash = subprocess.getoutput("git rev-parse HEAD") - - path = os.path.join(model_dir, "githash") - if os.path.exists(path): - saved_hash = open(path).read() - if saved_hash != cur_hash: - logger.warn("git hash values are different. 
{}(saved) != {}(current)".format( - saved_hash[:8], cur_hash[:8])) - else: - open(path, "w").write(cur_hash) - - -def get_logger(model_dir, filename="train.log"): - global logger - logger = logging.getLogger(os.path.basename(model_dir)) - logger.setLevel(logging.DEBUG) - - formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s") - if not os.path.exists(model_dir): - os.makedirs(model_dir) - h = logging.FileHandler(os.path.join(model_dir, filename)) - h.setLevel(logging.DEBUG) - h.setFormatter(formatter) - logger.addHandler(h) - return logger - - -class HParams(): - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() - diff --git a/spaces/codejin/diffsingerkr/README.md b/spaces/codejin/diffsingerkr/README.md deleted file mode 100644 index 801bc4b77c2f5266e8a0ea9a03e02f9717f6b86e..0000000000000000000000000000000000000000 --- a/spaces/codejin/diffsingerkr/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: DiffSinger-KR -emoji: 🐢 -colorFrom: blue -colorTo: blue -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/cfhd.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/cfhd.h deleted file mode 100644 index 9b09c9126266f962a0748c52847af31c63ac188c..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/cfhd.h +++ /dev/null @@ -1,188 +0,0 @@ -/* - * Copyright (c) 2015 Kieran Kunhya - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#ifndef AVCODEC_CFHD_H -#define AVCODEC_CFHD_H - -#include - -#include "avcodec.h" -#include "bytestream.h" -#include "get_bits.h" -#include "cfhddsp.h" - -enum CFHDParam { - SampleType = 1, - SampleIndexTable = 2, - BitstreamMarker = 4, - VersionMajor = 5, - VersionMinor = 6, - VersionRevision = 7, - VersionEdit = 8, - TransformType = 10, - NumFrames = 11, - ChannelCount = 12, - WaveletCount = 13, - SubbandCount = 14, - NumSpatial = 15, - FirstWavelet = 16, - GroupTrailer = 18, - FrameType = 19, - ImageWidth = 20, - ImageHeight = 21, - FrameIndex = 23, - LowpassSubband = 25, - NumLevels = 26, - LowpassWidth = 27, - LowpassHeight = 28, - PixelOffset = 33, - LowpassQuantization=34, - LowpassPrecision = 35, - WaveletType = 37, - WaveletNumber = 38, - WaveletLevel = 39, - NumBands = 40, - HighpassWidth = 41, - HighpassHeight = 42, - LowpassBorder = 43, - HighpassBorder = 44, - LowpassScale = 45, - LowpassDivisor = 46, - SubbandNumber = 48, - BandWidth = 49, - BandHeight = 50, - SubbandBand = 51, - BandEncoding = 52, - Quantization = 53, - BandScale = 54, - BandHeader = 55, - BandTrailer = 56, - ChannelNumber = 62, - SampleFlags = 68, - FrameNumber = 69, - Precision = 70, - InputFormat = 71, - BandCodingFlags = 72, - PeakLevel = 74, - PeakOffsetLow = 75, - PeakOffsetHigh = 76, - Version = 79, - BandSecondPass = 82, - PrescaleTable = 83, - EncodedFormat = 84, - DisplayHeight = 85, - ChannelWidth = 104, - ChannelHeight = 105, -}; - -#define VLC_BITS 9 -#define SUBBAND_COUNT 10 -#define SUBBAND_COUNT_3D 17 - -typedef struct CFHD_RL_VLC_ELEM { - int16_t level; - int8_t len; - uint16_t run; -} CFHD_RL_VLC_ELEM; - -#define DWT_LEVELS 3 -#define DWT_LEVELS_3D 6 - -typedef struct SubBand { - ptrdiff_t stride; - int a_width; - int width; - int a_height; - int height; - int8_t read_ok; -} SubBand; - -typedef struct Plane { - int width; - int height; - ptrdiff_t stride; - - int16_t *idwt_buf; - int16_t *idwt_tmp; - int idwt_size; - - /* TODO: merge this into SubBand structure */ - int16_t *subband[SUBBAND_COUNT_3D]; - int16_t *l_h[10]; - - SubBand band[DWT_LEVELS_3D][4]; -} Plane; - -typedef struct Peak { - int level; - int offset; - GetByteContext base; -} Peak; - -typedef struct CFHDContext { - AVCodecContext *avctx; - - CFHD_RL_VLC_ELEM table_9_rl_vlc[2088]; - CFHD_RL_VLC_ELEM table_18_rl_vlc[4572]; - - int lut[2][256]; - - GetBitContext gb; - - int planes; - int frame_type; - int frame_index; - int sample_type; - int transform_type; - int coded_width; - int coded_height; - int cropped_height; - enum AVPixelFormat coded_format; - int progressive; - - int a_width; - int a_height; - int a_format; - int a_transform_type; - - int bpc; // bits per channel/component - int channel_cnt; - int subband_cnt; - int band_encoding; - int channel_num; - uint8_t lowpass_precision; - uint16_t quantisation; - - int codebook; - int difference_coding; - int subband_num; - int level; - int subband_num_actual; - - uint8_t prescale_table[8]; - Plane plane[4]; - Peak peak; - - CFHDDSPContext dsp; -} CFHDContext; - -int ff_cfhd_init_vlcs(CFHDContext *s); - -#endif /* AVCODEC_CFHD_H */ diff --git a/spaces/congsaPfin/Manga-OCR/logs/Clubillion Vegas Casino Slots Mod APK A Must-Have for Casino Lovers.md b/spaces/congsaPfin/Manga-OCR/logs/Clubillion Vegas Casino Slots Mod APK A Must-Have for Casino 
Lovers.md deleted file mode 100644 index 1a97ed3302a7daa0f548cf63ca4220ff867667ed..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Clubillion Vegas Casino Slots Mod APK A Must-Have for Casino Lovers.md +++ /dev/null @@ -1,14 +0,0 @@ -
      -

      Clubillion Vegas Casino Slots Mod Apk: A Guide for Beginners

      - Do you love playing casino slots games but don't want to spend real money? Do you want to experience the thrill of Las Vegas without leaving your home? Do you want to have unlimited coins and gems to play with? If you answered yes to any of these questions, then you might be interested in Clubillion Vegas Casino Slots Mod Apk. This is a modified version of the popular social slots casino game that offers you unlimited resources, access to all slots machines, and more. In this article, we will tell you everything you need to know about Clubillion Vegas Casino Slots Mod Apk, including what it is, how to download and install it, how to play it, and what are its benefits and drawbacks. Let's get started!

      What is Clubillion Vegas Casino Slots?

- Clubillion Vegas Casino Slots is a free social slots casino game that brings you a real Vegas casino slots experience with 70+ free slots casino games with bonuses. You can play various types of slots machines, such as classic slots, video slots, fruit machines, progressive slots, and more. You can also enjoy bonus features, such as wild symbols, scatter symbols, free spins, multipliers, mini-games, and jackpots. You can win big prizes and have fun at the same time. But Clubillion Vegas Casino Slots is not just a game. It's also a social platform where you can play with friends and meet new people online. You can join clubs, chat with other players, send gifts, share your wins, and participate in tournaments and events. You can also invite your friends to play with you and earn more coins and gems. You can make new friends and have a blast with Clubillion Vegas Casino Slots.

      What is Clubillion Vegas Casino Slots Mod Apk?

      - Clubillion Vegas Casino Slots Mod Apk is a modified version of the original game that offers you unlimited coins and gems. Coins and gems are the main currencies in the game that you need to play slots machines, join clubs, enter tournaments, and buy gifts. Normally, you have to earn coins and gems by playing the game, watching ads, completing tasks, or buying them with real money. But with Clubillion Vegas Casino Slots Mod Apk, you don't have to worry about that. You can have as many coins and gems as you want, and play as much as you want. Clubillion Vegas Casino Slots Mod Apk also allows you to unlock all the slots machines and access premium features that are otherwise restricted or limited in the original game. You can play any slots machine you like, and enjoy the full range of bonus features and jackpots. You can also join any club you want, and enter any tournament or event you want. You can have the ultimate freedom and flexibility with Clubillion Vegas Casino Slots Mod Apk. Clubillion Vegas Casino Slots Mod Apk is a risk-free option to enjoy the game without spending real money. You don't have to worry about losing money or running out of resources. You can play for fun and entertainment, and not for gambling or addiction. You can have a safe and enjoyable gaming experience with Clubillion Vegas Casino Slots Mod Apk.

      How to Download and Install Clubillion Vegas Casino Slots Mod Apk?

- If you are interested in trying out Clubillion Vegas Casino Slots Mod Apk, you need to follow these simple steps:
- Find a reliable source that offers the mod apk file. There are many websites that claim to provide the mod apk file, but not all of them are trustworthy or safe. Some of them may contain viruses, malware, or spyware that can harm your device or steal your personal information. Therefore, you need to be careful and do some research before downloading anything from the internet. You can check the reviews, ratings, comments, and feedback of other users to verify the credibility and quality of the source.
- Enable unknown sources in your device settings. Since the mod apk file is not from the official Google Play Store, you need to allow your device to install apps from unknown sources. To do this, go to your device settings, then security, then unknown sources, and enable it. This will allow you to install the mod apk file on your device.
- Download and install the mod apk file on your device. Once you have found a reliable source and enabled unknown sources, you can download the mod apk file from the website. It will be a .apk file that you need to save on your device storage. Then, you need to locate the file and tap on it to start the installation process. Follow the instructions on the screen and wait for the installation to complete.
- Enjoy playing Clubillion Vegas Casino Slots Mod Apk. After the installation is done, you can launch the game from your app drawer or home screen. You will see that you have unlimited coins and gems, and access to all slots machines and features. You can start playing and having fun with Clubillion Vegas Casino Slots Mod Apk.

      How to Play Clubillion Vegas Casino Slots Mod Apk?

    Playing Clubillion Vegas Casino Slots Mod Apk is very easy and simple. Here are some tips on how to play it:

    • Choose your favorite slot machine from the lobby. You can browse different categories, such as hot slots, new slots, and classic slots, or use the search function to find a specific machine by name or theme. Every machine is unlocked and available to you.
    • Spin the reels to win coins, gems, free spins, and jackpots. Tap the spin button to play and use the plus or minus buttons to adjust your bet size. Matching symbols on the paylines wins coins and gems, and special symbols such as wilds, scatters, and bonus symbols trigger extra rewards like free spins, multipliers, mini-games, and jackpots.
    • Join clubs, chat with other players, and participate in tournaments and events. You can join a club or create your own with friends, chat in real time using text or voice messages, send gifts, share your wins, and invite friends to play with you. Tournaments and events are held regularly and offer more challenges, fun, and prizes.
    

      What are the Benefits of Clubillion Vegas Casino Slots Mod Apk?

    Clubillion Vegas Casino Slots Mod Apk has many benefits that make it a great choice for casino slots lovers. Some of these benefits are:

    • You can play with unlimited resources and never run out of coins or gems. You can play as much as you want without watching ads or completing tasks to earn more resources.
    • You can try out all the slot machines and find your lucky one. Every machine in the game is unlocked, so you can find the one that suits your style and preference and enjoy the full range of bonus features and jackpots it offers.
    • You can have more fun and excitement without worrying about losing money. You don't gamble with real money or risk losing anything, so you can enjoy the thrill of winning big prizes without any stress or pressure.
    

      What are the Drawbacks of Clubillion Vegas Casino Slots Mod Apk?

    Clubillion Vegas Casino Slots Mod Apk is not perfect, and it has some drawbacks that you should be aware of:

    • You may encounter bugs or glitches that affect performance. Since the mod apk file is not from the official source, it may not be compatible with your device or with the latest version of the game, which can cause errors, crashes, or freezes.
    • You may violate the terms and conditions of the original game and risk getting banned. Using a modified client breaks the game's rules, so the developers or moderators may ban or suspend your account, and you may lose your progress, data, or rewards.
    • You may miss out on updates or features that are only available in the official version. The mod apk may not be updated regularly, so you can miss new slot machines, bonus features, events, or improvements.
    

      Conclusion

    Clubillion Vegas Casino Slots Mod Apk is a modified version of the popular social slots casino game that offers unlimited coins and gems, access to all slot machines, and more. It is a great option for casino slots lovers who want more fun and excitement without spending real money. However, it also has drawbacks worth considering before you download and install it: you may encounter bugs or glitches, violate the terms and conditions of the original game, or miss out on updates and features. Weigh the pros and cons carefully before deciding whether to use it.
    

      FAQs

    Here are some frequently asked questions about Clubillion Vegas Casino Slots Mod Apk:

    Q: Is Clubillion Vegas Casino Slots Mod Apk safe to use?
    A: It is safe as long as you download it from a reliable source that does not contain viruses, malware, or spyware. Even so, be careful about your device's security and privacy whenever you use any mod apk file.

    Q: Is Clubillion Vegas Casino Slots Mod Apk legal to use?
    A: No. It violates the terms and conditions of the original game, since it modifies the game and breaks its rules. This may result in your account getting banned or suspended by the game developers or moderators.

    Q: How can I update Clubillion Vegas Casino Slots Mod Apk?
    A: The mod apk may not be updated regularly or include all new features, so check for updates manually from the source you downloaded it from. Alternatively, uninstall the mod apk and install the official version from the Google Play Store.

    Q: How can I uninstall Clubillion Vegas Casino Slots Mod Apk?
    A: Go to your device settings, then Apps, then Clubillion Vegas Casino Slots; tap Uninstall and confirm. Delete the mod apk file from your device storage if you still have it.

    Q: Can I play Clubillion Vegas Casino Slots Mod Apk offline?
    A: No. It is a social slots casino game that requires an internet connection to interact with other players.
    

      -

      clubillion vegas casino slots mod apk


      Download · https://urlca.com/2uO9Sj



      401be4b1e0
      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Temple Run 3 APK for Android - The Ultimate Adventure Game.md b/spaces/congsaPfin/Manga-OCR/logs/Download Temple Run 3 APK for Android - The Ultimate Adventure Game.md deleted file mode 100644 index e211c304679e0b7fb906f0559020a65fb9144678..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download Temple Run 3 APK for Android - The Ultimate Adventure Game.md +++ /dev/null @@ -1,114 +0,0 @@ - -

      Temple Run 3: How to Download and Play the Latest Version of the Popular Mobile Game

      -

      If you are a fan of endless runner games, you must have heard of Temple Run, one of the most successful and addictive mobile games of all time. The game has spawned several sequels and spin-offs, and the latest one is Temple Run 3, which was released in June 2023. In this article, we will tell you everything you need to know about Temple Run 3, including how to download it from APKPure and how to play it like a pro.

      -

      Introduction

      -

      What is Temple Run 3?

      -

      Temple Run 3 is the third installment in the Temple Run series, developed by Imangi Studios. It is an endless runner game where you control an adventurer who has stolen a cursed idol from a temple and must escape from the evil monkeys that are chasing him. Along the way, you have to avoid various obstacles, collect coins and power-ups, and unlock new characters and abilities.

      -

      temple run 3 game download _apkpure


      DOWNLOAD ★★★★★ https://urlca.com/2uObxq



      -

      Why should you play Temple Run 3?

      -

      Temple Run 3 is a fun and exciting game that will keep you entertained for hours. It has stunning graphics, smooth controls, and addictive gameplay. It also has many new features and modes that make it different from the previous versions. For example, you can now explore different environments, such as jungles, volcanoes, and caves. You can also choose from different characters, each with their own skills and outfits. You can also compete with other players online and earn rewards and achievements.

      -

      How to download Temple Run 3 from APKPure

      -

      What is APKPure?

      -

      APKPure is a website that allows you to download Android apps and games that are not available on Google Play Store. It is a safe and reliable source that offers fast downloads and updates. You can also find modded versions of some apps and games that have extra features or unlocked content.

      -

      How to install APKPure on your device

      -

      To install APKPure on your device, you need to follow these steps:

      -
        -
    • Go to [APKPure.com] in your browser.
    
      • -
      • Tap on the "Download APK" button and wait for the file to be downloaded.
      • -
      • Go to your device's settings and enable "Unknown sources" or "Install unknown apps" option.
      • -
      • Go to your file manager and locate the downloaded APK file.
      • -
      • Tap on it and follow the instructions to install it.
      • -
      -
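    If you have a computer with the Android platform tools installed, a rough command-line equivalent of the last two steps is a single `adb install` call over USB. This is a minimal sketch under those assumptions (adb on the PATH, USB debugging enabled on the device); the file name is a placeholder.

    ```python
    import shutil
    import subprocess

    def sideload_apk(apk_path: str) -> None:
        """Install an APK on a USB-connected Android device via adb."""
        if shutil.which("adb") is None:
            raise RuntimeError("adb not found; install the Android platform tools first")
        # -r reinstalls the app while keeping its data if it is already present.
        result = subprocess.run(["adb", "install", "-r", apk_path],
                                capture_output=True, text=True)
        print(result.stdout or result.stderr)

    sideload_apk("apkpure.apk")  # placeholder name for the file from step 2
    ```
    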

      How to find and download Temple Run 3 from APKPure

      -

      To find and download Temple Run 3 from APKPure, you need to follow these steps:

      -
        -
      • Open the APKPure app on your device.
      • -
      • Tap on the search icon and type "Temple Run 3".
      • -
      • Select the game from the results and tap on the "Install" button.
      • -
      • Wait for the game to be downloaded and installed.
      • -
      • Enjoy playing Temple Run 3!
      • -
      -

      How to play Temple Run 3

      -

      The basic gameplay of Temple Run 3

      -

      The basic gameplay of Temple Run 3 is similar to the previous versions. You have to swipe left or right to turn, swipe up to jump, swipe down to slide, and tilt your device to move sideways. You have to avoid obstacles such as walls, gaps, fire, water, and of course, the monkeys. You have to collect coins and power-ups that can help you run faster, shield yourself, or magnetize coins. You also have to complete missions and objectives that can earn you extra coins and gems.

      -

      The new features and modes of Temple Run 3

      -

      Temple Run 3 has many new features and modes that make it more exciting and challenging than the previous versions. Some of them are:

      -

    
      -
        -
      • New environments: You can now run through different environments, such as jungles, volcanoes, and caves. Each environment has its own obstacles, scenery, and secrets. You can also switch between environments during your run by taking different paths.
      • -
      • New characters: You can now choose from different characters, each with their own skills and outfits. Some of the characters are Guy Dangerous, Scarlett Fox, Barry Bones, Karma Lee, Montana Smith, and Zack Wonder. You can also customize your characters with different hats, masks, and accessories.
      • -
      • New power-ups: You can now use different power-ups that can give you an edge in your run. Some of the power-ups are Boost, Shield, Coin Magnet, Gem Magnet, Score Multiplier, and Coin Rain. You can also upgrade your power-ups to make them last longer or have more effects.
      • -
      • New modes: You can now play different modes that can test your skills and endurance. Some of the modes are Classic Mode, Endless Mode, Time Trial Mode, and Challenge Mode. You can also compete with other players online and see who can run the farthest or the fastest.
      • -
      -

      The tips and tricks to master Temple Run 3

      -

      Temple Run 3 is a game that requires quick reflexes and concentration. Here are some tips and tricks that can help you master the game:

      -
        -
      • Practice: The best way to improve your skills is to practice regularly. Try to run as far as you can and learn from your mistakes. You can also replay the tutorial to refresh your memory on the controls and the basics.
      • -
      • Upgrade: The best way to enhance your performance is to upgrade your power-ups and abilities. Use your coins and gems to buy upgrades that can make your power-ups last longer or have more effects. You can also upgrade your abilities that can increase your speed, coin value, or score multiplier.
      • -
      • Strategize: The best way to optimize your run is to strategize your moves and choices. Use your power-ups wisely and at the right time. For example, use Boost when you need to escape from the monkeys or cross a gap. Use Shield when you need to protect yourself from obstacles or fire. Use Coin Magnet when you see a lot of coins ahead. Use Gem Magnet when you see a gem in a hard-to-reach place.
      • -
      • Explore: The best way to discover new things is to explore different paths and environments. Try to take different turns and see where they lead you. You might find hidden secrets, shortcuts, or bonuses. You might also encounter new obstacles or challenges that can spice up your run.
      • -
      -

      Conclusion

      -

      Summary of the main points

      -

      In conclusion, Temple Run 3 is a fun and exciting game that will keep you entertained for hours. It has stunning graphics, smooth controls, and addictive gameplay. It also has many new features and modes that make it different from the previous versions. You can download it from APKPure and play it on your device easily.

      -

      Call to action for the readers

      -

      If you are looking for a game that will challenge your reflexes and concentration, Temple Run 3 is the game for you. Download it now from APKPure and start running for your life! Don't forget to share your high scores and achievements with your friends and family!

      -

      FAQs

      -

      Here are some frequently asked questions about Temple Run 3:

      -
        -
    1. Is Temple Run 3 free?
       Yes, Temple Run 3 is free to download and play. However, it contains ads and in-app purchases that can enhance your gaming experience.
    2. Is Temple Run 3 safe?
       Yes, Temple Run 3 is safe to download and play. It does not contain any viruses or malware that can harm your device or data.
    3. Is Temple Run 3 compatible with my device?
       Temple Run 3 is compatible with most Android devices that run Android 4.4 or higher. However, some devices may experience lagging or crashing issues due to low memory or performance.
    4. How do I update Temple Run 3?
       You can update Temple Run 3 by using the APKPure app. Just open the app and tap on the "Update" button next to the game. You can also enable the "Auto-update" option to get the latest version automatically.
    5. How do I contact the developers of Temple Run 3?
       You can contact the developers of Temple Run 3 by visiting their website, [imangistudios.com], or by sending them an email at [support@imangistudios.com]. You can also follow them on Facebook, Twitter, and Instagram for the latest news and updates.
    

      401be4b1e0
      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Yes Or No by Jass Manak - Punjabi Pop Song 2020.md b/spaces/congsaPfin/Manga-OCR/logs/Download Yes Or No by Jass Manak - Punjabi Pop Song 2020.md deleted file mode 100644 index 1fd7e3bcece0af71a821b726812a95805aedd340..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download Yes Or No by Jass Manak - Punjabi Pop Song 2020.md +++ /dev/null @@ -1,110 +0,0 @@ -
      -

      Yes or No Jass Manak MP3 Song Download Pendujatt

      -

      If you are a fan of Punjabi pop music, you must have heard of Jass Manak and his hit song Yes or No. This song has been ruling the charts since its release in August 2020 and has garnered millions of views and streams on various platforms. But if you want to download this song in MP3 format for offline listening, you might be wondering where to find it. One of the popular websites that offer MP3 downloads of Punjabi songs is Pendujatt. In this article, we will tell you everything you need to know about Yes or No Jass Manak MP3 song download Pendujatt.

      -

      yes or no jass manak mp3 song download pendujatt


      DOWNLOADhttps://urlca.com/2uOeVr



      -

      Introduction

      -

      Who is Jass Manak?

      -

      Jass Manak is a young and talented Punjabi singer, songwriter, and composer who rose to fame with his debut song Prada in 2018. He is also the founder of Geet MP3, a music label that promotes new and upcoming artists in the Punjabi music industry. Jass Manak has delivered many hit songs such as Suit Punjabi, Lehanga, Boss, Shopping, and Bad Munda. He has also collaborated with other famous singers like Karan Randhawa, Guri, and Sidhu Moose Wala.

      -

      What is Yes or No song?

      -

      Yes or No is a romantic Punjabi song sung by Jass Manak and composed by Sharry Nexus. It was released on August 13, 2020 as a single by Geet MP3. The lyrics of the song are written by Jass Manak himself and express his love and proposal to a girl. The song has a catchy melody and a groovy beat that makes it perfect for dancing and singing along. The video of the song features Jass Manak and actress Larissa Bonesi as the lead pair and shows their cute chemistry.

      -

      Why is it popular?

      -

      Yes or No song has become very popular among the fans of Punjabi music because of its appealing music, lyrics, and video. The song has received positive reviews from critics and audiences alike and has been praised for its freshness and originality. The song has also achieved many milestones such as crossing 100 million views on YouTube, becoming the most streamed Punjabi song on Spotify, and topping the charts on various music platforms. The song has also inspired many covers, remixes, and dance videos by fans and celebrities.

      -

      How to download Yes or No song from Pendujatt?

      -

      What is Pendujatt?

      -

      Pendujatt is a website that offers free MP3 downloads of Punjabi songs, albums, and movies. It also provides information about the latest releases, upcoming projects, and news related to the Punjabi music industry. Pendujatt has a huge collection of songs from different genres such as pop, folk, bhangra, rap, and more. You can also find songs from other languages such as Hindi, English, Tamil, Telugu, etc. on Pendujatt.

      -

      Steps to download Yes or No song from Pendujatt

      -

      If you want to download Yes or No song from Pendujatt, you can follow these simple steps:

      -
        -
    1. Go to the official website of Pendujatt at www.pendujatt.net.
    2. In the search box at the top right corner, type "Yes or No Jass Manak" and click on the search icon.
    3. You will see a list of results related to your query. Click on the one that says "Yes Or No - Jass Manak - Single Track (2020)".
    4. You will be redirected to a new page with the details of the song, such as singer, composer, lyricist, and duration, along with a download button at the bottom of the page.
    5. Click on the download button and choose the quality and format of the MP3 file you want. You can choose from 48 kbps, 128 kbps, or 320 kbps.
    6. After selecting the quality and format, click on the save button and wait for the download to complete.
    7. Once the download is finished, you can enjoy listening to Yes or No offline on your device.
    
      -

      Benefits of downloading from Pendujatt

      -

      There are many benefits of downloading Yes or No song from Pendujatt, such as:

      -
        -
      • You can download the song for free without any registration or subscription.
      • -
      • You can choose the quality and format of the MP3 file according to your preference and device compatibility.
      • -
      • You can access a large collection of Punjabi songs and other languages on Pendujatt.
      • -
      • You can also get information about the latest releases, upcoming projects, and news related to the Punjabi music industry on Pendujatt.
      • -
      -

      Alternatives to Pendujatt for downloading Yes or No song

      -

      If you are looking for some alternatives to Pendujatt for downloading Yes or No song, you can try these options:

      -

      Apple Music

      -

      Apple Music is a popular music streaming service that offers access to over 75 million songs, including Yes or No by Jass Manak. You can also download songs for offline listening on your Apple devices. Apple Music also provides personalized recommendations, curated playlists, live radio, podcasts, and more. You can get a free trial of Apple Music for three months and then pay $9.99 per month for an individual plan or $14.99 per month for a family plan.

      -

    
      -

      JioSaavn

      -

      JioSaavn is another music streaming service that specializes in Indian music, including Punjabi songs. You can find Yes or No by Jass Manak on JioSaavn and stream it online or download it for offline listening. JioSaavn also offers features such as lyrics, podcasts, radio, original shows, and more. You can get a free trial of JioSaavn Pro for seven days and then pay $4.99 per month for unlimited downloads and ad-free listening.

      -

      Hungama Music

      -

      Hungama Music is a music streaming and downloading app that offers a variety of songs from different languages and genres. You can listen to Yes or No by Jass Manak on Hungama Music and download it in MP3 format. Hungama Music also gives you access to videos, podcasts, live shows, and more. You can earn coins by listening to songs and redeem them for subscriptions or downloads. You can also get a free trial of Hungama Music Pro for one month and then pay $4.99 per month for unlimited downloads and ad-free listening.

      -

      Conclusion

      -

      Summary of the article

      -

      In this article, we have discussed everything you need to know about Yes or No Jass Manak MP3 song download Pendujatt. We have given you an overview of who is Jass Manak and what is Yes or No song. We have also explained why this song is popular and how to download it from Pendujatt. We have also suggested some alternatives to Pendujatt for downloading this song. We hope you have found this article helpful and informative.

      -

      Call to action

      -

      If you are a fan of Jass Manak and his songs, you should definitely check out his other songs such as Prada, Lehanga, Boss, Shopping, and Bad Munda. You can also follow him on his social media accounts to stay updated with his latest news and releases. And if you liked this article, please share it with your friends and family who love Punjabi music. Thank you for reading!

      -

      FAQs

      -

      Here are some frequently asked questions about Yes or No Jass Manak MP3 song download Pendujatt:

      -
        -
    1. Is Pendujatt legal?
       Pendujatt is not a legal website, as it does not have the permission or license to distribute the songs it offers for download. Downloading songs from Pendujatt may violate copyright laws and may result in legal action.
    2. Is Pendujatt safe?
       Pendujatt is not a safe website, as it may contain viruses, malware, spyware, and pop-ups that can harm your device or compromise your privacy. Downloading songs from it may also expose you to unwanted ads and redirects that may annoy you or lead you to malicious websites.
    3. How can I download Yes or No legally?
       Use the official platforms that have the rights to distribute the song, such as Apple Music, JioSaavn, Hungama Music, Spotify, YouTube Music, or Amazon Music. These platforms may require you to pay a subscription fee or a per-download fee to access the song.
    4. How can I support Jass Manak and his music?
       Stream or download his songs from the official platforms that pay him royalties, buy his albums or merchandise if available, follow him on his social media accounts and show him your appreciation, and attend his live shows or concerts if possible.
    5. Where can I find more information about Jass Manak and his songs?
       You can visit his official website at www.jassmanak.com, follow his social media accounts on Instagram, Facebook, Twitter, and Snapchat, visit his YouTube channel at www.youtube.com/c/JassManakOfficial, or read his interviews, articles, reviews, and news on various online platforms.
    

        -

      197e85843d
      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Pokmon GO APK 0.239.2 - Free Download for Android Devices.md b/spaces/congsaPfin/Manga-OCR/logs/Pokmon GO APK 0.239.2 - Free Download for Android Devices.md deleted file mode 100644 index 18cbe6db63324b66ec33a1d943b97f70957bc02e..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Pokmon GO APK 0.239.2 - Free Download for Android Devices.md +++ /dev/null @@ -1,111 +0,0 @@ - -

      Pokemon Go APK 239.2: What's New and How to Download It

      -

      If you are a fan of Pokemon Go, you probably know that this game is constantly updated with new features, events, and fixes. But did you know that you can also download the latest version of Pokemon Go APK file and install it on your Android device? This way, you can enjoy the game even before it is officially released in your region or on Google Play Store.

      -

      pokemon go apk 239.2


      Download ✯✯✯ https://urlca.com/2uO7kf



      -

      But what is an APK file and why do you need it? And what are the benefits of downloading the latest version of Pokemon Go APK? In this article, we will answer these questions and show you how to download and install Pokemon Go APK 239.2 on your Android device.

      -

      What's New in Pokemon Go APK 239.2

      -

      Pokemon Go APK 239.2 is the latest version of the game as of June 2023. It brings a lot of new features, improvements, and bug fixes to the game. Here are some of the highlights:

      -

      New features and improvements

      -
        -
      • The Season of Mischief event has begun, featuring the mythical Pokemon Hoopa. You can encounter Hoopa by completing special research tasks and unlock its Unbound form by collecting Hoopa Candy.
      • -
      • New Pokemon from various regions have been added to the game, including Inkay, Malamar, Skrelp, Dragalge, Clauncher, Clawitzer, Binacle, Barbaracle, Phantump, Trevenant, Pumpkaboo, Gourgeist, Espurr, Meowstic, Swirlix, Slurpuff, Spritzee, Aromatisse, Dedenne, Carbink, Klefki, Bergmite, Avalugg, Noibat, Noivern, Xerneas, Yveltal, Zygarde, Diancie, Hoopa Confined, Hoopa Unbound.
      • -
      • New raids, research tasks, and rewards have been added to the game, featuring some of the new Pokemon and Hoopa.
      • -
      • The game's performance and stability have been improved, especially on older devices.
      • -
      -

      Bug fixes and issues

      -
        -
      • An issue with Adventure Sync not working properly has been fixed. You can now track your distance and earn rewards even when the game is closed.
      • -
      • An issue with incorrect distance tracking has been fixed. You can now hatch eggs and earn buddy candy more accurately.
      • -
      • An issue with missing sound effects and notifications has been fixed. You can now hear the game sounds and receive alerts for nearby Pokemon, raids, and events.
      • -
      -

      How to Download and Install Pokemon Go APK 239.2

      -

      If you want to download and install Pokemon Go APK 239.2 on your Android device, you need to follow some simple steps. But before you do that, you need to make sure that your device meets the requirements and that you take some precautions.

      -

      Requirements and precautions

      -
        -
      • Your Android device must have at least 4 GB of RAM and run on Android 6 or higher. If your device is older or has lower specifications, you may experience lag, crashes, or errors while playing the game.
      • -
      • You need to have enough storage space on your device for the APK file and the game data. The APK file is about 100 MB in size, while the game data may vary depending on your region and progress. You can check how much space you have by going to your device settings and looking for storage or memory.
      • -
      • You need to have a stable internet connection and a Google account to play the game. The game requires an online connection to access the game servers, update the game data, and sync your progress. You also need a Google account to log in to the game and access your Google Play Services.
      • -
      • You need to enable unknown sources in your device settings to install the APK file. This is because the APK file is not from the official Google Play Store and may not be verified by Google. To enable unknown sources, go to your device settings, look for security or privacy, and toggle on the option that allows installing apps from unknown sources.
      • -
      • You need to backup your game data before installing the APK file. This is to prevent losing your progress, items, or settings in case something goes wrong during the installation process. You can backup your game data by going to your device settings, looking for backup or cloud, and enabling the option that backs up your app data.
      • -
      -

      Steps to download and install the APK file

      -
        -
    1. Find a reliable source for the APK file. Many websites offer APK files, but not all of them are safe or trustworthy; some may contain malware, viruses, or outdated versions. Only download from reputable sources with positive reviews and ratings from other users, such as [Pokémon GO 0.239.2 APK Download] or [Pokémon GO APK (Android Game)].
    2. Download the APK file directly to your device using your browser or a download manager app, or download it to your computer and transfer it to your device using a USB cable or a wireless method such as Bluetooth or Wi-Fi Direct.
    3. Locate the APK file in a file manager app (such as Files by Google, ES File Explorer, Solid Explorer, or the default one on your device), typically in Downloads, Documents, or Transfers, and tap on it to start the installation.
    4. Follow the on-screen instructions and grant the necessary permissions, such as access to your location, camera, contacts, and storage; the app needs these to work properly.
    5. Launch the game from your home screen or app drawer, log in with your Google account, agree to the terms of service and privacy policy, and start exploring the new features and improvements. A small version-check sketch follows this list.
    
      -
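    Before step 1, it can be worth confirming that your device actually meets the Android 6+ requirement mentioned above. A minimal sketch, assuming adb is installed and USB debugging is enabled on the connected device:

    ```python
    import subprocess

    def android_release() -> str:
        """Return the connected device's Android version string, e.g. '11'."""
        out = subprocess.run(
            ["adb", "shell", "getprop", "ro.build.version.release"],
            capture_output=True, text=True, check=True,  # raises if adb fails
        )
        return out.stdout.strip()

    version = android_release()
    major = int(version.split(".")[0])
    print(f"Android {version}: {'meets' if major >= 6 else 'below'} the minimum requirement")
    ```
    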

      Conclusion

      -

      Pokemon Go is one of the most popular and fun games that you can play on your Android device. It lets you catch, battle, and trade Pokemon in the real world using augmented reality technology. By downloading and installing Pokemon Go APK 239.2, you can enjoy the latest version of the game with new features, improvements, and bug fixes.

      -

    
      -

      If you want to download and install Pokemon Go APK 239.2 on your Android device, you can follow the steps that we have explained in this article. You just need to find a reliable source for the APK file, download and install it on your device, and launch the game. It's that simple!

      -

      We hope that this article has helped you learn more about Pokemon Go APK 239.2 and how to download and install it on your Android device. If you have any questions, comments, or feedback, feel free to leave them below. We would love to hear from you!

      -

      For more information about Pokemon Go and other related topics, you can visit the official website of the game or check out some of these resources:

      -
        -
      • [Pokémon GO | Pokémon Video Games]
      • -
      • [Pokémon GO - Apps on Google Play]
      • -
      • [Pokémon GO Support]
      • -
      -

      FAQs

      -

      Here are some of the frequently asked questions about Pokemon Go APK 239.2 and their answers:

      -

      Q: Is Pokemon Go APK 239.2 safe to download and install?

      -

      A: Yes, as long as you download the APK file from a reputable source and follow the instructions carefully, it is safe to download and install Pokemon Go APK 239.2 on your Android device. However, you should always be careful when downloading and installing any APK file from unknown sources, as they may contain malware or viruses that can harm your device or compromise your privacy.

      -

      Q: Will I lose my progress or get banned if I install Pokemon Go APK 239.2?

      -

      A: No, you will not lose your progress or get banned if you install Pokemon Go APK 239.2 on your Android device. Your progress is saved on your Google account and synced with the game servers, so you can access it from any device that you log in with. However, you should always backup your game data before installing any APK file, just in case something goes wrong during the installation process. You will also not get banned for installing Pokemon Go APK 239.2, as long as you do not use any cheats, hacks, or mods that violate the terms of service and privacy policy of the game.

      -

      Q: What are the differences between Pokemon Go APK 239.2 and the official version of the game?

      -

      A: Pokemon Go APK 239.2 is the same as the official version of the game that is available on Google Play Store, except that it may have some features, events, or fixes that are not yet released in your region or on Google Play Store. By downloading and installing Pokemon Go APK 239.2, you can enjoy the game even before it is officially released in your region or on Google Play Store.

      -

      Q: How can I update Pokemon Go APK 239.2 to the latest version of the game?

      -

      A: You can update Pokemon Go APK 239.2 to the latest version of the game by downloading and installing the new APK file from a reliable source, following the same steps that we have explained in this article. Alternatively, you can also update the game from Google Play Store, if it is available in your region or on Google Play Store.

      -

      Q: How can I uninstall Pokemon Go APK 239.2 from my Android device?

      -

      A: You can uninstall Pokemon Go APK 239.2 from your Android device by going to your device settings, looking for apps or applications, finding and selecting Pokemon Go, and tapping on uninstall. You can also uninstall the game by long-pressing its icon on your home screen or app drawer and dragging it to the uninstall option.
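    The same removal can also be done from a computer with `adb uninstall`, which takes the app's package name rather than its display name. A small sketch; the official game ships as com.nianticlabs.pokemongo, but a sideloaded repack may use a different package name.

    ```python
    import subprocess

    # adb prints "Success" when the package is removed.
    result = subprocess.run(["adb", "uninstall", "com.nianticlabs.pokemongo"],
                            capture_output=True, text=True)
    print(result.stdout.strip() or result.stderr.strip())
    ```
    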

      197e85843d
      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Tekken 3 Cheat APK Unlock All Characters and Modes.md b/spaces/congsaPfin/Manga-OCR/logs/Tekken 3 Cheat APK Unlock All Characters and Modes.md deleted file mode 100644 index 2bca40b1eb69b9ccb875b02eb936634399bf69f6..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Tekken 3 Cheat APK Unlock All Characters and Modes.md +++ /dev/null @@ -1,130 +0,0 @@ - -

      Tekken 3 Cheat APK: How to Download and Use It

      -

      Tekken 3 is one of the most popular and classic fighting games of all time. It has a large and diverse cast of characters, a variety of modes and challenges, and a fast and fluid gameplay. However, if you want to unlock all the characters, modes, and secrets in the game, you might need some help from a cheat app. In this article, we will show you how to download and use Tekken 3 Cheat APK, a free app that allows you to modify the game settings and enjoy unlimited features.

      -

      tekken 3 cheat apk


      Download Ziphttps://urlca.com/2uO8ot



      -

      What is Tekken 3?

      -

      Tekken 3 is a fighting game that was released in 1997 for the arcades and in 1998 for the PlayStation. It is the third entry in the Tekken series, which is known for its realistic and martial arts-based combat system. The game features a largely new cast of characters, including the debut of several now-staple characters such as Jin Kazama, Ling Xiaoyu, Bryan Fury, Eddy Gordo, Hwoarang, Forest Law and Julia Chang, with a total of twenty-three characters. The home version includes a new beat 'em up mode called Tekken Force, and the bonus Tekken Ball mode. The game was a major hit for both arcades and consoles, selling more than 8 million PlayStation copies worldwide. Since its release, Tekken 3 has been cited as one of the greatest video games of all time.

      -

      Features of Tekken 3

      -

      Here are some of the features that make Tekken 3 so amazing:

      -
        -
      • A large and diverse cast of characters: Tekken 3 features 23 different characters, each with their own unique moves, combos, and styles. You can choose from classic characters like Jin Kazama, Paul Phoenix, Nina Williams, and Yoshimitsu, or new characters like Ling Xiaoyu, Eddy Gordo, Hwoarang, and Julia Chang.
      • -
      • A variety of modes and challenges: Tekken 3 offers more than just the standard Arcade and Versus modes. You can also play in Team Battle, Time Attack, Survival, Practice, Tekken Force, and Tekken Ball modes. Each mode has its own objectives and rewards.
      • -
      • A fast and fluid gameplay: Tekken 3 is known for its speed and smoothness of gameplay. The game adds emphasis on the third axis by allowing characters to sidestep in or out of the background. Fighters also jump more reasonable heights than in the previous games. The game also introduces new improvements such as quicker recoveries from knockdowns, more escapes from tackles and stuns, more moves with juggling enabled, and newly created combo throws.
      • -
      -

      Characters of Tekken 3

      -

      Tekken 3 features a largely new cast of characters, with only six returning from the previous games. Here are some of the most popular characters in Tekken 3:

    Some of the most popular fighters include Jin Kazama, the new face of the series; returning veterans such as Paul Phoenix, Nina Williams, and Yoshimitsu; and newcomers like Ling Xiaoyu, Hwoarang, Eddy Gordo, and Julia Chang, each with their own moves, combos, and style.
    

      What is Tekken 3 Cheat APK?

      -

      Tekken 3 Cheat APK is a free app that allows you to modify the game settings and enjoy unlimited features in Tekken 3. With this app, you can unlock all the characters, modes, and secrets in the game, as well as customize the difficulty, health, damage, and time settings. You can also use cheats such as infinite health, one-hit kill, no gravity, and turbo mode. Tekken 3 Cheat APK is compatible with both the original and the modded versions of Tekken 3.

      -

      Benefits of Tekken 3 Cheat APK

      -

      Here are some of the benefits of using Tekken 3 Cheat APK:

      -

    
      -
        -
      • You can access all the features of the game without spending any money or time. You can play with any character, mode, or stage you want, without having to complete any requirements or challenges.
      • -
      • You can have more fun and excitement in the game by using cheats and hacks. You can make the game easier or harder according to your preference, or experiment with different combinations of cheats and settings.
      • -
      • You can explore and discover new aspects of the game that you might have missed otherwise. You can find hidden secrets, easter eggs, glitches, and bugs in the game, or create your own scenarios and stories.
      • -
      -

      Risks of Tekken 3 Cheat APK

      -

      However, there are also some risks of using Tekken 3 Cheat APK that you should be aware of:

      -
        -
      • You might lose the original charm and challenge of the game by using cheats and hacks. You might get bored or lose interest in the game if you have everything unlocked and easy to achieve.
      • -
      • You might encounter some technical issues or errors in the game by using cheats and hacks. The game might crash, freeze, lag, or glitch if you use too many or incompatible cheats or settings.
      • -
      • You might violate the terms and conditions of the game by using cheats and hacks. The game developers might ban or suspend your account if they detect that you are using an unauthorized app or modification.
      • -
      -

      How to Download and Install Tekken 3 Cheat APK?

      -

      If you want to download and install Tekken 3 Cheat APK on your Android device, you need to follow these steps:

      -

      Step 1: Enable Unknown Sources

      -

      Before you can install any app that is not from the Google Play Store, you need to enable unknown sources on your device. To do this, go to Settings > Security > Unknown Sources and toggle it on. This will allow you to install apps from other sources than the official store.

      -

      Step 2: Download the APK File

      -

      Next, you need to download the APK file of Tekken 3 Cheat APK from a reliable source. You can use this link to download the latest version of the app. The file size is about 5 MB and it requires Android 4.0 or higher to run. Make sure you have enough storage space on your device before downloading.

      -

      Step 3: Install the APK File

      -

      After downloading the APK file, you need to install it on your device. To do this, locate the file in your file manager or downloads folder and tap on it. You will see a pop-up window asking for your permission to install the app. Tap on Install and wait for the installation process to finish. Once done, you will see a confirmation message that says "App installed".
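    One quick way to confirm from a computer that the install in step 3 actually landed is to filter the device's package list over adb. A minimal sketch, assuming adb is set up; the package name below is a hypothetical placeholder, since the real name depends on the APK you installed.

    ```python
    import subprocess

    def is_installed(package: str) -> bool:
        """Return True if the package name appears in the device's package list."""
        out = subprocess.run(["adb", "shell", "pm", "list", "packages", package],
                             capture_output=True, text=True)
        return f"package:{package}" in out.stdout

    # Hypothetical name; check the APK's real package with `aapt dump badging <file>`.
    print(is_installed("com.example.tekken3"))
    ```
    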

      -

      How to Use Tekken 3 Cheat APK?

      -

      Now that you have installed Tekken 3 Cheat APK on your device, you can use it to modify the game settings and enjoy unlimited features in Tekken 3. Here are some tips and tricks for using Tekken 3 Cheat APK:

      -

      Tips and Tricks for Tekken 3 Cheat APK

      -
        -
      • To launch Tekken 3 Cheat APK, you need to open it from your app drawer or home screen. You will see a simple interface with a list of cheats and settings that you can enable or disable.
      • -
      • To unlock all the characters in Tekken 3, you need to enable the "Unlock All Characters" cheat in Tekken 3 Cheat APK. This will allow you to select any character from the character selection screen without having to complete any requirements.
    • To unlock all the modes in Tekken 3, you need to enable the corresponding cheat in Tekken 3 Cheat APK. This will allow you to play in any mode from the mode selection screen without having to complete any challenges.
    
      • To customize the difficulty, health, damage, and time settings in Tekken 3, you need to enable the "Custom Settings" cheat in Tekken 3 Cheat APK. This will allow you to adjust the sliders for each setting according to your preference.
      • -
      • To use other cheats such as infinite health, one-hit kill, no gravity, and turbo mode in Tekken 3, you need to enable the corresponding cheats in Tekken 3 Cheat APK. These cheats will affect the gameplay and make it more fun or challenging.
      • -
      • To save your changes and apply them to the game, you need to tap on the "Save" button in Tekken 3 Cheat APK. This will save your settings and close the app. You can then launch Tekken 3 from your app drawer or home screen and enjoy the modified game.
      • -
      -

      Conclusion

      -

      Tekken 3 is a classic and popular fighting game that has many features and modes to offer. However, if you want to unlock all the features and enjoy unlimited fun in the game, you might need some help from a cheat app. Tekken 3 Cheat APK is a free app that allows you to modify the game settings and use cheats and hacks in Tekken 3. You can download and install Tekken 3 Cheat APK on your Android device by following the steps above. You can also use Tekken 3 Cheat APK to unlock all the characters, modes, and secrets in the game, as well as customize the difficulty, health, damage, and time settings. You can also use other cheats such as infinite health, one-hit kill, no gravity, and turbo mode to make the game more fun or challenging. However, you should also be aware of the risks of using Tekken 3 Cheat APK, such as losing the original charm and challenge of the game, encountering technical issues or errors in the game, or violating the terms and conditions of the game. Therefore, you should use Tekken 3 Cheat APK at your own risk and discretion.

      -

      FAQs

      -

      Here are some of the frequently asked questions about Tekken 3 Cheat APK:

      -
        -
      • Q: Is Tekken 3 Cheat APK safe to use?
      • -
      • A: Tekken 3 Cheat APK is a third-party app that is not affiliated with or endorsed by the official game developers. Therefore, it is not guaranteed to be safe or secure to use. You might encounter some viruses, malware, or spyware when downloading or installing Tekken 3 Cheat APK. You might also damage your device or compromise your personal data by using Tekken 3 Cheat APK. Therefore, you should use Tekken 3 Cheat APK at your own risk and discretion.
      • -
      • Q: Is Tekken 3 Cheat APK legal to use?
      • -
• A: Tekken 3 Cheat APK modifies the game settings and applies cheats and hacks in Tekken 3, which violates the game's terms and conditions and may infringe the developers' intellectual property rights. You could face legal consequences or penalties for using it, so do so at your own risk and discretion.
      • -
      • Q: Does Tekken 3 Cheat APK work on all devices?
      • -
      • A: Tekken 3 Cheat APK is compatible with Android devices that run on Android 4.0 or higher. However, it might not work on some devices due to compatibility issues or technical limitations. You might also need to root your device or grant some permissions to use Tekken 3 Cheat APK. Therefore, you should check your device specifications and requirements before using Tekken 3 Cheat APK.
      • -
      • Q: Does Tekken 3 Cheat APK work online?
      • -
      • A: No, Tekken 3 Cheat APK does not work online. It only works offline on your device. You cannot use Tekken 3 Cheat APK to play online multiplayer modes or connect with other players online. You might also get banned or suspended from online services if you try to use Tekken 3 Cheat APK online.
      • -
      • Q: Where can I get more information about Tekken 3 Cheat APK?
      • -
      • A: You can get more information about Tekken 3 Cheat APK from its official website or its social media pages. You can also contact its developers or support team for any queries or feedback about Tekken 3 Cheat APK.
      • -

      197e85843d
      -
      -
      \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/Dhoom Dadakka Full Movie 720p The Crazy Quest for an Heir in Mumbais Underworld.md b/spaces/contluForse/HuggingGPT/assets/Dhoom Dadakka Full Movie 720p The Crazy Quest for an Heir in Mumbais Underworld.md deleted file mode 100644 index 844c8826721bc2204c3afe7f7d7bfd753a48c922..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Dhoom Dadakka Full Movie 720p The Crazy Quest for an Heir in Mumbais Underworld.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Dhoom Dadakka Full Movie 720p


      Download ✔✔✔ https://ssurll.com/2uzvjQ



      -
      - aaccfb2cb3
      -
      -
      -

      diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/utils/env.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/utils/env.py deleted file mode 100644 index a0c6e64a63f8a3ed813b749c134823a0ef69964c..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/utils/env.py +++ /dev/null @@ -1,95 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -"""This file holding some environment constant for sharing by other files.""" - -import os.path as osp -import subprocess -import sys -from collections import defaultdict - -import cv2 -import torch - -import annotator.mmpkg.mmcv as mmcv -from .parrots_wrapper import get_build_config - - -def collect_env(): - """Collect the information of the running environments. - - Returns: - dict: The environment information. The following fields are contained. - - - sys.platform: The variable of ``sys.platform``. - - Python: Python version. - - CUDA available: Bool, indicating if CUDA is available. - - GPU devices: Device type of each GPU. - - CUDA_HOME (optional): The env var ``CUDA_HOME``. - - NVCC (optional): NVCC version. - - GCC: GCC version, "n/a" if GCC is not installed. - - PyTorch: PyTorch version. - - PyTorch compiling details: The output of \ - ``torch.__config__.show()``. - - TorchVision (optional): TorchVision version. - - OpenCV: OpenCV version. - - MMCV: MMCV version. - - MMCV Compiler: The GCC version for compiling MMCV ops. - - MMCV CUDA Compiler: The CUDA version for compiling MMCV ops. - """ - env_info = {} - env_info['sys.platform'] = sys.platform - env_info['Python'] = sys.version.replace('\n', '') - - cuda_available = torch.cuda.is_available() - env_info['CUDA available'] = cuda_available - - if cuda_available: - devices = defaultdict(list) - for k in range(torch.cuda.device_count()): - devices[torch.cuda.get_device_name(k)].append(str(k)) - for name, device_ids in devices.items(): - env_info['GPU ' + ','.join(device_ids)] = name - - from annotator.mmpkg.mmcv.utils.parrots_wrapper import _get_cuda_home - CUDA_HOME = _get_cuda_home() - env_info['CUDA_HOME'] = CUDA_HOME - - if CUDA_HOME is not None and osp.isdir(CUDA_HOME): - try: - nvcc = osp.join(CUDA_HOME, 'bin/nvcc') - nvcc = subprocess.check_output( - f'"{nvcc}" -V | tail -n1', shell=True) - nvcc = nvcc.decode('utf-8').strip() - except subprocess.SubprocessError: - nvcc = 'Not Available' - env_info['NVCC'] = nvcc - - try: - gcc = subprocess.check_output('gcc --version | head -n1', shell=True) - gcc = gcc.decode('utf-8').strip() - env_info['GCC'] = gcc - except subprocess.CalledProcessError: # gcc is unavailable - env_info['GCC'] = 'n/a' - - env_info['PyTorch'] = torch.__version__ - env_info['PyTorch compiling details'] = get_build_config() - - try: - import torchvision - env_info['TorchVision'] = torchvision.__version__ - except ModuleNotFoundError: - pass - - env_info['OpenCV'] = cv2.__version__ - - env_info['MMCV'] = mmcv.__version__ - - try: - from annotator.mmpkg.mmcv.ops import get_compiler_version, get_compiling_cuda_version - except ModuleNotFoundError: - env_info['MMCV Compiler'] = 'n/a' - env_info['MMCV CUDA Compiler'] = 'n/a' - else: - env_info['MMCV Compiler'] = get_compiler_version() - env_info['MMCV CUDA Compiler'] = get_compiling_cuda_version() - - return env_info diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/data/datasets/lvis.py 
b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/data/datasets/lvis.py deleted file mode 100644 index 6e1e6ecc657e83d6df57da342b0655177402c514..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/data/datasets/lvis.py +++ /dev/null @@ -1,241 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import logging -import os -from fvcore.common.timer import Timer - -from annotator.oneformer.detectron2.data import DatasetCatalog, MetadataCatalog -from annotator.oneformer.detectron2.structures import BoxMode -from annotator.oneformer.detectron2.utils.file_io import PathManager - -from .builtin_meta import _get_coco_instances_meta -from .lvis_v0_5_categories import LVIS_CATEGORIES as LVIS_V0_5_CATEGORIES -from .lvis_v1_categories import LVIS_CATEGORIES as LVIS_V1_CATEGORIES -from .lvis_v1_category_image_count import LVIS_CATEGORY_IMAGE_COUNT as LVIS_V1_CATEGORY_IMAGE_COUNT - -""" -This file contains functions to parse LVIS-format annotations into dicts in the -"Detectron2 format". -""" - -logger = logging.getLogger(__name__) - -__all__ = ["load_lvis_json", "register_lvis_instances", "get_lvis_instances_meta"] - - -def register_lvis_instances(name, metadata, json_file, image_root): - """ - Register a dataset in LVIS's json annotation format for instance detection and segmentation. - - Args: - name (str): a name that identifies the dataset, e.g. "lvis_v0.5_train". - metadata (dict): extra metadata associated with this dataset. It can be an empty dict. - json_file (str): path to the json instance annotation file. - image_root (str or path-like): directory which contains all the images. - """ - DatasetCatalog.register(name, lambda: load_lvis_json(json_file, image_root, name)) - MetadataCatalog.get(name).set( - json_file=json_file, image_root=image_root, evaluator_type="lvis", **metadata - ) - - -def load_lvis_json(json_file, image_root, dataset_name=None, extra_annotation_keys=None): - """ - Load a json file in LVIS's annotation format. - - Args: - json_file (str): full path to the LVIS json annotation file. - image_root (str): the directory where the images in this json file exists. - dataset_name (str): the name of the dataset (e.g., "lvis_v0.5_train"). - If provided, this function will put "thing_classes" into the metadata - associated with this dataset. - extra_annotation_keys (list[str]): list of per-annotation keys that should also be - loaded into the dataset dict (besides "bbox", "bbox_mode", "category_id", - "segmentation"). The values for these keys will be returned as-is. - - Returns: - list[dict]: a list of dicts in Detectron2 standard format. (See - `Using Custom Datasets `_ ) - - Notes: - 1. This function does not read the image files. - The results do not have the "image" field. 
- """ - from lvis import LVIS - - json_file = PathManager.get_local_path(json_file) - - timer = Timer() - lvis_api = LVIS(json_file) - if timer.seconds() > 1: - logger.info("Loading {} takes {:.2f} seconds.".format(json_file, timer.seconds())) - - if dataset_name is not None: - meta = get_lvis_instances_meta(dataset_name) - MetadataCatalog.get(dataset_name).set(**meta) - - # sort indices for reproducible results - img_ids = sorted(lvis_api.imgs.keys()) - # imgs is a list of dicts, each looks something like: - # {'license': 4, - # 'url': 'http://farm6.staticflickr.com/5454/9413846304_881d5e5c3b_z.jpg', - # 'file_name': 'COCO_val2014_000000001268.jpg', - # 'height': 427, - # 'width': 640, - # 'date_captured': '2013-11-17 05:57:24', - # 'id': 1268} - imgs = lvis_api.load_imgs(img_ids) - # anns is a list[list[dict]], where each dict is an annotation - # record for an object. The inner list enumerates the objects in an image - # and the outer list enumerates over images. Example of anns[0]: - # [{'segmentation': [[192.81, - # 247.09, - # ... - # 219.03, - # 249.06]], - # 'area': 1035.749, - # 'image_id': 1268, - # 'bbox': [192.81, 224.8, 74.73, 33.43], - # 'category_id': 16, - # 'id': 42986}, - # ...] - anns = [lvis_api.img_ann_map[img_id] for img_id in img_ids] - - # Sanity check that each annotation has a unique id - ann_ids = [ann["id"] for anns_per_image in anns for ann in anns_per_image] - assert len(set(ann_ids)) == len(ann_ids), "Annotation ids in '{}' are not unique".format( - json_file - ) - - imgs_anns = list(zip(imgs, anns)) - - logger.info("Loaded {} images in the LVIS format from {}".format(len(imgs_anns), json_file)) - - if extra_annotation_keys: - logger.info( - "The following extra annotation keys will be loaded: {} ".format(extra_annotation_keys) - ) - else: - extra_annotation_keys = [] - - def get_file_name(img_root, img_dict): - # Determine the path including the split folder ("train2017", "val2017", "test2017") from - # the coco_url field. Example: - # 'coco_url': 'http://images.cocodataset.org/train2017/000000155379.jpg' - split_folder, file_name = img_dict["coco_url"].split("/")[-2:] - return os.path.join(img_root + split_folder, file_name) - - dataset_dicts = [] - - for (img_dict, anno_dict_list) in imgs_anns: - record = {} - record["file_name"] = get_file_name(image_root, img_dict) - record["height"] = img_dict["height"] - record["width"] = img_dict["width"] - record["not_exhaustive_category_ids"] = img_dict.get("not_exhaustive_category_ids", []) - record["neg_category_ids"] = img_dict.get("neg_category_ids", []) - image_id = record["image_id"] = img_dict["id"] - - objs = [] - for anno in anno_dict_list: - # Check that the image_id in this annotation is the same as - # the image_id we're looking at. - # This fails only when the data parsing logic or the annotation file is buggy. - assert anno["image_id"] == image_id - obj = {"bbox": anno["bbox"], "bbox_mode": BoxMode.XYWH_ABS} - # LVIS data loader can be used to load COCO dataset categories. In this case `meta` - # variable will have a field with COCO-specific category mapping. 
- if dataset_name is not None and "thing_dataset_id_to_contiguous_id" in meta: - obj["category_id"] = meta["thing_dataset_id_to_contiguous_id"][anno["category_id"]] - else: - obj["category_id"] = anno["category_id"] - 1 # Convert 1-indexed to 0-indexed - segm = anno["segmentation"] # list[list[float]] - # filter out invalid polygons (< 3 points) - valid_segm = [poly for poly in segm if len(poly) % 2 == 0 and len(poly) >= 6] - assert len(segm) == len( - valid_segm - ), "Annotation contains an invalid polygon with < 3 points" - assert len(segm) > 0 - obj["segmentation"] = segm - for extra_ann_key in extra_annotation_keys: - obj[extra_ann_key] = anno[extra_ann_key] - objs.append(obj) - record["annotations"] = objs - dataset_dicts.append(record) - - return dataset_dicts - - -def get_lvis_instances_meta(dataset_name): - """ - Load LVIS metadata. - - Args: - dataset_name (str): LVIS dataset name without the split name (e.g., "lvis_v0.5"). - - Returns: - dict: LVIS metadata with keys: thing_classes - """ - if "cocofied" in dataset_name: - return _get_coco_instances_meta() - if "v0.5" in dataset_name: - return _get_lvis_instances_meta_v0_5() - elif "v1" in dataset_name: - return _get_lvis_instances_meta_v1() - raise ValueError("No built-in metadata for dataset {}".format(dataset_name)) - - -def _get_lvis_instances_meta_v0_5(): - assert len(LVIS_V0_5_CATEGORIES) == 1230 - cat_ids = [k["id"] for k in LVIS_V0_5_CATEGORIES] - assert min(cat_ids) == 1 and max(cat_ids) == len( - cat_ids - ), "Category ids are not in [1, #categories], as expected" - # Ensure that the category list is sorted by id - lvis_categories = sorted(LVIS_V0_5_CATEGORIES, key=lambda x: x["id"]) - thing_classes = [k["synonyms"][0] for k in lvis_categories] - meta = {"thing_classes": thing_classes} - return meta - - -def _get_lvis_instances_meta_v1(): - assert len(LVIS_V1_CATEGORIES) == 1203 - cat_ids = [k["id"] for k in LVIS_V1_CATEGORIES] - assert min(cat_ids) == 1 and max(cat_ids) == len( - cat_ids - ), "Category ids are not in [1, #categories], as expected" - # Ensure that the category list is sorted by id - lvis_categories = sorted(LVIS_V1_CATEGORIES, key=lambda x: x["id"]) - thing_classes = [k["synonyms"][0] for k in lvis_categories] - meta = {"thing_classes": thing_classes, "class_image_count": LVIS_V1_CATEGORY_IMAGE_COUNT} - return meta - - -if __name__ == "__main__": - """ - Test the LVIS json dataset loader. 
- - Usage: - python -m detectron2.data.datasets.lvis \ - path/to/json path/to/image_root dataset_name vis_limit - """ - import sys - import numpy as np - from annotator.oneformer.detectron2.utils.logger import setup_logger - from PIL import Image - import annotator.oneformer.detectron2.data.datasets # noqa # add pre-defined metadata - from annotator.oneformer.detectron2.utils.visualizer import Visualizer - - logger = setup_logger(name=__name__) - meta = MetadataCatalog.get(sys.argv[3]) - - dicts = load_lvis_json(sys.argv[1], sys.argv[2], sys.argv[3]) - logger.info("Done loading {} samples.".format(len(dicts))) - - dirname = "lvis-data-vis" - os.makedirs(dirname, exist_ok=True) - for d in dicts[: int(sys.argv[4])]: - img = np.array(Image.open(d["file_name"])) - visualizer = Visualizer(img, metadata=meta) - vis = visualizer.draw_dataset_dict(d) - fpath = os.path.join(dirname, os.path.basename(d["file_name"])) - vis.save(fpath) diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/utils/comm.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/utils/comm.py deleted file mode 100644 index a9ea9a9f578c5704d1e7ff563ef156e9133ab465..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/utils/comm.py +++ /dev/null @@ -1,238 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -""" -This file contains primitives for multi-gpu communication. -This is useful when doing distributed training. -""" - -import functools -import numpy as np -import torch -import torch.distributed as dist - -_LOCAL_PROCESS_GROUP = None -_MISSING_LOCAL_PG_ERROR = ( - "Local process group is not yet created! Please use detectron2's `launch()` " - "to start processes and initialize pytorch process group. If you need to start " - "processes in other ways, please call comm.create_local_process_group(" - "num_workers_per_machine) after calling torch.distributed.init_process_group()." -) - - -def get_world_size() -> int: - if not dist.is_available(): - return 1 - if not dist.is_initialized(): - return 1 - return dist.get_world_size() - - -def get_rank() -> int: - if not dist.is_available(): - return 0 - if not dist.is_initialized(): - return 0 - return dist.get_rank() - - -@functools.lru_cache() -def create_local_process_group(num_workers_per_machine: int) -> None: - """ - Create a process group that contains ranks within the same machine. - - Detectron2's launch() in engine/launch.py will call this function. If you start - workers without launch(), you'll have to also call this. Otherwise utilities - like `get_local_rank()` will not work. - - This function contains a barrier. All processes must call it together. - - Args: - num_workers_per_machine: the number of worker processes per machine. Typically - the number of GPUs. - """ - global _LOCAL_PROCESS_GROUP - assert _LOCAL_PROCESS_GROUP is None - assert get_world_size() % num_workers_per_machine == 0 - num_machines = get_world_size() // num_workers_per_machine - machine_rank = get_rank() // num_workers_per_machine - for i in range(num_machines): - ranks_on_i = list(range(i * num_workers_per_machine, (i + 1) * num_workers_per_machine)) - pg = dist.new_group(ranks_on_i) - if i == machine_rank: - _LOCAL_PROCESS_GROUP = pg - - -def get_local_process_group(): - """ - Returns: - A torch process group which only includes processes that are on the same - machine as the current process. 
This group can be useful for communication - within a machine, e.g. a per-machine SyncBN. - """ - assert _LOCAL_PROCESS_GROUP is not None, _MISSING_LOCAL_PG_ERROR - return _LOCAL_PROCESS_GROUP - - -def get_local_rank() -> int: - """ - Returns: - The rank of the current process within the local (per-machine) process group. - """ - if not dist.is_available(): - return 0 - if not dist.is_initialized(): - return 0 - assert _LOCAL_PROCESS_GROUP is not None, _MISSING_LOCAL_PG_ERROR - return dist.get_rank(group=_LOCAL_PROCESS_GROUP) - - -def get_local_size() -> int: - """ - Returns: - The size of the per-machine process group, - i.e. the number of processes per machine. - """ - if not dist.is_available(): - return 1 - if not dist.is_initialized(): - return 1 - assert _LOCAL_PROCESS_GROUP is not None, _MISSING_LOCAL_PG_ERROR - return dist.get_world_size(group=_LOCAL_PROCESS_GROUP) - - -def is_main_process() -> bool: - return get_rank() == 0 - - -def synchronize(): - """ - Helper function to synchronize (barrier) among all processes when - using distributed training - """ - if not dist.is_available(): - return - if not dist.is_initialized(): - return - world_size = dist.get_world_size() - if world_size == 1: - return - if dist.get_backend() == dist.Backend.NCCL: - # This argument is needed to avoid warnings. - # It's valid only for NCCL backend. - dist.barrier(device_ids=[torch.cuda.current_device()]) - else: - dist.barrier() - - -@functools.lru_cache() -def _get_global_gloo_group(): - """ - Return a process group based on gloo backend, containing all the ranks - The result is cached. - """ - if dist.get_backend() == "nccl": - return dist.new_group(backend="gloo") - else: - return dist.group.WORLD - - -def all_gather(data, group=None): - """ - Run all_gather on arbitrary picklable data (not necessarily tensors). - - Args: - data: any picklable object - group: a torch process group. By default, will use a group which - contains all ranks on gloo backend. - - Returns: - list[data]: list of data gathered from each rank - """ - if get_world_size() == 1: - return [data] - if group is None: - group = _get_global_gloo_group() # use CPU group by default, to reduce GPU RAM usage. - world_size = dist.get_world_size(group) - if world_size == 1: - return [data] - - output = [None for _ in range(world_size)] - dist.all_gather_object(output, data, group=group) - return output - - -def gather(data, dst=0, group=None): - """ - Run gather on arbitrary picklable data (not necessarily tensors). - - Args: - data: any picklable object - dst (int): destination rank - group: a torch process group. By default, will use a group which - contains all ranks on gloo backend. - - Returns: - list[data]: on dst, a list of data gathered from each rank. Otherwise, - an empty list. - """ - if get_world_size() == 1: - return [data] - if group is None: - group = _get_global_gloo_group() - world_size = dist.get_world_size(group=group) - if world_size == 1: - return [data] - rank = dist.get_rank(group=group) - - if rank == dst: - output = [None for _ in range(world_size)] - dist.gather_object(data, output, dst=dst, group=group) - return output - else: - dist.gather_object(data, None, dst=dst, group=group) - return [] - - -def shared_random_seed(): - """ - Returns: - int: a random number that is the same across all workers. - If workers need a shared RNG, they can use this shared seed to - create one. - - All workers must call this function, otherwise it will deadlock. 
- """ - ints = np.random.randint(2**31) - all_ints = all_gather(ints) - return all_ints[0] - - -def reduce_dict(input_dict, average=True): - """ - Reduce the values in the dictionary from all processes so that process with rank - 0 has the reduced results. - - Args: - input_dict (dict): inputs to be reduced. All the values must be scalar CUDA Tensor. - average (bool): whether to do average or sum - - Returns: - a dict with the same keys as input_dict, after reduction. - """ - world_size = get_world_size() - if world_size < 2: - return input_dict - with torch.no_grad(): - names = [] - values = [] - # sort the keys so that they are consistent across processes - for k in sorted(input_dict.keys()): - names.append(k) - values.append(input_dict[k]) - values = torch.stack(values, dim=0) - dist.reduce(values, dst=0) - if dist.get_rank() == 0 and average: - # only main process gets accumulated, so only divide by - # world_size in this case - values /= world_size - reduced_dict = {k: v for k, v in zip(names, values)} - return reduced_dict diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmseg/models/backbones/resnest.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmseg/models/backbones/resnest.py deleted file mode 100644 index b45a837f395230029e9d4194ff9f7f2f8f7067b0..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmseg/models/backbones/resnest.py +++ /dev/null @@ -1,314 +0,0 @@ -import math - -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.utils.checkpoint as cp -from annotator.uniformer.mmcv.cnn import build_conv_layer, build_norm_layer - -from ..builder import BACKBONES -from ..utils import ResLayer -from .resnet import Bottleneck as _Bottleneck -from .resnet import ResNetV1d - - -class RSoftmax(nn.Module): - """Radix Softmax module in ``SplitAttentionConv2d``. - - Args: - radix (int): Radix of input. - groups (int): Groups of input. - """ - - def __init__(self, radix, groups): - super().__init__() - self.radix = radix - self.groups = groups - - def forward(self, x): - batch = x.size(0) - if self.radix > 1: - x = x.view(batch, self.groups, self.radix, -1).transpose(1, 2) - x = F.softmax(x, dim=1) - x = x.reshape(batch, -1) - else: - x = torch.sigmoid(x) - return x - - -class SplitAttentionConv2d(nn.Module): - """Split-Attention Conv2d in ResNeSt. - - Args: - in_channels (int): Same as nn.Conv2d. - out_channels (int): Same as nn.Conv2d. - kernel_size (int | tuple[int]): Same as nn.Conv2d. - stride (int | tuple[int]): Same as nn.Conv2d. - padding (int | tuple[int]): Same as nn.Conv2d. - dilation (int | tuple[int]): Same as nn.Conv2d. - groups (int): Same as nn.Conv2d. - radix (int): Radix of SpltAtConv2d. Default: 2 - reduction_factor (int): Reduction factor of inter_channels. Default: 4. - conv_cfg (dict): Config dict for convolution layer. Default: None, - which means using conv2d. - norm_cfg (dict): Config dict for normalization layer. Default: None. - dcn (dict): Config dict for DCN. Default: None. 
- """ - - def __init__(self, - in_channels, - channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - groups=1, - radix=2, - reduction_factor=4, - conv_cfg=None, - norm_cfg=dict(type='BN'), - dcn=None): - super(SplitAttentionConv2d, self).__init__() - inter_channels = max(in_channels * radix // reduction_factor, 32) - self.radix = radix - self.groups = groups - self.channels = channels - self.with_dcn = dcn is not None - self.dcn = dcn - fallback_on_stride = False - if self.with_dcn: - fallback_on_stride = self.dcn.pop('fallback_on_stride', False) - if self.with_dcn and not fallback_on_stride: - assert conv_cfg is None, 'conv_cfg must be None for DCN' - conv_cfg = dcn - self.conv = build_conv_layer( - conv_cfg, - in_channels, - channels * radix, - kernel_size, - stride=stride, - padding=padding, - dilation=dilation, - groups=groups * radix, - bias=False) - self.norm0_name, norm0 = build_norm_layer( - norm_cfg, channels * radix, postfix=0) - self.add_module(self.norm0_name, norm0) - self.relu = nn.ReLU(inplace=True) - self.fc1 = build_conv_layer( - None, channels, inter_channels, 1, groups=self.groups) - self.norm1_name, norm1 = build_norm_layer( - norm_cfg, inter_channels, postfix=1) - self.add_module(self.norm1_name, norm1) - self.fc2 = build_conv_layer( - None, inter_channels, channels * radix, 1, groups=self.groups) - self.rsoftmax = RSoftmax(radix, groups) - - @property - def norm0(self): - """nn.Module: the normalization layer named "norm0" """ - return getattr(self, self.norm0_name) - - @property - def norm1(self): - """nn.Module: the normalization layer named "norm1" """ - return getattr(self, self.norm1_name) - - def forward(self, x): - x = self.conv(x) - x = self.norm0(x) - x = self.relu(x) - - batch, rchannel = x.shape[:2] - batch = x.size(0) - if self.radix > 1: - splits = x.view(batch, self.radix, -1, *x.shape[2:]) - gap = splits.sum(dim=1) - else: - gap = x - gap = F.adaptive_avg_pool2d(gap, 1) - gap = self.fc1(gap) - - gap = self.norm1(gap) - gap = self.relu(gap) - - atten = self.fc2(gap) - atten = self.rsoftmax(atten).view(batch, -1, 1, 1) - - if self.radix > 1: - attens = atten.view(batch, self.radix, -1, *atten.shape[2:]) - out = torch.sum(attens * splits, dim=1) - else: - out = atten * x - return out.contiguous() - - -class Bottleneck(_Bottleneck): - """Bottleneck block for ResNeSt. - - Args: - inplane (int): Input planes of this block. - planes (int): Middle planes of this block. - groups (int): Groups of conv2. - width_per_group (int): Width per group of conv2. 64x4d indicates - ``groups=64, width_per_group=4`` and 32x8d indicates - ``groups=32, width_per_group=8``. - radix (int): Radix of SpltAtConv2d. Default: 2 - reduction_factor (int): Reduction factor of inter_channels in - SplitAttentionConv2d. Default: 4. - avg_down_stride (bool): Whether to use average pool for stride in - Bottleneck. Default: True. - kwargs (dict): Key word arguments for base class. 
- """ - expansion = 4 - - def __init__(self, - inplanes, - planes, - groups=1, - base_width=4, - base_channels=64, - radix=2, - reduction_factor=4, - avg_down_stride=True, - **kwargs): - """Bottleneck block for ResNeSt.""" - super(Bottleneck, self).__init__(inplanes, planes, **kwargs) - - if groups == 1: - width = self.planes - else: - width = math.floor(self.planes * - (base_width / base_channels)) * groups - - self.avg_down_stride = avg_down_stride and self.conv2_stride > 1 - - self.norm1_name, norm1 = build_norm_layer( - self.norm_cfg, width, postfix=1) - self.norm3_name, norm3 = build_norm_layer( - self.norm_cfg, self.planes * self.expansion, postfix=3) - - self.conv1 = build_conv_layer( - self.conv_cfg, - self.inplanes, - width, - kernel_size=1, - stride=self.conv1_stride, - bias=False) - self.add_module(self.norm1_name, norm1) - self.with_modulated_dcn = False - self.conv2 = SplitAttentionConv2d( - width, - width, - kernel_size=3, - stride=1 if self.avg_down_stride else self.conv2_stride, - padding=self.dilation, - dilation=self.dilation, - groups=groups, - radix=radix, - reduction_factor=reduction_factor, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - dcn=self.dcn) - delattr(self, self.norm2_name) - - if self.avg_down_stride: - self.avd_layer = nn.AvgPool2d(3, self.conv2_stride, padding=1) - - self.conv3 = build_conv_layer( - self.conv_cfg, - width, - self.planes * self.expansion, - kernel_size=1, - bias=False) - self.add_module(self.norm3_name, norm3) - - def forward(self, x): - - def _inner_forward(x): - identity = x - - out = self.conv1(x) - out = self.norm1(out) - out = self.relu(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv1_plugin_names) - - out = self.conv2(out) - - if self.avg_down_stride: - out = self.avd_layer(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv2_plugin_names) - - out = self.conv3(out) - out = self.norm3(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv3_plugin_names) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - out = self.relu(out) - - return out - - -@BACKBONES.register_module() -class ResNeSt(ResNetV1d): - """ResNeSt backbone. - - Args: - groups (int): Number of groups of Bottleneck. Default: 1 - base_width (int): Base width of Bottleneck. Default: 4 - radix (int): Radix of SpltAtConv2d. Default: 2 - reduction_factor (int): Reduction factor of inter_channels in - SplitAttentionConv2d. Default: 4. - avg_down_stride (bool): Whether to use average pool for stride in - Bottleneck. Default: True. - kwargs (dict): Keyword arguments for ResNet. 
- """ - - arch_settings = { - 50: (Bottleneck, (3, 4, 6, 3)), - 101: (Bottleneck, (3, 4, 23, 3)), - 152: (Bottleneck, (3, 8, 36, 3)), - 200: (Bottleneck, (3, 24, 36, 3)) - } - - def __init__(self, - groups=1, - base_width=4, - radix=2, - reduction_factor=4, - avg_down_stride=True, - **kwargs): - self.groups = groups - self.base_width = base_width - self.radix = radix - self.reduction_factor = reduction_factor - self.avg_down_stride = avg_down_stride - super(ResNeSt, self).__init__(**kwargs) - - def make_res_layer(self, **kwargs): - """Pack all blocks in a stage into a ``ResLayer``.""" - return ResLayer( - groups=self.groups, - base_width=self.base_width, - base_channels=self.base_channels, - radix=self.radix, - reduction_factor=self.reduction_factor, - avg_down_stride=self.avg_down_stride, - **kwargs) diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmseg/models/utils/res_layer.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmseg/models/utils/res_layer.py deleted file mode 100644 index b2c07b47007e92e4c3945b989e79f9d50306f5fe..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmseg/models/utils/res_layer.py +++ /dev/null @@ -1,94 +0,0 @@ -from annotator.uniformer.mmcv.cnn import build_conv_layer, build_norm_layer -from torch import nn as nn - - -class ResLayer(nn.Sequential): - """ResLayer to build ResNet style backbone. - - Args: - block (nn.Module): block used to build ResLayer. - inplanes (int): inplanes of block. - planes (int): planes of block. - num_blocks (int): number of blocks. - stride (int): stride of the first block. Default: 1 - avg_down (bool): Use AvgPool instead of stride conv when - downsampling in the bottleneck. Default: False - conv_cfg (dict): dictionary to construct and config conv layer. - Default: None - norm_cfg (dict): dictionary to construct and config norm layer. - Default: dict(type='BN') - multi_grid (int | None): Multi grid dilation rates of last - stage. 
Default: None - contract_dilation (bool): Whether contract first dilation of each layer - Default: False - """ - - def __init__(self, - block, - inplanes, - planes, - num_blocks, - stride=1, - dilation=1, - avg_down=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - multi_grid=None, - contract_dilation=False, - **kwargs): - self.block = block - - downsample = None - if stride != 1 or inplanes != planes * block.expansion: - downsample = [] - conv_stride = stride - if avg_down: - conv_stride = 1 - downsample.append( - nn.AvgPool2d( - kernel_size=stride, - stride=stride, - ceil_mode=True, - count_include_pad=False)) - downsample.extend([ - build_conv_layer( - conv_cfg, - inplanes, - planes * block.expansion, - kernel_size=1, - stride=conv_stride, - bias=False), - build_norm_layer(norm_cfg, planes * block.expansion)[1] - ]) - downsample = nn.Sequential(*downsample) - - layers = [] - if multi_grid is None: - if dilation > 1 and contract_dilation: - first_dilation = dilation // 2 - else: - first_dilation = dilation - else: - first_dilation = multi_grid[0] - layers.append( - block( - inplanes=inplanes, - planes=planes, - stride=stride, - dilation=first_dilation, - downsample=downsample, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - **kwargs)) - inplanes = planes * block.expansion - for i in range(1, num_blocks): - layers.append( - block( - inplanes=inplanes, - planes=planes, - stride=1, - dilation=dilation if multi_grid is None else multi_grid[i], - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - **kwargs)) - super(ResLayer, self).__init__(*layers) diff --git a/spaces/cozyanduofen/bingo/src/components/turn-counter.tsx b/spaces/cozyanduofen/bingo/src/components/turn-counter.tsx deleted file mode 100644 index 08a9e488f044802a8600f4d195b106567c35aab4..0000000000000000000000000000000000000000 --- a/spaces/cozyanduofen/bingo/src/components/turn-counter.tsx +++ /dev/null @@ -1,23 +0,0 @@ -import React from 'react' -import { Throttling } from '@/lib/bots/bing/types' - -export interface TurnCounterProps { - throttling?: Throttling -} - -export function TurnCounter({ throttling }: TurnCounterProps) { - if (!throttling) { - return null - } - - return ( -
-  <div>
-    <span>{throttling.numUserMessagesInConversation}</span>
-    <span> - </span>
-    <span>{throttling.maxNumUserMessagesInConversation}</span>
-  </div>
      - ) -} diff --git a/spaces/cscan/CodeFormer/CodeFormer/facelib/detection/retinaface/retinaface_utils.py b/spaces/cscan/CodeFormer/CodeFormer/facelib/detection/retinaface/retinaface_utils.py deleted file mode 100644 index 8c357757741c6d9bd7ce4d8ce740fefd51850fbf..0000000000000000000000000000000000000000 --- a/spaces/cscan/CodeFormer/CodeFormer/facelib/detection/retinaface/retinaface_utils.py +++ /dev/null @@ -1,421 +0,0 @@ -import numpy as np -import torch -import torchvision -from itertools import product as product -from math import ceil - - -class PriorBox(object): - - def __init__(self, cfg, image_size=None, phase='train'): - super(PriorBox, self).__init__() - self.min_sizes = cfg['min_sizes'] - self.steps = cfg['steps'] - self.clip = cfg['clip'] - self.image_size = image_size - self.feature_maps = [[ceil(self.image_size[0] / step), ceil(self.image_size[1] / step)] for step in self.steps] - self.name = 's' - - def forward(self): - anchors = [] - for k, f in enumerate(self.feature_maps): - min_sizes = self.min_sizes[k] - for i, j in product(range(f[0]), range(f[1])): - for min_size in min_sizes: - s_kx = min_size / self.image_size[1] - s_ky = min_size / self.image_size[0] - dense_cx = [x * self.steps[k] / self.image_size[1] for x in [j + 0.5]] - dense_cy = [y * self.steps[k] / self.image_size[0] for y in [i + 0.5]] - for cy, cx in product(dense_cy, dense_cx): - anchors += [cx, cy, s_kx, s_ky] - - # back to torch land - output = torch.Tensor(anchors).view(-1, 4) - if self.clip: - output.clamp_(max=1, min=0) - return output - - -def py_cpu_nms(dets, thresh): - """Pure Python NMS baseline.""" - keep = torchvision.ops.nms( - boxes=torch.Tensor(dets[:, :4]), - scores=torch.Tensor(dets[:, 4]), - iou_threshold=thresh, - ) - - return list(keep) - - -def point_form(boxes): - """ Convert prior_boxes to (xmin, ymin, xmax, ymax) - representation for comparison to point form ground truth data. - Args: - boxes: (tensor) center-size default boxes from priorbox layers. - Return: - boxes: (tensor) Converted xmin, ymin, xmax, ymax form of boxes. - """ - return torch.cat( - ( - boxes[:, :2] - boxes[:, 2:] / 2, # xmin, ymin - boxes[:, :2] + boxes[:, 2:] / 2), - 1) # xmax, ymax - - -def center_size(boxes): - """ Convert prior_boxes to (cx, cy, w, h) - representation for comparison to center-size form ground truth data. - Args: - boxes: (tensor) point_form boxes - Return: - boxes: (tensor) Converted xmin, ymin, xmax, ymax form of boxes. - """ - return torch.cat( - (boxes[:, 2:] + boxes[:, :2]) / 2, # cx, cy - boxes[:, 2:] - boxes[:, :2], - 1) # w, h - - -def intersect(box_a, box_b): - """ We resize both tensors to [A,B,2] without new malloc: - [A,2] -> [A,1,2] -> [A,B,2] - [B,2] -> [1,B,2] -> [A,B,2] - Then we compute the area of intersect between box_a and box_b. - Args: - box_a: (tensor) bounding boxes, Shape: [A,4]. - box_b: (tensor) bounding boxes, Shape: [B,4]. - Return: - (tensor) intersection area, Shape: [A,B]. - """ - A = box_a.size(0) - B = box_b.size(0) - max_xy = torch.min(box_a[:, 2:].unsqueeze(1).expand(A, B, 2), box_b[:, 2:].unsqueeze(0).expand(A, B, 2)) - min_xy = torch.max(box_a[:, :2].unsqueeze(1).expand(A, B, 2), box_b[:, :2].unsqueeze(0).expand(A, B, 2)) - inter = torch.clamp((max_xy - min_xy), min=0) - return inter[:, :, 0] * inter[:, :, 1] - - -def jaccard(box_a, box_b): - """Compute the jaccard overlap of two sets of boxes. The jaccard overlap - is simply the intersection over union of two boxes. Here we operate on - ground truth boxes and default boxes. 
- E.g.: - A ∩ B / A ∪ B = A ∩ B / (area(A) + area(B) - A ∩ B) - Args: - box_a: (tensor) Ground truth bounding boxes, Shape: [num_objects,4] - box_b: (tensor) Prior boxes from priorbox layers, Shape: [num_priors,4] - Return: - jaccard overlap: (tensor) Shape: [box_a.size(0), box_b.size(0)] - """ - inter = intersect(box_a, box_b) - area_a = ((box_a[:, 2] - box_a[:, 0]) * (box_a[:, 3] - box_a[:, 1])).unsqueeze(1).expand_as(inter) # [A,B] - area_b = ((box_b[:, 2] - box_b[:, 0]) * (box_b[:, 3] - box_b[:, 1])).unsqueeze(0).expand_as(inter) # [A,B] - union = area_a + area_b - inter - return inter / union # [A,B] - - -def matrix_iou(a, b): - """ - return iou of a and b, numpy version for data augenmentation - """ - lt = np.maximum(a[:, np.newaxis, :2], b[:, :2]) - rb = np.minimum(a[:, np.newaxis, 2:], b[:, 2:]) - - area_i = np.prod(rb - lt, axis=2) * (lt < rb).all(axis=2) - area_a = np.prod(a[:, 2:] - a[:, :2], axis=1) - area_b = np.prod(b[:, 2:] - b[:, :2], axis=1) - return area_i / (area_a[:, np.newaxis] + area_b - area_i) - - -def matrix_iof(a, b): - """ - return iof of a and b, numpy version for data augenmentation - """ - lt = np.maximum(a[:, np.newaxis, :2], b[:, :2]) - rb = np.minimum(a[:, np.newaxis, 2:], b[:, 2:]) - - area_i = np.prod(rb - lt, axis=2) * (lt < rb).all(axis=2) - area_a = np.prod(a[:, 2:] - a[:, :2], axis=1) - return area_i / np.maximum(area_a[:, np.newaxis], 1) - - -def match(threshold, truths, priors, variances, labels, landms, loc_t, conf_t, landm_t, idx): - """Match each prior box with the ground truth box of the highest jaccard - overlap, encode the bounding boxes, then return the matched indices - corresponding to both confidence and location preds. - Args: - threshold: (float) The overlap threshold used when matching boxes. - truths: (tensor) Ground truth boxes, Shape: [num_obj, 4]. - priors: (tensor) Prior boxes from priorbox layers, Shape: [n_priors,4]. - variances: (tensor) Variances corresponding to each prior coord, - Shape: [num_priors, 4]. - labels: (tensor) All the class labels for the image, Shape: [num_obj]. - landms: (tensor) Ground truth landms, Shape [num_obj, 10]. - loc_t: (tensor) Tensor to be filled w/ encoded location targets. - conf_t: (tensor) Tensor to be filled w/ matched indices for conf preds. - landm_t: (tensor) Tensor to be filled w/ encoded landm targets. - idx: (int) current batch index - Return: - The matched indices corresponding to 1)location 2)confidence - 3)landm preds. 
- """ - # jaccard index - overlaps = jaccard(truths, point_form(priors)) - # (Bipartite Matching) - # [1,num_objects] best prior for each ground truth - best_prior_overlap, best_prior_idx = overlaps.max(1, keepdim=True) - - # ignore hard gt - valid_gt_idx = best_prior_overlap[:, 0] >= 0.2 - best_prior_idx_filter = best_prior_idx[valid_gt_idx, :] - if best_prior_idx_filter.shape[0] <= 0: - loc_t[idx] = 0 - conf_t[idx] = 0 - return - - # [1,num_priors] best ground truth for each prior - best_truth_overlap, best_truth_idx = overlaps.max(0, keepdim=True) - best_truth_idx.squeeze_(0) - best_truth_overlap.squeeze_(0) - best_prior_idx.squeeze_(1) - best_prior_idx_filter.squeeze_(1) - best_prior_overlap.squeeze_(1) - best_truth_overlap.index_fill_(0, best_prior_idx_filter, 2) # ensure best prior - # TODO refactor: index best_prior_idx with long tensor - # ensure every gt matches with its prior of max overlap - for j in range(best_prior_idx.size(0)): # 判别此anchor是预测哪一个boxes - best_truth_idx[best_prior_idx[j]] = j - matches = truths[best_truth_idx] # Shape: [num_priors,4] 此处为每一个anchor对应的bbox取出来 - conf = labels[best_truth_idx] # Shape: [num_priors] 此处为每一个anchor对应的label取出来 - conf[best_truth_overlap < threshold] = 0 # label as background overlap<0.35的全部作为负样本 - loc = encode(matches, priors, variances) - - matches_landm = landms[best_truth_idx] - landm = encode_landm(matches_landm, priors, variances) - loc_t[idx] = loc # [num_priors,4] encoded offsets to learn - conf_t[idx] = conf # [num_priors] top class label for each prior - landm_t[idx] = landm - - -def encode(matched, priors, variances): - """Encode the variances from the priorbox layers into the ground truth boxes - we have matched (based on jaccard overlap) with the prior boxes. - Args: - matched: (tensor) Coords of ground truth for each prior in point-form - Shape: [num_priors, 4]. - priors: (tensor) Prior boxes in center-offset form - Shape: [num_priors,4]. - variances: (list[float]) Variances of priorboxes - Return: - encoded boxes (tensor), Shape: [num_priors, 4] - """ - - # dist b/t match center and prior's center - g_cxcy = (matched[:, :2] + matched[:, 2:]) / 2 - priors[:, :2] - # encode variance - g_cxcy /= (variances[0] * priors[:, 2:]) - # match wh / prior wh - g_wh = (matched[:, 2:] - matched[:, :2]) / priors[:, 2:] - g_wh = torch.log(g_wh) / variances[1] - # return target for smooth_l1_loss - return torch.cat([g_cxcy, g_wh], 1) # [num_priors,4] - - -def encode_landm(matched, priors, variances): - """Encode the variances from the priorbox layers into the ground truth boxes - we have matched (based on jaccard overlap) with the prior boxes. - Args: - matched: (tensor) Coords of ground truth for each prior in point-form - Shape: [num_priors, 10]. - priors: (tensor) Prior boxes in center-offset form - Shape: [num_priors,4]. 
- variances: (list[float]) Variances of priorboxes - Return: - encoded landm (tensor), Shape: [num_priors, 10] - """ - - # dist b/t match center and prior's center - matched = torch.reshape(matched, (matched.size(0), 5, 2)) - priors_cx = priors[:, 0].unsqueeze(1).expand(matched.size(0), 5).unsqueeze(2) - priors_cy = priors[:, 1].unsqueeze(1).expand(matched.size(0), 5).unsqueeze(2) - priors_w = priors[:, 2].unsqueeze(1).expand(matched.size(0), 5).unsqueeze(2) - priors_h = priors[:, 3].unsqueeze(1).expand(matched.size(0), 5).unsqueeze(2) - priors = torch.cat([priors_cx, priors_cy, priors_w, priors_h], dim=2) - g_cxcy = matched[:, :, :2] - priors[:, :, :2] - # encode variance - g_cxcy /= (variances[0] * priors[:, :, 2:]) - # g_cxcy /= priors[:, :, 2:] - g_cxcy = g_cxcy.reshape(g_cxcy.size(0), -1) - # return target for smooth_l1_loss - return g_cxcy - - -# Adapted from https://github.com/Hakuyume/chainer-ssd -def decode(loc, priors, variances): - """Decode locations from predictions using priors to undo - the encoding we did for offset regression at train time. - Args: - loc (tensor): location predictions for loc layers, - Shape: [num_priors,4] - priors (tensor): Prior boxes in center-offset form. - Shape: [num_priors,4]. - variances: (list[float]) Variances of priorboxes - Return: - decoded bounding box predictions - """ - - boxes = torch.cat((priors[:, :2] + loc[:, :2] * variances[0] * priors[:, 2:], - priors[:, 2:] * torch.exp(loc[:, 2:] * variances[1])), 1) - boxes[:, :2] -= boxes[:, 2:] / 2 - boxes[:, 2:] += boxes[:, :2] - return boxes - - -def decode_landm(pre, priors, variances): - """Decode landm from predictions using priors to undo - the encoding we did for offset regression at train time. - Args: - pre (tensor): landm predictions for loc layers, - Shape: [num_priors,10] - priors (tensor): Prior boxes in center-offset form. - Shape: [num_priors,4]. - variances: (list[float]) Variances of priorboxes - Return: - decoded landm predictions - """ - tmp = ( - priors[:, :2] + pre[:, :2] * variances[0] * priors[:, 2:], - priors[:, :2] + pre[:, 2:4] * variances[0] * priors[:, 2:], - priors[:, :2] + pre[:, 4:6] * variances[0] * priors[:, 2:], - priors[:, :2] + pre[:, 6:8] * variances[0] * priors[:, 2:], - priors[:, :2] + pre[:, 8:10] * variances[0] * priors[:, 2:], - ) - landms = torch.cat(tmp, dim=1) - return landms - - -def batched_decode(b_loc, priors, variances): - """Decode locations from predictions using priors to undo - the encoding we did for offset regression at train time. - Args: - b_loc (tensor): location predictions for loc layers, - Shape: [num_batches,num_priors,4] - priors (tensor): Prior boxes in center-offset form. - Shape: [1,num_priors,4]. - variances: (list[float]) Variances of priorboxes - Return: - decoded bounding box predictions - """ - boxes = ( - priors[:, :, :2] + b_loc[:, :, :2] * variances[0] * priors[:, :, 2:], - priors[:, :, 2:] * torch.exp(b_loc[:, :, 2:] * variances[1]), - ) - boxes = torch.cat(boxes, dim=2) - - boxes[:, :, :2] -= boxes[:, :, 2:] / 2 - boxes[:, :, 2:] += boxes[:, :, :2] - return boxes - - -def batched_decode_landm(pre, priors, variances): - """Decode landm from predictions using priors to undo - the encoding we did for offset regression at train time. - Args: - pre (tensor): landm predictions for loc layers, - Shape: [num_batches,num_priors,10] - priors (tensor): Prior boxes in center-offset form. - Shape: [1,num_priors,4]. 
- variances: (list[float]) Variances of priorboxes - Return: - decoded landm predictions - """ - landms = ( - priors[:, :, :2] + pre[:, :, :2] * variances[0] * priors[:, :, 2:], - priors[:, :, :2] + pre[:, :, 2:4] * variances[0] * priors[:, :, 2:], - priors[:, :, :2] + pre[:, :, 4:6] * variances[0] * priors[:, :, 2:], - priors[:, :, :2] + pre[:, :, 6:8] * variances[0] * priors[:, :, 2:], - priors[:, :, :2] + pre[:, :, 8:10] * variances[0] * priors[:, :, 2:], - ) - landms = torch.cat(landms, dim=2) - return landms - - -def log_sum_exp(x): - """Utility function for computing log_sum_exp while determining - This will be used to determine unaveraged confidence loss across - all examples in a batch. - Args: - x (Variable(tensor)): conf_preds from conf layers - """ - x_max = x.data.max() - return torch.log(torch.sum(torch.exp(x - x_max), 1, keepdim=True)) + x_max - - -# Original author: Francisco Massa: -# https://github.com/fmassa/object-detection.torch -# Ported to PyTorch by Max deGroot (02/01/2017) -def nms(boxes, scores, overlap=0.5, top_k=200): - """Apply non-maximum suppression at test time to avoid detecting too many - overlapping bounding boxes for a given object. - Args: - boxes: (tensor) The location preds for the img, Shape: [num_priors,4]. - scores: (tensor) The class predscores for the img, Shape:[num_priors]. - overlap: (float) The overlap thresh for suppressing unnecessary boxes. - top_k: (int) The Maximum number of box preds to consider. - Return: - The indices of the kept boxes with respect to num_priors. - """ - - keep = torch.Tensor(scores.size(0)).fill_(0).long() - if boxes.numel() == 0: - return keep - x1 = boxes[:, 0] - y1 = boxes[:, 1] - x2 = boxes[:, 2] - y2 = boxes[:, 3] - area = torch.mul(x2 - x1, y2 - y1) - v, idx = scores.sort(0) # sort in ascending order - # I = I[v >= 0.01] - idx = idx[-top_k:] # indices of the top-k largest vals - xx1 = boxes.new() - yy1 = boxes.new() - xx2 = boxes.new() - yy2 = boxes.new() - w = boxes.new() - h = boxes.new() - - # keep = torch.Tensor() - count = 0 - while idx.numel() > 0: - i = idx[-1] # index of current largest val - # keep.append(i) - keep[count] = i - count += 1 - if idx.size(0) == 1: - break - idx = idx[:-1] # remove kept element from view - # load bboxes of next highest vals - torch.index_select(x1, 0, idx, out=xx1) - torch.index_select(y1, 0, idx, out=yy1) - torch.index_select(x2, 0, idx, out=xx2) - torch.index_select(y2, 0, idx, out=yy2) - # store element-wise max with next highest score - xx1 = torch.clamp(xx1, min=x1[i]) - yy1 = torch.clamp(yy1, min=y1[i]) - xx2 = torch.clamp(xx2, max=x2[i]) - yy2 = torch.clamp(yy2, max=y2[i]) - w.resize_as_(xx2) - h.resize_as_(yy2) - w = xx2 - xx1 - h = yy2 - yy1 - # check sizes of xx1 and xx2.. 
after each iteration - w = torch.clamp(w, min=0.0) - h = torch.clamp(h, min=0.0) - inter = w * h - # IoU = i / (area(a) + area(b) - i) - rem_areas = torch.index_select(area, 0, idx) # load remaining areas) - union = (rem_areas - inter) + area[i] - IoU = inter / union # store result in iou - # keep only elements with an IoU <= overlap - idx = idx[IoU.le(overlap)] - return keep, count diff --git a/spaces/cvlab/zero123-live/CLIP/clip/__init__.py b/spaces/cvlab/zero123-live/CLIP/clip/__init__.py deleted file mode 100644 index dcc5619538c0f7c782508bdbd9587259d805e0d9..0000000000000000000000000000000000000000 --- a/spaces/cvlab/zero123-live/CLIP/clip/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .clip import * diff --git a/spaces/dajuzi/img-to-music/constants.py b/spaces/dajuzi/img-to-music/constants.py deleted file mode 100644 index 86863d1b778d4c66f0d8e1e0b699f1bb937c1d50..0000000000000000000000000000000000000000 --- a/spaces/dajuzi/img-to-music/constants.py +++ /dev/null @@ -1,9 +0,0 @@ -import numpy as np -import os - -MUBERT_LICENSE = os.environ.get('MUBERT_LICENSE') -MUBERT_TOKEN = os.environ.get('MUBERT_TOKEN') - -MUBERT_MODE = "loop" -MUBERT_TAGS_STRING = 'tribal,action,kids,neo-classic,run 130,pumped,jazz / funk,ethnic,dubtechno,reggae,acid jazz,liquidfunk,funk,witch house,tech house,underground,artists,mystical,disco,sensorium,r&b,agender,psychedelic trance / psytrance,peaceful,run 140,piano,run 160,setting,meditation,christmas,ambient,horror,cinematic,electro house,idm,bass,minimal,underscore,drums,glitchy,beautiful,technology,tribal house,country pop,jazz & funk,documentary,space,classical,valentines,chillstep,experimental,trap,new jack swing,drama,post-rock,tense,corporate,neutral,happy,analog,funky,spiritual,sberzvuk special,chill hop,dramatic,catchy,holidays,fitness 90,optimistic,orchestra,acid techno,energizing,romantic,minimal house,breaks,hyper pop,warm up,dreamy,dark,urban,microfunk,dub,nu disco,vogue,keys,hardcore,aggressive,indie,electro funk,beauty,relaxing,trance,pop,hiphop,soft,acoustic,chillrave / ethno-house,deep techno,angry,dance,fun,dubstep,tropical,latin pop,heroic,world music,inspirational,uplifting,atmosphere,art,epic,advertising,chillout,scary,spooky,slow ballad,saxophone,summer,erotic,jazzy,energy 100,kara mar,xmas,atmospheric,indie pop,hip-hop,yoga,reggaeton,lounge,travel,running,folk,chillrave & ethno-house,detective,darkambient,chill,fantasy,minimal techno,special,night,tropical house,downtempo,lullaby,meditative,upbeat,glitch hop,fitness,neurofunk,sexual,indie rock,future pop,jazz,cyberpunk,melancholic,happy hardcore,family / kids,synths,electric guitar,comedy,psychedelic trance & psytrance,edm,psychedelic rock,calm,zen,bells,podcast,melodic house,ethnic percussion,nature,heavy,bassline,indie dance,techno,drumnbass,synth pop,vaporwave,sad,8-bit,chillgressive,deep,orchestral,futuristic,hardtechno,nostalgic,big room,sci-fi,tutorial,joyful,pads,minimal 170,drill,ethnic 108,amusing,sleepy ambient,psychill,italo disco,lofi,house,acoustic guitar,bassline house,rock,k-pop,synthwave,deep house,electronica,gabber,nightlife,sport & fitness,road trip,celebration,electro,disco house,electronic' -MUBERT_TAGS = np.array(MUBERT_TAGS_STRING.split(',')) \ No newline at end of file diff --git a/spaces/dakaiye/dky_xuexi/README.md b/spaces/dakaiye/dky_xuexi/README.md deleted file mode 100644 index 449f6da36139b85721a650204375592f102b5c03..0000000000000000000000000000000000000000 --- a/spaces/dakaiye/dky_xuexi/README.md +++ /dev/null @@ -1,343 +0,0 @@ ---- -title: 
ChatImprovement
-emoji: 😻
-colorFrom: blue
-colorTo: blue
-sdk: gradio
-sdk_version: 3.32.0
-app_file: app.py
-pinned: false
-duplicated_from: qingxu98/gpt-academic
----
-
-# ChatGPT Academic Optimization
-> **Note**
->
-> On May 27 the gradio dependency received major fixes and adjustments: official Gradio was forked and a series of its bugs resolved. If you updated on that very day, the code may error out (missing dependencies, getting stuck on the loading screen, etc.); please update to the **latest code** and reinstall the pip dependencies. Apologies for any inconvenience. When installing dependencies, strictly use the versions **pinned** in requirements.txt:
->
-> `pip install -r requirements.txt -i https://pypi.org/simple`
->
-
-# GPT Academic Optimization (GPT Academic)
-
-**If you like this project, please give it a Star; if you have invented handier shortcut keys or function plugins, pull requests are welcome**
-
-If you like this project, please give it a Star. If you've come up with more useful academic shortcuts or functional plugins, feel free to open an issue or pull request. We also have a README in [English|](docs/README_EN.md)[日本語|](docs/README_JP.md)[한국어|](https://github.com/mldljyh/ko_gpt_academic)[Русский|](docs/README_RS.md)[Français](docs/README_FR.md) translated by this project itself.
-To translate this project to an arbitrary language with GPT, read and run [`multi_language.py`](multi_language.py) (experimental).
-
-> **Note**
->
-> 1. Note that only the function plugins (buttons) marked in **red** support reading files, and some plugins live in the **dropdown menu** of the plugin area. In addition, we welcome and handle PRs for any new plugin with **top priority**!
->
-> 2. The function of every file in this project is documented in detail in the self-analysis report [`self_analysis.md`](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A). As versions iterate, you can also click the relevant function plugin at any time to call GPT and regenerate the project's self-analysis report. Frequently asked questions are collected in the [`wiki`](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98). [Installation](#installation).
->
-> 3. This project is compatible with, and encourages experimenting with, domestic large language models such as chatglm, RWKV, PanGu, etc. Multiple api-keys can coexist; in the config file write e.g. `API_KEY="openai-key1,openai-key2,api2d-key3"`. To swap the `API_KEY` temporarily, enter the temporary `API_KEY` in the input area and press Enter to submit; it takes effect immediately.
-
-
-
-
-Feature | Description
--- | ---
-One-click polishing | One-click polishing and one-click grammar checking for papers
-One-click Chinese-English translation | One-click translation between Chinese and English
-One-click code explanation | Show, explain, generate, and comment code
-[Custom shortcut keys](https://www.bilibili.com/video/BV14s4y1E7jN) | Custom shortcut keys supported
-Modular design | Powerful custom [function plugins](https://github.com/binary-husky/chatgpt_academic/tree/master/crazy_functions) supported, with [hot reload](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
-[Self-analysis](https://www.bilibili.com/video/BV1cj411A7VW) | [Function plugin] [One-click report](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) explaining this project's own source code
-[Project analysis](https://www.bilibili.com/video/BV1cj411A7VW) | [Function plugin] One-click analysis of other Python/C/C++/Java/Lua/... project trees
-Read papers, [translate](https://www.bilibili.com/video/BV1KT411x7Wn) papers | [Function plugin] One-click interpretation of a full latex/pdf paper with summary generation
-Full-text Latex [translation](https://www.bilibili.com/video/BV1nk4y1Y7Js/) and [polishing](https://www.bilibili.com/video/BV1FT411H7c5/) | [Function plugin] One-click translation or polishing of latex papers
-Batch comment generation | [Function plugin] One-click batch generation of function comments
-Markdown [Chinese-English translation](https://www.bilibili.com/video/BV1yo4y157jV/) | [Function plugin] Have you seen the [README](https://github.com/binary-husky/chatgpt_academic/blob/master/docs/README_EN.md) in the 5 languages above?
-Chat analysis report generation | [Function plugin] Automatically generates a summary report after running
-[Full-text PDF paper translation](https://www.bilibili.com/video/BV1KT411x7Wn) | [Function plugin] Extracts the title & abstract of a PDF paper and translates the full text (multi-threaded)
-[Arxiv assistant](https://www.bilibili.com/video/BV1LM4y1279X) | [Function plugin] Enter an arxiv article url to translate the abstract and download the PDF in one click
-[Google Scholar integration assistant](https://www.bilibili.com/video/BV19L411U7ia) | [Function plugin] Given any Google Scholar search page URL, let gpt [write related works](https://www.bilibili.com/video/BV1GP411U7Az/) for you
-Internet information aggregation + GPT | [Function plugin] One click to [let GPT fetch information from the internet first](https://www.bilibili.com/video/BV1om4y127ck) and then answer, so information never goes stale
-Formula/image/table display | Shows formulas in both [tex and rendered form](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png) at the same time, with formula and code highlighting
-Multi-threaded function plugin support | Multi-threaded calls to chatgpt, one-click processing of [huge amounts of text](https://www.bilibili.com/video/BV1FT411H7c5/) or programs
-Dark gradio [theme](https://github.com/binary-husky/chatgpt_academic/issues/173) at startup | Append ```/?__theme=dark``` to the browser url to switch to the dark theme
-[Multiple LLM models](https://www.bilibili.com/video/BV1wT411p7yf) supported, [API2D](https://api2d.com/) interface supported | Being served by GPT3.5, GPT4, [Tsinghua ChatGLM](https://github.com/THUDM/ChatGLM-6B) and [Fudan MOSS](https://github.com/OpenLMLab/MOSS) all at once must feel great, right?
-More LLM model integrations, [huggingface deployment](https://huggingface.co/spaces/qingxu98/gpt-academic) supported | Added the Newbing interface (New Bing), introduced Tsinghua [Jittorllms](https://github.com/Jittor/JittorLLMs) with support for [LLaMA](https://github.com/facebookresearch/llama), [RWKV](https://github.com/BlinkDL/ChatRWKV) and [PanGu-α](https://openi.org.cn/pangu/)
-More new feature demos (image generation, etc.) …… | See the end of this document ……
-
-
-

- New UI (edit the LAYOUT option in `config.py` to switch between the "left-right layout" and the "top-down layout")
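As a hedged illustration of the switch described above (the exact option values should be verified against the real `config.py`), the toggle might look like:

```python
# In config.py -- assumed values for illustration; check the real file before editing.
LAYOUT = "LEFT-RIGHT"   # side-by-side layout
# LAYOUT = "TOP-DOWN"   # stacked layout
```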
      - -
-

- All buttons are generated dynamically by reading functional.py, so custom functions can be added freely, liberating the clipboard
      - -
-
- Polishing / error correction
      - -
-
- If the output contains formulas, they are shown in both tex form and rendered form at the same time, for easy copying and reading
      - -
-
- Too lazy to read the project code? Just feed the entire project straight to chatgpt
      - -
-
- Mixed calls to multiple large language models (ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4)
      - -
-
---
# Installation
## Installation - Method 1: Run directly (Windows, Linux or MacOS)

1. Download the project
```sh
git clone https://github.com/binary-husky/chatgpt_academic.git
cd chatgpt_academic
```

2. Configure API_KEY

In `config.py`, configure the API KEY and other settings ([settings for special network environments](https://github.com/binary-husky/gpt_academic/issues/1)).

(P.S. At runtime the program first checks for a private configuration file named `config_private.py` and uses its settings to override the same-named settings in `config.py`. If you understand this reading logic, we strongly recommend creating a new configuration file named `config_private.py` next to `config.py` and moving (copying) the settings from `config.py` into `config_private.py`. `config_private.py` is not tracked by git, which keeps your private information safer. P.S. The project also supports configuring most options through `environment variables`; see the `docker-compose` file for the environment variable format. Reading priority: `environment variables` > `config_private.py` > `config.py`; a sketch of this priority follows below.)


3. Install dependencies
```sh
# (Option I: if familiar with python) (python 3.9 or above; the newer the better). Note: use the official pip source or the Aliyun pip source; to switch sources temporarily: python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
python -m pip install -r requirements.txt

# (Option II: if unfamiliar with python) use anaconda; the steps are similar (https://www.bilibili.com/video/BV1rc411W7Dr):
conda create -n gptac_venv python=3.11  # create the anaconda environment
conda activate gptac_venv  # activate the anaconda environment
python -m pip install -r requirements.txt  # same step as the pip installation
```
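As a simplified sketch of the read priority described in step 2 (`environment variables` > `config_private.py` > `config.py`) -- the project's real loader is more involved (it also type-converts environment-variable strings), so treat this as illustrative only:

```python
# Sketch: resolve one option, preferring env vars, then the private config, then defaults.
import importlib
import os

def read_single_conf(name: str):
    # Highest priority: environment variable (note: always a string here).
    if name in os.environ:
        return os.environ[name]
    # Next: the untracked private config file, if present.
    try:
        private = importlib.import_module("config_private")
        if hasattr(private, name):
            return getattr(private, name)
    except ImportError:
        pass
    # Fallback: the default config.py.
    return getattr(importlib.import_module("config"), name)
```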
Click to expand here if you need Tsinghua ChatGLM / Fudan MOSS as a backend

-
[Optional step] If you need Tsinghua ChatGLM / Fudan MOSS as a backend, additional dependencies must be installed (prerequisites: familiar with Python + have used Pytorch + a sufficiently powerful machine):
```sh
# [Optional step I] Support Tsinghua ChatGLM. Note: if you hit the error "Call ChatGLM fail 不能正常加载ChatGLM的参数", refer to the following: 1: the default installation above is the torch+cpu build; to use cuda, uninstall torch and reinstall torch+cuda; 2: if the model cannot be loaded because the machine is not powerful enough, you can change the model precision in request_llm/bridge_chatglm.py, replacing every AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) with AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
python -m pip install -r request_llm/requirements_chatglm.txt

# [Optional step II] Support Fudan MOSS
python -m pip install -r request_llm/requirements_moss.txt
git clone https://github.com/OpenLMLab/MOSS.git request_llm/moss # note: when running this line you must be at the project root

# [Optional step III] Make sure AVAIL_LLM_MODELS in the config.py configuration file contains the expected models; all currently supported models are as follows (the jittorllms series currently only supports the docker scheme):
AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "newbing", "moss"] # + ["jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"]
```
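As a sketch of the precision swap described in optional step I (only the loading lines are shown; the real `request_llm/bridge_chatglm.py` contains more logic around them):

```python
# Swap the full-precision ChatGLM checkpoint for the int4-quantized one on weaker machines.
from transformers import AutoModel, AutoTokenizer

# Full-precision variant (needs more RAM/VRAM):
# tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
# model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)

# int4-quantized variant, as suggested in the note above:
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
```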

      -
-

4. Run
```sh
python main.py
```

5. Test the function plugins
```
- Test the function plugin template function (asks gpt what happened in history on this day); you can use this function as a template to implement more complex features
    Click "[函数插件模板Demo] 历史上的今天" (function plugin template demo: today in history)
```

## Installation - Method 2: Use Docker

1. ChatGPT only (recommended for most people)

``` sh
git clone https://github.com/binary-husky/chatgpt_academic.git # download the project
cd chatgpt_academic # enter the path
nano config.py # edit config.py with any text editor to configure "Proxy", "API_KEY", "WEB_PORT" (e.g. 50923), etc.
docker build -t gpt-academic . # build

#(Last step - option 1) Under Linux, `--net=host` is more convenient
docker run --rm -it --net=host gpt-academic
#(Last step - option 2) Under macOS/windows, only the -p option can expose a container port (e.g. 50923) to the host
docker run --rm -it -e WEB_PORT=50923 -p 50923:50923 gpt-academic
```

2. ChatGPT + ChatGLM + MOSS (requires familiarity with Docker)

``` sh
# Edit docker-compose.yml: delete schemes 1 and 3, keep scheme 2. Adjust the scheme 2 configuration in docker-compose.yml following the comments there
docker-compose up
```

3. ChatGPT + LLAMA + PanGu + RWKV (requires familiarity with Docker)
``` sh
# Edit docker-compose.yml: delete schemes 1 and 2, keep scheme 3. Adjust the scheme 3 configuration in docker-compose.yml following the comments there
docker-compose up
```


## Installation - Method 3: Other deployment options

1. How to use a reverse-proxy URL / Microsoft Azure API
Configure API_URL_REDIRECT according to the instructions in `config.py`.

2. Deployment on a remote cloud server (requires cloud-server knowledge and experience)
Please visit [deployment wiki-1](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)

3. Using WSL2 (Windows Subsystem for Linux)
Please visit [deployment wiki-2](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)

4. How to run under a sub-path (such as `http://localhost/subpath`)
Please visit the [FastAPI run notes](docs/WithFastapi.md)

5. Running with docker-compose
Read docker-compose.yml and follow the hints there
---
# Advanced Usage
## Custom convenience buttons / custom function plugins

1. Custom convenience buttons (academic shortcut keys)
Open `core_functional.py` with any text editor, add an entry like the following, and restart the program. (If the button has already been added successfully and is visible, the prefix and suffix both support hot modification and take effect without restarting. See also the sketch after this example.)
For example
```
"超级英译中": {
    # Prefix: prepended to your input, e.g. to describe your request, such as translation, code explanation, polishing, etc.
    "Prefix": "请翻译把下面一段内容成中文,然后用一个markdown表格逐一解释文中出现的专有名词:\n\n",

    # Suffix: appended to your input, e.g. combined with the prefix to wrap your input in quotes.
    "Suffix": "",
},
```
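For clarity, here is a minimal sketch of how a button's Prefix/Suffix wrap the user input before it is sent to the model. This is illustrative only; the project's actual dispatch code does more than this:

```python
# Sketch: compose the final prompt from a core_functional.py entry and the user input.
def apply_core_function(entry: dict, user_input: str) -> str:
    return entry.get("Prefix", "") + user_input + entry.get("Suffix", "")

# e.g. apply_core_function(entries["超级英译中"], "some text to process")
```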
      - -
-

2. Custom function plugins

Write powerful function plugins to perform any task you can and cannot imagine.
Writing and debugging plugins for this project has a low barrier: as long as you have basic python knowledge, you can implement your own plugin by following the template we provide.
For details, see the [function plugin guide](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97).

---
# Latest Update
## New features

1. Conversation saving. Call `保存当前的对话` (save the current conversation) in the function plugin area to save the current conversation as a readable and restorable html file;
call `载入对话历史存档` (load conversation history archive) in the function plugin area (drop-down menu) to restore a previous session.
Tip: clicking `载入对话历史存档` directly without specifying a file shows the cached html archives, and clicking `删除所有本地对话历史记录` (delete all local conversation history) deletes all html archive caches.
      - -
-

2. Report generation. Most plugins generate a work report after they finish running
      - - - -
-
3. Modular feature design: simple interfaces that nonetheless support powerful functionality
      - - -
-
4. This is an open-source project that can "translate and interpret itself"
      - -
-
5. Interpreting other open-source projects is no problem either
      - -
      - -
      - -
-
6. A small feature that decorates the UI with [live2d](https://github.com/fghrsh/live2d_demo) (off by default; requires editing `config.py`)
      - -
-
7. Added support for the MOSS large language model
      - -
-
8. OpenAI image generation
      - -
-
9. OpenAI audio parsing and summarization
      - -
-
10. Latex full-text proofreading and error correction
      - -
-

## Versions:
- version 3.5 (Todo): call all of this project's function plugins with natural language (high priority)
- version 3.4 (Todo): improve multithreading support for the chatglm local large model
- version 3.3: + internet information synthesis feature
- version 3.2: function plugins support more parameter interfaces (conversation saving, interpreting code in any language + asking any combination of LLMs at the same time)
- version 3.1: support querying multiple gpt models simultaneously! Support api2d, support load balancing across multiple apikeys
- version 3.0: support for chatglm and other small llms
- version 2.6: refactored the plugin structure, improved interactivity, added more plugins
- version 2.5: self-updating; fixed overly long text and token overflow when summarizing large project source code
- version 2.4: (1) added PDF full-text translation; (2) added the ability to switch the input area's position; (3) added a vertical layout option; (4) optimized multithreaded function plugins.
- version 2.3: enhanced multithreaded interactivity
- version 2.2: function plugins support hot reloading
- version 2.1: collapsible layout
- version 2.0: introduced modular function plugins
- version 1.0: basic features

gpt_academic developer QQ group-2: 610599535

- Known issues
    - Some browser translation plugins interfere with the frontend of this software
    - Official Gradio currently has many compatibility bugs; be sure to install Gradio using requirements.txt

## References and learning

```
The code references designs from many other excellent projects, mainly including:

# Project 1: Tsinghua ChatGLM-6B:
https://github.com/THUDM/ChatGLM-6B

# Project 2: Tsinghua JittorLLMs:
https://github.com/Jittor/JittorLLMs

# Project 3: Edge-GPT:
https://github.com/acheong08/EdgeGPT

# Project 4: ChuanhuChatGPT:
https://github.com/GaiZhenbiao/ChuanhuChatGPT

# Project 5: ChatPaper:
https://github.com/kaixindelele/ChatPaper

# More:
https://github.com/gradio-app/gradio
https://github.com/fghrsh/live2d_demo
```
diff --git a/spaces/daspartho/predict-subreddit/README.md b/spaces/daspartho/predict-subreddit/README.md
deleted file mode 100644
index 837b0dd32f0f84b008ef5bba2d160d7a735f97b3..0000000000000000000000000000000000000000
--- a/spaces/daspartho/predict-subreddit/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
---
title: Predict Subreddit
emoji: 📈
colorFrom: green
colorTo: pink
sdk: gradio
sdk_version: 3.3.1
app_file: app.py
pinned: false
license: apache-2.0
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/daveckw/custom-chatgpt/sample_documents/README.md b/spaces/daveckw/custom-chatgpt/sample_documents/README.md
deleted file mode 100644
index 41860ee5c993187256ef3266a9deb3e032528a0d..0000000000000000000000000000000000000000
--- a/spaces/daveckw/custom-chatgpt/sample_documents/README.md
+++ /dev/null
@@ -1,53 +0,0 @@
# Custom-Knowledge-Based Querying with ChatGPT

This project is a Python backend service that runs on Google Cloud Functions. It utilizes ChatGPT to provide custom knowledge-based querying by reading an `index.json` file from Firebase Storage and using it as a Llama index for generating responses.

## Features

- Reads the `index.json` file from Firebase Storage and uses it as a Llama index
- Provides custom knowledge-based querying powered by ChatGPT
- Accepts `input_text` as an API parameter and generates a relevant response along with the source of the information

## Installation & Deployment

This project is designed to be deployed as a Google Cloud Function. Follow the official [Google Cloud Functions documentation](https://cloud.google.com/functions/docs) to set up and deploy the function.

Deploy using the script below on the command line:

`gcloud functions deploy hello_world --runtime python310 --trigger-http --allow-unauthenticated --entry-point hello_world --source . --set-env-vars OPENAI_API_KEY=YOUR_SECRET_KEY --memory 1024MB`

Change YOUR_SECRET_KEY to your own OpenAI secret key.

### Prerequisites

- A Google Cloud account with billing enabled
- Firebase Storage configured with an `index.json` file
- Google Cloud SDK installed on your local machine
- OpenAI API key

### Setup

1. Clone this repository.
2. Install the required dependencies with `pip install -r requirements.txt`.
3. 
Set the `OPENAI_API_KEY` environment variable in your Google Cloud Function configuration. -4. Configure the `storageBucket` option in `firebase_admin.initialize_app()` to match your Firebase Storage bucket. -5. Deploy the function to Google Cloud Functions following their documentation. - -## Usage - -Make a GET request to the deployed Cloud Function with an `input_text` query parameter. The response will be a JSON object containing a generated response and the source of the information. - -## http - -GET https://<your-cloud-function-url>/hello_world?input_text=<your-query-text> - -## Contributing - -Fork the project. -Create a new branch (git checkout -b feature_branch). -Commit your changes (git commit -am 'Add a new feature'). -Push to the branch (git push origin feature_branch). -Create a new Pull Request. - -## License -This project is licensed under the MIT License. diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/markdown_it/presets/commonmark.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/markdown_it/presets/commonmark.py deleted file mode 100644 index 3990d4344aeb9e07449acf8aa749cb27b0a0e66c..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/markdown_it/presets/commonmark.py +++ /dev/null @@ -1,74 +0,0 @@ -"""Commonmark default options. - -This differs to presets.default, -primarily in that it allows HTML and does not enable components: - -- block: table -- inline: strikethrough -""" -from ..utils import PresetType - - -def make() -> PresetType: - return { - "options": { - "maxNesting": 20, # Internal protection, recursion limit - "html": True, # Enable HTML tags in source, - # this is just a shorthand for .enable(["html_inline", "html_block"]) - # used by the linkify rule: - "linkify": False, # autoconvert URL-like texts to links - # used by the replacements and smartquotes rules - # Enable some language-neutral replacements + quotes beautification - "typographer": False, - # used by the smartquotes rule: - # Double + single quotes replacement pairs, when typographer enabled, - # and smartquotes on. Could be either a String or an Array. - # - # For example, you can use '«»„“' for Russian, '„“‚‘' for German, - # and ['«\xA0', '\xA0»', '‹\xA0', '\xA0›'] for French (including nbsp). - "quotes": "\u201c\u201d\u2018\u2019", # /* “”‘’ */ - # Renderer specific; these options are used directly in the HTML renderer - "xhtmlOut": True, # Use '/' to close single tags (
      ) - "breaks": False, # Convert '\n' in paragraphs into
      - "langPrefix": "language-", # CSS language prefix for fenced blocks - # Highlighter function. Should return escaped HTML, - # or '' if the source string is not changed and should be escaped externally. - # If result starts with cross-attention - else: - n = 3 # resnet -> self-attention -> cross-attention) - - for resnet_idx, resnet in enumerate(block.resnets): - # diffusers_resnet_prefix = f"{diffusers_up_block_prefix}.resnets.{resnet_idx}" - diffusers_resnet_prefix = f"{block_type}_blocks.{block_idx}.resnets.{resnet_idx}" - idx = n * resnet_idx if block_type == "up" else n * resnet_idx + 1 - resnet_prefix = f"{block_prefix}.{idx}" if block_type == "up" else f"{block_prefix}.{idx}" - - diffusers_checkpoint.update( - resnet_to_diffusers_checkpoint( - resnet, checkpoint, diffusers_resnet_prefix=diffusers_resnet_prefix, resnet_prefix=resnet_prefix - ) - ) - - if hasattr(block, "attentions"): - for attention_idx, attention in enumerate(block.attentions): - diffusers_attention_prefix = f"{block_type}_blocks.{block_idx}.attentions.{attention_idx}" - idx = n * attention_idx + 1 if block_type == "up" else n * attention_idx + 2 - self_attention_prefix = f"{block_prefix}.{idx}" - cross_attention_prefix = f"{block_prefix}.{idx }" - cross_attention_index = 1 if not attention.add_self_attention else 2 - idx = ( - n * attention_idx + cross_attention_index - if block_type == "up" - else n * attention_idx + cross_attention_index + 1 - ) - cross_attention_prefix = f"{block_prefix}.{idx }" - - diffusers_checkpoint.update( - cross_attn_to_diffusers_checkpoint( - checkpoint, - diffusers_attention_prefix=diffusers_attention_prefix, - diffusers_attention_index=2, - attention_prefix=cross_attention_prefix, - ) - ) - - if attention.add_self_attention is True: - diffusers_checkpoint.update( - self_attn_to_diffusers_checkpoint( - checkpoint, - diffusers_attention_prefix=diffusers_attention_prefix, - attention_prefix=self_attention_prefix, - ) - ) - - return diffusers_checkpoint - - -def unet_to_diffusers_checkpoint(model, checkpoint): - diffusers_checkpoint = {} - - # pre-processing - diffusers_checkpoint.update( - { - "conv_in.weight": checkpoint["inner_model.proj_in.weight"], - "conv_in.bias": checkpoint["inner_model.proj_in.bias"], - } - ) - - # timestep and class embedding - diffusers_checkpoint.update( - { - "time_proj.weight": checkpoint["inner_model.timestep_embed.weight"].squeeze(-1), - "time_embedding.linear_1.weight": checkpoint["inner_model.mapping.0.weight"], - "time_embedding.linear_1.bias": checkpoint["inner_model.mapping.0.bias"], - "time_embedding.linear_2.weight": checkpoint["inner_model.mapping.2.weight"], - "time_embedding.linear_2.bias": checkpoint["inner_model.mapping.2.bias"], - "time_embedding.cond_proj.weight": checkpoint["inner_model.mapping_cond.weight"], - } - ) - - # down_blocks - for down_block_idx, down_block in enumerate(model.down_blocks): - diffusers_checkpoint.update(block_to_diffusers_checkpoint(down_block, checkpoint, down_block_idx, "down")) - - # up_blocks - for up_block_idx, up_block in enumerate(model.up_blocks): - diffusers_checkpoint.update(block_to_diffusers_checkpoint(up_block, checkpoint, up_block_idx, "up")) - - # post-processing - diffusers_checkpoint.update( - { - "conv_out.weight": checkpoint["inner_model.proj_out.weight"], - "conv_out.bias": checkpoint["inner_model.proj_out.bias"], - } - ) - - return diffusers_checkpoint - - -def unet_model_from_original_config(original_config): - in_channels = original_config["input_channels"] + 
original_config["unet_cond_dim"] - out_channels = original_config["input_channels"] + (1 if original_config["has_variance"] else 0) - - block_out_channels = original_config["channels"] - - assert ( - len(set(original_config["depths"])) == 1 - ), "UNet2DConditionModel currently do not support blocks with different number of layers" - layers_per_block = original_config["depths"][0] - - class_labels_dim = original_config["mapping_cond_dim"] - cross_attention_dim = original_config["cross_cond_dim"] - - attn1_types = [] - attn2_types = [] - for s, c in zip(original_config["self_attn_depths"], original_config["cross_attn_depths"]): - if s: - a1 = "self" - a2 = "cross" if c else None - elif c: - a1 = "cross" - a2 = None - else: - a1 = None - a2 = None - attn1_types.append(a1) - attn2_types.append(a2) - - unet = UNet2DConditionModel( - in_channels=in_channels, - out_channels=out_channels, - down_block_types=("KDownBlock2D", "KCrossAttnDownBlock2D", "KCrossAttnDownBlock2D", "KCrossAttnDownBlock2D"), - mid_block_type=None, - up_block_types=("KCrossAttnUpBlock2D", "KCrossAttnUpBlock2D", "KCrossAttnUpBlock2D", "KUpBlock2D"), - block_out_channels=block_out_channels, - layers_per_block=layers_per_block, - act_fn="gelu", - norm_num_groups=None, - cross_attention_dim=cross_attention_dim, - attention_head_dim=64, - time_cond_proj_dim=class_labels_dim, - resnet_time_scale_shift="scale_shift", - time_embedding_type="fourier", - timestep_post_act="gelu", - conv_in_kernel=1, - conv_out_kernel=1, - ) - - return unet - - -def main(args): - device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - - orig_config_path = huggingface_hub.hf_hub_download(UPSCALER_REPO, "config_laion_text_cond_latent_upscaler_2.json") - orig_weights_path = huggingface_hub.hf_hub_download( - UPSCALER_REPO, "laion_text_cond_latent_upscaler_2_1_00470000_slim.pth" - ) - print(f"loading original model configuration from {orig_config_path}") - print(f"loading original model checkpoint from {orig_weights_path}") - - print("converting to diffusers unet") - orig_config = K.config.load_config(open(orig_config_path))["model"] - model = unet_model_from_original_config(orig_config) - - orig_checkpoint = torch.load(orig_weights_path, map_location=device)["model_ema"] - converted_checkpoint = unet_to_diffusers_checkpoint(model, orig_checkpoint) - - model.load_state_dict(converted_checkpoint, strict=True) - model.save_pretrained(args.dump_path) - print(f"saving converted unet model in {args.dump_path}") - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - - parser.add_argument("--dump_path", default=None, type=str, required=True, help="Path to the output model.") - args = parser.parse_args() - - main(args) diff --git a/spaces/deelerb/3dselfie/PIFu/lib/options.py b/spaces/deelerb/3dselfie/PIFu/lib/options.py deleted file mode 100644 index 3c76097b6a0b2a312c161bd634a4a137b5929bf3..0000000000000000000000000000000000000000 --- a/spaces/deelerb/3dselfie/PIFu/lib/options.py +++ /dev/null @@ -1,161 +0,0 @@ -import argparse -import os - - -class BaseOptions(): - def __init__(self): - self.initialized = False - argparse - def initialize(self, parser): - # Datasets related - g_data = parser.add_argument_group('Data') - g_data.add_argument('--dataroot', type=str, default='./data', - help='path to images (data folder)') - - g_data.add_argument('--loadSize', type=int, default=512, help='load size of input image') - - # Experiment related - g_exp = parser.add_argument_group('Experiment') - g_exp.add_argument('--name', type=str, 
default='example', - help='name of the experiment. It decides where to store samples and models') - g_exp.add_argument('--debug', action='store_true', help='debug mode or not') - - g_exp.add_argument('--num_views', type=int, default=1, help='How many views to use for multiview network.') - g_exp.add_argument('--random_multiview', action='store_true', help='Select random multiview combination.') - - # Training related - g_train = parser.add_argument_group('Training') - g_train.add_argument('--gpu_id', type=int, default=0, help='gpu id for cuda') - g_train.add_argument('--gpu_ids', type=str, default='0', help='gpu ids: e.g. 0 0,1,2, 0,2, -1 for CPU mode') - - g_train.add_argument('--num_threads', default=1, type=int, help='# sthreads for loading data') - g_train.add_argument('--serial_batches', action='store_true', - help='if true, takes images in order to make batches, otherwise takes them randomly') - g_train.add_argument('--pin_memory', action='store_true', help='pin_memory') - - g_train.add_argument('--batch_size', type=int, default=2, help='input batch size') - g_train.add_argument('--learning_rate', type=float, default=1e-3, help='adam learning rate') - g_train.add_argument('--learning_rateC', type=float, default=1e-3, help='adam learning rate') - g_train.add_argument('--num_epoch', type=int, default=100, help='num epoch to train') - - g_train.add_argument('--freq_plot', type=int, default=10, help='freqency of the error plot') - g_train.add_argument('--freq_save', type=int, default=50, help='freqency of the save_checkpoints') - g_train.add_argument('--freq_save_ply', type=int, default=100, help='freqency of the save ply') - - g_train.add_argument('--no_gen_mesh', action='store_true') - g_train.add_argument('--no_num_eval', action='store_true') - - g_train.add_argument('--resume_epoch', type=int, default=-1, help='epoch resuming the training') - g_train.add_argument('--continue_train', action='store_true', help='continue training: load the latest model') - - # Testing related - g_test = parser.add_argument_group('Testing') - g_test.add_argument('--resolution', type=int, default=256, help='# of grid in mesh reconstruction') - g_test.add_argument('--test_folder_path', type=str, default=None, help='the folder of test image') - - # Sampling related - g_sample = parser.add_argument_group('Sampling') - g_sample.add_argument('--sigma', type=float, default=5.0, help='perturbation standard deviation for positions') - - g_sample.add_argument('--num_sample_inout', type=int, default=5000, help='# of sampling points') - g_sample.add_argument('--num_sample_color', type=int, default=0, help='# of sampling points') - - g_sample.add_argument('--z_size', type=float, default=200.0, help='z normalization factor') - - # Model related - g_model = parser.add_argument_group('Model') - # General - g_model.add_argument('--norm', type=str, default='group', - help='instance normalization or batch normalization or group normalization') - g_model.add_argument('--norm_color', type=str, default='instance', - help='instance normalization or batch normalization or group normalization') - - # hg filter specify - g_model.add_argument('--num_stack', type=int, default=4, help='# of hourglass') - g_model.add_argument('--num_hourglass', type=int, default=2, help='# of stacked layer of hourglass') - g_model.add_argument('--skip_hourglass', action='store_true', help='skip connection in hourglass') - g_model.add_argument('--hg_down', type=str, default='ave_pool', help='ave pool || conv64 || conv128') - 
g_model.add_argument('--hourglass_dim', type=int, default='256', help='256 | 512') - - # Classification General - g_model.add_argument('--mlp_dim', nargs='+', default=[257, 1024, 512, 256, 128, 1], type=int, - help='# of dimensions of mlp') - g_model.add_argument('--mlp_dim_color', nargs='+', default=[513, 1024, 512, 256, 128, 3], - type=int, help='# of dimensions of color mlp') - - g_model.add_argument('--use_tanh', action='store_true', - help='using tanh after last conv of image_filter network') - - # for train - parser.add_argument('--random_flip', action='store_true', help='if random flip') - parser.add_argument('--random_trans', action='store_true', help='if random flip') - parser.add_argument('--random_scale', action='store_true', help='if random flip') - parser.add_argument('--no_residual', action='store_true', help='no skip connection in mlp') - parser.add_argument('--schedule', type=int, nargs='+', default=[60, 80], - help='Decrease learning rate at these epochs.') - parser.add_argument('--gamma', type=float, default=0.1, help='LR is multiplied by gamma on schedule.') - parser.add_argument('--color_loss_type', type=str, default='l1', help='mse | l1') - - # for eval - parser.add_argument('--val_test_error', action='store_true', help='validate errors of test data') - parser.add_argument('--val_train_error', action='store_true', help='validate errors of train data') - parser.add_argument('--gen_test_mesh', action='store_true', help='generate test mesh') - parser.add_argument('--gen_train_mesh', action='store_true', help='generate train mesh') - parser.add_argument('--all_mesh', action='store_true', help='generate meshs from all hourglass output') - parser.add_argument('--num_gen_mesh_test', type=int, default=1, - help='how many meshes to generate during testing') - - # path - parser.add_argument('--checkpoints_path', type=str, default='./checkpoints', help='path to save checkpoints') - parser.add_argument('--load_netG_checkpoint_path', type=str, default=None, help='path to save checkpoints') - parser.add_argument('--load_netC_checkpoint_path', type=str, default=None, help='path to save checkpoints') - parser.add_argument('--results_path', type=str, default='./results', help='path to save results ply') - parser.add_argument('--load_checkpoint_path', type=str, help='path to save results ply') - parser.add_argument('--single', type=str, default='', help='single data for training') - # for single image reconstruction - parser.add_argument('--mask_path', type=str, help='path for input mask') - parser.add_argument('--img_path', type=str, help='path for input image') - - # aug - group_aug = parser.add_argument_group('aug') - group_aug.add_argument('--aug_alstd', type=float, default=0.0, help='augmentation pca lighting alpha std') - group_aug.add_argument('--aug_bri', type=float, default=0.0, help='augmentation brightness') - group_aug.add_argument('--aug_con', type=float, default=0.0, help='augmentation contrast') - group_aug.add_argument('--aug_sat', type=float, default=0.0, help='augmentation saturation') - group_aug.add_argument('--aug_hue', type=float, default=0.0, help='augmentation hue') - group_aug.add_argument('--aug_blur', type=float, default=0.0, help='augmentation blur') - - # special tasks - self.initialized = True - return parser - - def gather_options(self): - # initialize parser with basic options - if not self.initialized: - parser = argparse.ArgumentParser( - formatter_class=argparse.ArgumentDefaultsHelpFormatter) - parser = self.initialize(parser) - - self.parser = parser 
- - return parser.parse_args() - - def print_options(self, opt): - message = '' - message += '----------------- Options ---------------\n' - for k, v in sorted(vars(opt).items()): - comment = '' - default = self.parser.get_default(k) - if v != default: - comment = '\t[default: %s]' % str(default) - message += '{:>25}: {:<30}{}\n'.format(str(k), str(v), comment) - message += '----------------- End -------------------' - print(message) - - def parse(self): - opt = self.gather_options() - return opt - - def parse_to_dict(self): - opt = self.gather_options() - return opt.__dict__ \ No newline at end of file diff --git a/spaces/deeplearning/audioldm-text-to-audio-generation/audioldm/clap/training/params.py b/spaces/deeplearning/audioldm-text-to-audio-generation/audioldm/clap/training/params.py deleted file mode 100644 index 0cc1a0e2d982e900988cf5a4b24b2e59b093537b..0000000000000000000000000000000000000000 --- a/spaces/deeplearning/audioldm-text-to-audio-generation/audioldm/clap/training/params.py +++ /dev/null @@ -1,563 +0,0 @@ -import argparse - - -def get_default_params(model_name): - # Params from paper (https://arxiv.org/pdf/2103.00020.pdf) - model_name = model_name.lower() - if "vit" in model_name: - return {"lr": 5.0e-4, "beta1": 0.9, "beta2": 0.98, "eps": 1.0e-6} - else: - return {"lr": 5.0e-4, "beta1": 0.9, "beta2": 0.999, "eps": 1.0e-8} - - -def parse_args(): - parser = argparse.ArgumentParser() - parser.add_argument( - "--train-data", - type=str, - default=None, - help="Path to h5 filewith training data", - ) - parser.add_argument( - "--val-data", - type=str, - default=None, - help="Path to h5 file with validation data", - ) - parser.add_argument( - "--freeze-text", - default=False, - action="store_true", - help="if you need to freeze the text encoder, make this True", - ) - parser.add_argument( - "--freeze-text-after", - type=int, - default=-1, - help="if you need to freeze the text encoder after (include) epoch x, set this param to x. Set -1 to disable it", - ) - parser.add_argument( - "--train-ipc", - type=str, - default=None, - help="Path to npy file of the number of instance per class in training data", - ) - parser.add_argument( - "--val-ipc", - type=str, - default=None, - help="Path to npy file of the number of instance per class in validation data", - ) - parser.add_argument( - "--train-num-samples", - type=int, - default=None, - help="Number of samples in dataset. Required for webdataset if not available in info file.", - ) - parser.add_argument( - "--val-num-samples", - type=int, - default=None, - help="Number of samples in dataset. 
Useful for webdataset if not available in info file.", - ) - parser.add_argument( - "--dataset-type", - choices=["webdataset", "csv", "auto", "toy"], - default="auto", - help="Which type of dataset to process.", - ) - parser.add_argument( - "--csv-separator", - type=str, - default="\t", - help="For csv-like datasets, which separator to use.", - ) - parser.add_argument( - "--csv-img-key", - type=str, - default="filepath", - help="For csv-like datasets, the name of the key for the image paths.", - ) - parser.add_argument( - "--csv-caption-key", - type=str, - default="title", - help="For csv-like datasets, the name of the key for the captions.", - ) - parser.add_argument( - "--imagenet-val", - type=str, - default=None, - help="Path to imagenet val set for conducting zero shot evaluation.", - ) - parser.add_argument( - "--imagenet-v2", - type=str, - default=None, - help="Path to imagenet v2 for conducting zero shot evaluation.", - ) - parser.add_argument( - "--datasetnames", - nargs="+", - default=None, - help="If loading webdataset, spedify the dataset names to load. Can be some of these: Clotho, audioset, audiocaps, BBCSoundEffects", - ) - parser.add_argument( - "--full-train-dataset", - nargs="+", - default=None, - help="Which dataset will be trained with all the subsets. (train+test)", - ) - parser.add_argument( - "--exclude-eval-dataset", - nargs="+", - default=None, - help="Which dataset will be excluded with evaluation", - ) - parser.add_argument( - "--datasetinfos", - nargs="+", - default=None, - help="If loading webdataset, spedify the dataset types to load. Can be some of these: train, test, valid, unbalanced_train, balanced_train, eval", - ) - parser.add_argument( - "--dataset-proportion", - type=float, - default=1.0, - help="How much proportion of dataset we want to train.", - ) - parser.add_argument( - "--remotedata", - default=False, - action="store_true", - help="if the dataset is remote, set this flag", - ) - parser.add_argument( - "--class-label-path", - type=str, - default=None, - help="The path of the class label pickle or csv.", - ) - parser.add_argument( - "--datasetpath", - type=str, - default="/mnt/audio_clip/webdataset_tar", - help="The path to the dataset", - ) - parser.add_argument( - "--logs", - type=str, - default="./logs/", - help="Where to store tensorboard logs. Use None to avoid storing logs.", - ) - parser.add_argument( - "--log-local", - action="store_true", - default=False, - help="log files on local master, otherwise global master only.", - ) - parser.add_argument( - "--name", - type=str, - default=None, - help="Optional identifier for the experiment when storing logs. Otherwise use current time.", - ) - parser.add_argument( - "--workers", type=int, default=1, help="Number of workers per GPU." - ) - parser.add_argument( - "--batch-size", type=int, default=64, help="Batch size per GPU." - ) - parser.add_argument( - "--epochs", type=int, default=32, help="Number of epochs to train for." 
- ) - parser.add_argument("--lr", type=float, default=None, help="Learning rate.") - parser.add_argument("--beta1", type=float, default=None, help="Adam beta 1.") - parser.add_argument("--beta2", type=float, default=None, help="Adam beta 2.") - parser.add_argument("--eps", type=float, default=None, help="Adam epsilon.") - parser.add_argument("--momentum", type=float, default=None, help="SGD epsilon.") - parser.add_argument("--wd", type=float, default=0.2, help="Weight decay.") - - parser.add_argument( - "--split-opt", - action="store_true", - default=False, - help="Use this flag to skip the learning rate decay.", - ) - parser.add_argument( - "--lr-pretrained", type=float, default=None, help="Learning rate for text." - ) - parser.add_argument( - "--beta1-pretrained", type=float, default=None, help="Adam beta 1 for text." - ) - parser.add_argument( - "--beta2-pretrained", type=float, default=None, help="Adam beta 2 for text." - ) - parser.add_argument( - "--eps-pretrained", type=float, default=None, help="Adam epsilon for text." - ) - parser.add_argument( - "--wd-pretrained", type=float, default=0.2, help="Weight decay for text." - ) - parser.add_argument( - "--momentum-pretrained", type=float, default=0.9, help="Momentum for text." - ) - parser.add_argument( - "--lr-new", type=float, default=None, help="Learning rate for audio." - ) - parser.add_argument( - "--beta1-new", type=float, default=None, help="Adam beta 1 for audio." - ) - parser.add_argument( - "--beta2-new", type=float, default=None, help="Adam beta 2 for audio." - ) - parser.add_argument( - "--eps-new", type=float, default=None, help="Adam epsilon for audio." - ) - parser.add_argument( - "--wd-new", type=float, default=0.2, help="Weight decay for audio." - ) - parser.add_argument( - "--momentum-new", type=float, default=0.9, help="Momentum for audio." - ) - parser.add_argument( - "--warmup", type=int, default=10000, help="Number of steps to warmup for." - ) - parser.add_argument( - "--use-bn-sync", - default=False, - action="store_true", - help="Whether to use batch norm sync.", - ) - parser.add_argument( - "--skip-scheduler", - action="store_true", - default=False, - help="Use this flag to skip the learning rate decay.", - ) - parser.add_argument( - "--save-frequency", type=int, default=1, help="How often to save checkpoints." - ) - parser.add_argument( - "--save-top-performance", - type=int, - default=0, - help="Save the top x performance weights if the value >0", - ) - parser.add_argument( - "--save-most-recent", - action="store_true", - default=False, - help="Always save the most recent model trained to epoch_latest.pt.", - ) - parser.add_argument( - "--zeroshot-frequency", type=int, default=2, help="How often to run zero shot." - ) - parser.add_argument( - "--val-frequency", - type=int, - default=1, - help="How often to run evaluation with val data.", - ) - parser.add_argument( - "--resume", - default=None, - type=str, - help="path to latest checkpoint (default: none)", - ) - parser.add_argument( - "--precision", - choices=["amp", "fp16", "fp32"], - default="amp", - help="Floating point precision.", - ) - parser.add_argument( - "--amodel", - type=str, - default="RN50", - help="Name of the audio backbone to use.", - ) - parser.add_argument( - "--tmodel", - type=str, - default="transformer", - help="Name of the text backbone to use. 
Can be [transformer, bert, roberta, bart]", - ) - parser.add_argument( - "--pretrained-audio", - default="", - type=str, - help="Use a pretrained audio model weights for the audio encoder of CLAP", - ) - parser.add_argument( - "--pretrained-text", - default="", - type=str, - help="Use a pretrained text model weights for the text encoder of CLAP", - ) - parser.add_argument( - "--pretrained", - default="", - type=str, - help="Use a pretrained CLIP model weights with the specified tag or file path.", - ) - parser.add_argument( - "--pretrained-image", - default=False, - action="store_true", - help="Load imagenet pretrained weights for image tower backbone if available.", - ) - parser.add_argument( - "--lock-image", - default=False, - action="store_true", - help="Lock full image tower by disabling gradients.", - ) - parser.add_argument( - "--lock-image-unlocked-groups", - type=int, - default=0, - help="Leave last n image tower layer groups unlocked.", - ) - parser.add_argument( - "--lock-image-freeze-bn-stats", - default=False, - action="store_true", - help="Freeze BatchNorm running stats in image tower for any locked layers.", - ) - parser.add_argument( - "--local-loss", - default=False, - action="store_true", - help="calculate loss w/ local features @ global (instead of realizing full global @ global matrix)", - ) - parser.add_argument( - "--gather-with-grad", - default=False, - action="store_true", - help="enable full distributed gradient for feature gather", - ) - parser.add_argument( - "--force-quick-gelu", - default=False, - action="store_true", - help="Force use of QuickGELU activation for non-OpenAI transformer models.", - ) - parser.add_argument( - "--torchscript", - default=False, - action="store_true", - help="torch.jit.script the model, also uses jit version of OpenAI models if pretrained=='openai'", - ) - parser.add_argument( - "--trace", - default=False, - action="store_true", - help="torch.jit.trace the model for inference / eval only", - ) - # arguments for distributed training - parser.add_argument( - "--dist-url", - default="env://", - type=str, - help="url used to set up distributed training", - ) - parser.add_argument( - "--dist-backend", default="nccl", type=str, help="distributed backend" - ) - parser.add_argument( - "--report-to", - default="", - type=str, - help="Options are ['wandb', 'tensorboard', 'wandb,tensorboard']", - ) - parser.add_argument( - "--wandb-notes", default="", type=str, help="Notes if logging with wandb" - ) - parser.add_argument( - "--C", type=float, default=3.16, help="inverse regularizer for logistic reg." 
- ) - parser.add_argument( - "--debug", - default=False, - action="store_true", - help="If true, more information is logged.", - ) - parser.add_argument( - "--copy-codebase", - default=False, - action="store_true", - help="If true, we copy the entire base on the log diretory, and execute from there.", - ) - parser.add_argument( - "--horovod", - default=False, - action="store_true", - help="Use horovod for distributed training.", - ) - parser.add_argument( - "--ddp-static-graph", - default=False, - action="store_true", - help="Enable static graph optimization for DDP in PyTorch >= 1.11.", - ) - parser.add_argument( - "--no-set-device-rank", - default=False, - action="store_true", - help="Don't set device index from local rank (when CUDA_VISIBLE_DEVICES restricted to one per proc).", - ) - parser.add_argument("--seed", type=int, default=4242, help="Default random seed.") - - parser.add_argument( - "--top-k-checkpoint-select-dataset", - type=str, - default="all", - help="The dataset of selecting top-k checkpoint.", - ) - - # @R10, @R@5, @R1, mAP@10 - parser.add_argument( - "--top-k-checkpoint-select-metric", - type=str, - default="_R@10", - help="The metric for selecting top-k checkpoint.", - ) - parser.add_argument( - "--openai-model-cache-dir", - type=str, - default="~/.cache/clip", - help="Directory to download OpenAI models.", - ) - parser.add_argument( - "--optimizer", - type=str, - default="adamw", - help="can be AdamW or SGD", - ) - parser.add_argument( - "--parallel-eval", - default=False, - action="store_true", - help="Eval in parallel (multi-GPU, multi-node).", - ) - - parser.add_argument( - "--no-eval", - default=False, - action="store_true", - help="Training without evaluation.", - ) - - parser.add_argument( - "--lp-mlp", - default=False, - action="store_true", - help="Linear Probe using MLP layer or not.", - ) - - parser.add_argument( - "--lp-freeze", - default=False, - action="store_true", - help="Linear Probe using Freeze CLAP or not", - ) - - parser.add_argument( - "--lp-act", - default="None", - type=str, - help="Options are ['relu','elu','prelu','softmax','sigmoid']", - ) - - parser.add_argument( - "--lp-loss", type=str, default="bce", help="Loss func of Linear Probe." - ) - - parser.add_argument( - "--lp-metrics", - type=str, - default="map,mauc,acc", - help="Metrics of Linear Probe.", - ) - - parser.add_argument( - "--lp-lr", type=float, default=1e-4, help="learning rate of linear probe" - ) - parser.add_argument( - "--kappa", - type=float, - default=0, - help="the kappa in the weighted contrastive loss, default is to turn off the weighted contrastive loss", - ) - - parser.add_argument( - "--data-filling", - type=str, - default="pad", - help="type of data filling when the audio length is shorter than the max length." - "Can be one of the following: repeat, repeatpad, pad", - ) - parser.add_argument( - "--data-truncating", - type=str, - default="rand_trunc", - help="type of data truncation when the audio length is longer than the max length." 
- "Can be one of the following: rand_trunc, fusion", - ) - - parser.add_argument( - "--clap-mlploss", - default=False, - action="store_true", - help="Using MLP loss for CLAP model or not", - ) - - parser.add_argument( - "--wandb-id", - type=str, - default=None, - help="the id of wandb experiment to restore.", - ) - - parser.add_argument( - "--sleep", type=float, default=0, help="sleep n seconds before start training" - ) - - # variable length processing - parser.add_argument( - "--enable-fusion", - default=False, - action="store_true", - help="Enable feature funsion for variable-length data", - ) - - parser.add_argument( - "--fusion-type", - type=str, - default="None", - help="Type is among ['channel_map', 'daf_1d','aff_1d','iaff_1d','daf_2d','aff_2d','iaff_2d']", - ) - - parser.add_argument( - "--mixup", - default=False, - action="store_true", - help="Enable mixup in finetuning training.", - ) - parser.add_argument( - "--text-augment-selection", - type=str, - default=None, - help="For selecting levels of augmented text. Type is among ['all', 'augment_only', 'none']", - ) - - args = parser.parse_args() - - # If some params are not passed, we use the default values based on model name. - default_params = get_default_params(args.amodel) - for name, val in default_params.items(): - if getattr(args, name) is None: - setattr(args, name, val) - - return args diff --git a/spaces/diacanFperku/AutoGPT/Download Goz Zbrush 4r4 Crack !!BETTER!!.md b/spaces/diacanFperku/AutoGPT/Download Goz Zbrush 4r4 Crack !!BETTER!!.md deleted file mode 100644 index e559beb3f4092e547df5a9fb5691cfe65419e5d2..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Download Goz Zbrush 4r4 Crack !!BETTER!!.md +++ /dev/null @@ -1,154 +0,0 @@ -
      -

      Download GoZ ZBrush 4R4 Crack and Create Amazing 3D Sculptures

      - -

ZBrush is one of the most popular and powerful digital sculpting programs in the world, used by millions of artists, designers, and enthusiasts to create stunning 3D models and artworks. ZBrush allows you to use customizable brushes to shape, texture, and paint virtual clay in a real-time environment that provides instant feedback.

      -

      download goz zbrush 4r4 crack


      Download File · https://gohhs.com/2uFVza



      - -

However, ZBrush is not cheap software, and you may not be able to afford it or get a legal license for it. That's why some people look for cracked versions of ZBrush, such as GoZ ZBrush 4R4 Crack, which is a modified version of ZBrush that includes a keygen that can generate a serial number and activate the software without requiring an iLok dongle or an internet connection.

      - -

      In this article, we will show you what GoZ ZBrush 4R4 Crack is, how to download and install it, and what advantages and disadvantages it has for your 3D projects.

      - -

      What is GoZ ZBrush 4R4 Crack?

      - -

      GoZ ZBrush 4R4 Crack is a cracked version of ZBrush 4R4, which is an older version of ZBrush that was released in 2012. GoZ stands for GoZbrush, which is a feature of ZBrush that allows you to seamlessly transfer your models between ZBrush and other 3D applications, such as Maya, Modo, Cinema 4D, etc.

      -

      - -

      GoZ ZBrush 4R4 Crack includes a keygen that can generate a serial number for the software and activate it without requiring an iLok dongle or an internet connection. The keygen was created by neviens, who is a hacker and a member of various torrent sites and forums.

      - -

      GoZ ZBrush 4R4 Crack can be downloaded from various sources, such as RuTracker.org, AudioZ, or Sway. However, these sources are not reliable or safe, as they may contain viruses, malware, or spyware that can harm your computer or steal your personal information.

      - -

      How to download and install GoZ ZBrush 4R4 Crack?

      - -

      If you want to download and install GoZ ZBrush 4R4 Crack, you can follow these steps:

      - -
        -
      1. Download: You can download the software from one of the sources mentioned above, such as RuTracker.org, AudioZ, or Sway. Make sure you download the correct version for your operating system (Windows or Mac OS X) and that the file includes the keygen.
      2. -
      3. Extract: You can extract the downloaded file using a program like WinRAR or 7-Zip. You will get a folder with the software and the keygen.
      4. -
      5. Install: You can run the setup file and follow the instructions to install the software on your computer. You will need to have Sun Java 2 Standard Edition Runtime Environment, 32-bit Version 6 or 5 installed on your computer.
      6. -
      7. Register: You can run the keygen and generate a serial number for the software. You will need to have an iLok dongle connected to your computer. You can enter the serial number when prompted by the software and activate it.
      8. -
      - -

      What are the advantages of GoZ ZBrush 4R4 Crack?

      - -

      GoZ ZBrush 4R4 Crack offers some advantages for 3D artists who want to use ZBrush for their projects. For example:

      - -
        -
      • Free access: You can use ZBrush without paying for it or having a legal license.
      • -
      • Full features: You can use all the features of ZBrush 4R4, such as Dynamesh, Curve Mode, Liquid Mode, Equidistant Gizmo Mesh Duplication, etc.
      • -
      • GoZ functionality: You can use GoZ to transfer your models between ZBrush and other 3D applications easily and quickly.
      • -
      • No internet connection required: You can use ZBrush offline without needing an internet connection or an iLok dongle.
      • -
      - -

      What are the disadvantages of GoZ ZBrush 4R4 Crack?

      - -

      While GoZ ZBrush 4R4 Crack may seem tempting for some users, it also has some disadvantages that you should be aware of before using it. For example:

      - -
        -
      • Compatibility issues: GoZ ZBrush 4R4 Crack may not be compatible with some devices and platforms that do not support ZBrush formats or require specific codecs or decoders.
      • -
      • Legal issues: GoZ ZBrush 4R4 Crack may not be legal to use in some countries or regions that have strict copyright laws or regulations regarding software piracy and distribution.
      • -
      • Ethical issues: GoZ ZBrush 4R4 Crack may not be ethical to use as it violates the intellectual property rights of Pixologic, the developer of ZBrush, and deprives them of their deserved revenue and recognition.
      • -
      • Technical issues: GoZ ZBrush 4R4 Crack may not work properly on some computers or systems that do not meet the minimum requirements or have incompatible hardware or software.
      • -
      • Security issues: GoZ ZBrush 4R4 Crack may contain viruses, malware, or spyware that can harm your computer or steal your personal information.
      • -
      • Moral issues: GoZ ZBrush 4R4 Crack may affect your reputation and credibility as a professional 3D artist who respects quality standards and ethical principles.
      • -
      - -


      -

      How to use GoZ ZBrush 4R4 Crack for your 3D projects?

      - -

      Once you have downloaded and installed GoZ ZBrush 4R4 Crack, you can use it for your 3D projects in various ways.

      - -

      For example, you can use it to:

      - -
        -
      • Create digital sculptures and models: You can use ZBrush to sculpt, texture, and paint virtual clay with customizable brushes and tools. You can create organic or hard surface models, characters, creatures, environments, props, etc.
      • -
      • Transfer models between ZBrush and other 3D applications: You can use GoZ to send your models from ZBrush to other 3D applications, such as Maya, Modo, Cinema 4D, etc., and vice versa. You can also update your models in both directions without losing any changes or details.
      • -
      • Render and export your models: You can use ZBrush to render your models with various materials, lights, and effects. You can also export your models in various formats, such as OBJ, STL, FBX, etc., for further editing or printing.
      • -
      - -

      What are some tips and tricks for using GoZ ZBrush 4R4 Crack?

      - -

      If you want to get the most out of GoZ ZBrush 4R4 Crack, you should follow some tips and tricks that can help you improve your workflow and results.

      - -

      For example, you should:

      - -
        -
      • Choose the right sculpting mode and settings: Depending on your project requirements and preferences, you should choose the sculpting mode and settings that suit your needs and expectations. You should consider factors such as resolution, subdivision levels, symmetry, masking, polygroups, etc.
      • -
      • Use presets and brushes: If you are not sure about the sculpting mode and settings, you can use presets and brushes that are provided by ZBrush or created by yourself or other users. You can also save your own presets and brushes for future use.
      • -
      • Use layers and history: If you want to make changes or corrections to your models without losing any progress or details, you can use layers and history to record and edit your sculpting actions. You can also use layers and history to create variations or animations of your models.
      • -
      • Use polypaint and materials: If you want to add color and texture to your models without using UV maps or external programs, you can use polypaint and materials to paint your models with pixel-by-pixel control. You can also use polypaint and materials to create masks or selections for your models.
      • -
      • Use GoZ options and preferences: If you want to customize the way GoZ works between ZBrush and other 3D applications, you can use GoZ options and preferences to adjust various settings, such as file format, scale, orientation, smoothing groups, etc.
      • -
      -

      How to update GoZ ZBrush 4R4 Crack to the latest version of ZBrush?

      - -

      If you want to update GoZ ZBrush 4R4 Crack to the latest version of ZBrush, you may face some difficulties and risks.

      - -

      For example, you may:

      - -
        -
      • Not be able to find a cracked version of the latest version of ZBrush: Since ZBrush is constantly updated and improved by Pixologic, it may be hard to find a cracked version of the latest version of ZBrush that works properly and safely.
      • -
      • Lose your models and projects: If you update GoZ ZBrush 4R4 Crack to the latest version of ZBrush, you may lose your models and projects that you created with GoZ ZBrush 4R4 Crack, as they may not be compatible or transferable with the new version.
      • -
      • Damage your computer or compromise your security: If you update GoZ ZBrush 4R4 Crack to the latest version of ZBrush, you may damage your computer or compromise your security, as the new cracked version may contain viruses, malware, or spyware that can harm your computer or steal your personal information.
      • -
      - -

Therefore, if you want to update GoZ ZBrush 4R4 Crack to the latest version of ZBrush, you should be careful and cautious, and back up your models and projects before updating.

      - -

      What are some resources and tutorials for using GoZ ZBrush 4R4 Crack?

      - -

      If you want to learn more about using GoZ ZBrush 4R4 Crack for your 3D projects, you can find some resources and tutorials that can help you improve your skills and knowledge.

      - -

      For example, you can find:

      - -
        -
      • ZBrush Documentation: This is the official documentation of ZBrush that provides information and instructions on how to use ZBrush and its features. You can access it online or offline from within ZBrush.
      • -
      • ZClassroom: This is the official learning portal of ZBrush that provides free video tutorials on how to use ZBrush and its features. You can access it online or offline from within ZBrush.
      • -
      • ZBrushCentral: This is the official community forum of ZBrush that provides a platform for users to share their works, ask questions, give feedback, and get support. You can access it online from any browser.
      • -
      • Pixologic YouTube Channel: This is the official YouTube channel of Pixologic that provides video demonstrations, interviews, live streams, and events related to ZBrush and its features. You can access it online from any browser or device.
      • -
      • ZBrushLIVE: This is the official live streaming platform of Pixologic that provides live broadcasts of artists using ZBrush and its features. You can access it online from any browser or device.
      • -
      -

      Conclusion

      - -

      GoZ ZBrush 4R4 Crack is a cracked version of ZBrush 4R4 that includes a keygen that can generate a serial number and activate the software without requiring an iLok dongle or an internet connection.

      - -

      It offers some advantages, such as free access, full features, GoZ functionality, and no internet connection required.

      - -

However, it also has some disadvantages, such as compatibility issues, legal issues, ethical issues, technical issues, security issues, and moral issues.

      - -

If you are interested in using GoZ ZBrush 4R4 Crack for your 3D projects, you should download and install it from a reliable source, register it with a valid serial number, and use it with caution and responsibility.

      - -

You should also consider some alternatives to GoZ ZBrush 4R4 Crack, such as buying a legal license of ZBrush, using an older version of ZBrush that is free or cheaper, or using another digital sculpting software that is more affordable or accessible.

      - -

You should also find some resources and tutorials for using GoZ ZBrush 4R4 Crack, such as ZBrush Documentation, ZClassroom, ZBrushCentral, the Pixologic YouTube Channel, and ZBrushLIVE.

      - -

      We hope this article has helped you understand what GoZ ZBrush 4R4 Crack is, how to download and install it, and what benefits and drawbacks it has for your 3D projects.

      3cee63e6c2
      -
      -
      \ No newline at end of file diff --git a/spaces/diffusers/sdxl-to-diffusers/convert.py b/spaces/diffusers/sdxl-to-diffusers/convert.py deleted file mode 100644 index fcdb0114d785f69bcd7120d2bd6ff7c9d6709ffb..0000000000000000000000000000000000000000 --- a/spaces/diffusers/sdxl-to-diffusers/convert.py +++ /dev/null @@ -1,69 +0,0 @@ -import gradio as gr -import requests -import os -import shutil -from pathlib import Path -from typing import Any -from tempfile import TemporaryDirectory -from typing import Optional - -import torch -from io import BytesIO - -from huggingface_hub import CommitInfo, Discussion, HfApi, hf_hub_download -from huggingface_hub.file_download import repo_folder_name -from diffusers import StableDiffusionXLPipeline -from transformers import CONFIG_MAPPING - - -COMMIT_MESSAGE = " This PR adds fp32 and fp16 weights in safetensors format to {}" - - -def convert_single(model_id: str, filename: str, folder: str, progress: Any, token: str): - progress(0, desc="Downloading model") - local_file = os.path.join(model_id, filename) - ckpt_file = local_file if os.path.isfile(local_file) else hf_hub_download(repo_id=model_id, filename=filename, token=token) - - pipeline = StableDiffusionXLPipeline.from_single_file(ckpt_file) - - pipeline.save_pretrained(folder, safe_serialization=True) - pipeline = pipeline.to(torch_dtype=torch.float16) - pipeline.save_pretrained(folder, safe_serialization=True, variant="fp16") - - return folder - - -def previous_pr(api: "HfApi", model_id: str, pr_title: str) -> Optional["Discussion"]: - try: - discussions = api.get_repo_discussions(repo_id=model_id) - except Exception: - return None - for discussion in discussions: - if discussion.status == "open" and discussion.is_pull_request and discussion.title == pr_title: - details = api.get_discussion_details(repo_id=model_id, discussion_num=discussion.num) - if details.target_branch == "refs/heads/main": - return discussion - - -def convert(token: str, model_id: str, filename: str, progress=gr.Progress()): - api = HfApi() - - pr_title = "Adding `diffusers` weights of this model" - - with TemporaryDirectory() as d: - folder = os.path.join(d, repo_folder_name(repo_id=model_id, repo_type="models")) - os.makedirs(folder) - new_pr = None - try: - folder = convert_single(model_id, filename, folder, progress, token) - progress(0.7, desc="Uploading to Hub") - new_pr = api.upload_folder(folder_path=folder, path_in_repo="./", repo_id=model_id, repo_type="model", token=token, commit_message=pr_title, commit_description=COMMIT_MESSAGE.format(model_id), create_pr=True) - pr_number = new_pr.split("%2F")[-1].split("/")[0] - link = f"Pr created at: {'https://huggingface.co/' + os.path.join(model_id, 'discussions', pr_number)}" - progress(1, desc="Done") - except Exception as e: - raise gr.exceptions.Error(str(e)) - finally: - shutil.rmtree(folder) - - return link \ No newline at end of file diff --git a/spaces/digitalxingtong/Bufeiyan-c-Bert-VITS2/text/symbols.py b/spaces/digitalxingtong/Bufeiyan-c-Bert-VITS2/text/symbols.py deleted file mode 100644 index 9dfae4e633829f20c4fd767b1c7a9198911ed801..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Bufeiyan-c-Bert-VITS2/text/symbols.py +++ /dev/null @@ -1,51 +0,0 @@ -punctuation = ['!', '?', '…', ",", ".", "'", '-'] -pu_symbols = punctuation + ["SP", "UNK"] -pad = '_' - -# chinese -zh_symbols = ['E', 'En', 'a', 'ai', 'an', 'ang', 'ao', 'b', 'c', 'ch', 'd', 'e', 'ei', 'en', 'eng', 'er', 'f', 'g', 'h', - 'i', 'i0', 'ia', 'ian', 'iang', 'iao', 'ie', 'in', 
'ing', 'iong', 'ir', 'iu', 'j', 'k', 'l', 'm', 'n', 'o', - 'ong', - 'ou', 'p', 'q', 'r', 's', 'sh', 't', 'u', 'ua', 'uai', 'uan', 'uang', 'ui', 'un', 'uo', 'v', 'van', 've', 'vn', - 'w', 'x', 'y', 'z', 'zh', - "AA", "EE", "OO"] -num_zh_tones = 6 - -# japanese -ja_symbols = ['I', 'N', 'U', 'a', 'b', 'by', 'ch', 'cl', 'd', 'dy', 'e', 'f', 'g', 'gy', 'h', 'hy', 'i', 'j', 'k', 'ky', - 'm', 'my', 'n', 'ny', 'o', 'p', 'py', 'r', 'ry', 's', 'sh', 't', 'ts', 'u', 'V', 'w', 'y', 'z'] -num_ja_tones = 1 - -# English -en_symbols = ['aa', 'ae', 'ah', 'ao', 'aw', 'ay', 'b', 'ch', 'd', 'dh', 'eh', 'er', 'ey', 'f', 'g', 'hh', 'ih', 'iy', - 'jh', 'k', 'l', 'm', 'n', 'ng', 'ow', 'oy', 'p', 'r', 's', - 'sh', 't', 'th', 'uh', 'uw', 'V', 'w', 'y', 'z', 'zh'] -num_en_tones = 4 - -# combine all symbols -normal_symbols = sorted(set(zh_symbols + ja_symbols + en_symbols)) -symbols = [pad] + normal_symbols + pu_symbols -sil_phonemes_ids = [symbols.index(i) for i in pu_symbols] - -# combine all tones -num_tones = num_zh_tones + num_ja_tones + num_en_tones - -# language maps -language_id_map = { - 'ZH': 0, - "JA": 1, - "EN": 2 -} -num_languages = len(language_id_map.keys()) - -language_tone_start_map = { - 'ZH': 0, - "JA": num_zh_tones, - "EN": num_zh_tones + num_ja_tones -} - -if __name__ == '__main__': - a = set(zh_symbols) - b = set(en_symbols) - print(sorted(a&b)) - diff --git a/spaces/digitalxingtong/Xingtong-Read-Bert-VITS2/losses.py b/spaces/digitalxingtong/Xingtong-Read-Bert-VITS2/losses.py deleted file mode 100644 index fb22a0e834dd87edaa37bb8190eee2c3c7abe0d5..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Xingtong-Read-Bert-VITS2/losses.py +++ /dev/null @@ -1,61 +0,0 @@ -import torch -from torch.nn import functional as F - -import commons - - -def feature_loss(fmap_r, fmap_g): - loss = 0 - for dr, dg in zip(fmap_r, fmap_g): - for rl, gl in zip(dr, dg): - rl = rl.float().detach() - gl = gl.float() - loss += torch.mean(torch.abs(rl - gl)) - - return loss * 2 - - -def discriminator_loss(disc_real_outputs, disc_generated_outputs): - loss = 0 - r_losses = [] - g_losses = [] - for dr, dg in zip(disc_real_outputs, disc_generated_outputs): - dr = dr.float() - dg = dg.float() - r_loss = torch.mean((1-dr)**2) - g_loss = torch.mean(dg**2) - loss += (r_loss + g_loss) - r_losses.append(r_loss.item()) - g_losses.append(g_loss.item()) - - return loss, r_losses, g_losses - - -def generator_loss(disc_outputs): - loss = 0 - gen_losses = [] - for dg in disc_outputs: - dg = dg.float() - l = torch.mean((1-dg)**2) - gen_losses.append(l) - loss += l - - return loss, gen_losses - - -def kl_loss(z_p, logs_q, m_p, logs_p, z_mask): - """ - z_p, logs_q: [b, h, t_t] - m_p, logs_p: [b, h, t_t] - """ - z_p = z_p.float() - logs_q = logs_q.float() - m_p = m_p.float() - logs_p = logs_p.float() - z_mask = z_mask.float() - - kl = logs_p - logs_q - 0.5 - kl += 0.5 * ((z_p - m_p)**2) * torch.exp(-2. 
* logs_p) - kl = torch.sum(kl * z_mask) - l = kl / torch.sum(z_mask) - return l diff --git a/spaces/dineshreddy/WALT/mmdet/models/dense_heads/paa_head.py b/spaces/dineshreddy/WALT/mmdet/models/dense_heads/paa_head.py deleted file mode 100644 index e067b0121cf8b8230c0c9c6b8cfd41f56be4e298..0000000000000000000000000000000000000000 --- a/spaces/dineshreddy/WALT/mmdet/models/dense_heads/paa_head.py +++ /dev/null @@ -1,671 +0,0 @@ -import numpy as np -import torch -from mmcv.runner import force_fp32 - -from mmdet.core import multi_apply, multiclass_nms -from mmdet.core.bbox.iou_calculators import bbox_overlaps -from mmdet.models import HEADS -from mmdet.models.dense_heads import ATSSHead - -EPS = 1e-12 -try: - import sklearn.mixture as skm -except ImportError: - skm = None - - -def levels_to_images(mlvl_tensor): - """Concat multi-level feature maps by image. - - [feature_level0, feature_level1...] -> [feature_image0, feature_image1...] - Convert the shape of each element in mlvl_tensor from (N, C, H, W) to - (N, H*W , C), then split the element to N elements with shape (H*W, C), and - concat elements in same image of all level along first dimension. - - Args: - mlvl_tensor (list[torch.Tensor]): list of Tensor which collect from - corresponding level. Each element is of shape (N, C, H, W) - - Returns: - list[torch.Tensor]: A list that contains N tensors and each tensor is - of shape (num_elements, C) - """ - batch_size = mlvl_tensor[0].size(0) - batch_list = [[] for _ in range(batch_size)] - channels = mlvl_tensor[0].size(1) - for t in mlvl_tensor: - t = t.permute(0, 2, 3, 1) - t = t.view(batch_size, -1, channels).contiguous() - for img in range(batch_size): - batch_list[img].append(t[img]) - return [torch.cat(item, 0) for item in batch_list] - - -@HEADS.register_module() -class PAAHead(ATSSHead): - """Head of PAAAssignment: Probabilistic Anchor Assignment with IoU - Prediction for Object Detection. - - Code is modified from the `official github repo - `_. - - More details can be found in the `paper - `_ . - - Args: - topk (int): Select topk samples with smallest loss in - each level. - score_voting (bool): Whether to use score voting in post-process. - covariance_type : String describing the type of covariance parameters - to be used in :class:`sklearn.mixture.GaussianMixture`. - It must be one of: - - - 'full': each component has its own general covariance matrix - - 'tied': all components share the same general covariance matrix - - 'diag': each component has its own diagonal covariance matrix - - 'spherical': each component has its own single variance - Default: 'diag'. From 'full' to 'spherical', the gmm fitting - process is faster yet the performance could be influenced. For most - cases, 'diag' should be a good choice. - """ - - def __init__(self, - *args, - topk=9, - score_voting=True, - covariance_type='diag', - **kwargs): - # topk used in paa reassign process - self.topk = topk - self.with_score_voting = score_voting - self.covariance_type = covariance_type - super(PAAHead, self).__init__(*args, **kwargs) - - @force_fp32(apply_to=('cls_scores', 'bbox_preds', 'iou_preds')) - def loss(self, - cls_scores, - bbox_preds, - iou_preds, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute losses of the head. 
- - Args: - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (N, num_anchors * num_classes, H, W) - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W) - iou_preds (list[Tensor]): iou_preds for each scale - level with shape (N, num_anchors * 1, H, W) - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (list[Tensor] | None): Specify which bounding - boxes can be ignored when are computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss gmm_assignment. - """ - - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.anchor_generator.num_levels - - device = cls_scores[0].device - anchor_list, valid_flag_list = self.get_anchors( - featmap_sizes, img_metas, device=device) - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - cls_reg_targets = self.get_targets( - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - label_channels=label_channels, - ) - (labels, labels_weight, bboxes_target, bboxes_weight, pos_inds, - pos_gt_index) = cls_reg_targets - cls_scores = levels_to_images(cls_scores) - cls_scores = [ - item.reshape(-1, self.cls_out_channels) for item in cls_scores - ] - bbox_preds = levels_to_images(bbox_preds) - bbox_preds = [item.reshape(-1, 4) for item in bbox_preds] - iou_preds = levels_to_images(iou_preds) - iou_preds = [item.reshape(-1, 1) for item in iou_preds] - pos_losses_list, = multi_apply(self.get_pos_loss, anchor_list, - cls_scores, bbox_preds, labels, - labels_weight, bboxes_target, - bboxes_weight, pos_inds) - - with torch.no_grad(): - reassign_labels, reassign_label_weight, \ - reassign_bbox_weights, num_pos = multi_apply( - self.paa_reassign, - pos_losses_list, - labels, - labels_weight, - bboxes_weight, - pos_inds, - pos_gt_index, - anchor_list) - num_pos = sum(num_pos) - # convert all tensor list to a flatten tensor - cls_scores = torch.cat(cls_scores, 0).view(-1, cls_scores[0].size(-1)) - bbox_preds = torch.cat(bbox_preds, 0).view(-1, bbox_preds[0].size(-1)) - iou_preds = torch.cat(iou_preds, 0).view(-1, iou_preds[0].size(-1)) - labels = torch.cat(reassign_labels, 0).view(-1) - flatten_anchors = torch.cat( - [torch.cat(item, 0) for item in anchor_list]) - labels_weight = torch.cat(reassign_label_weight, 0).view(-1) - bboxes_target = torch.cat(bboxes_target, - 0).view(-1, bboxes_target[0].size(-1)) - - pos_inds_flatten = ((labels >= 0) - & - (labels < self.num_classes)).nonzero().reshape(-1) - - losses_cls = self.loss_cls( - cls_scores, - labels, - labels_weight, - avg_factor=max(num_pos, len(img_metas))) # avoid num_pos=0 - if num_pos: - pos_bbox_pred = self.bbox_coder.decode( - flatten_anchors[pos_inds_flatten], - bbox_preds[pos_inds_flatten]) - pos_bbox_target = bboxes_target[pos_inds_flatten] - iou_target = bbox_overlaps( - pos_bbox_pred.detach(), pos_bbox_target, is_aligned=True) - losses_iou = self.loss_centerness( - iou_preds[pos_inds_flatten], - iou_target.unsqueeze(-1), - avg_factor=num_pos) - losses_bbox = self.loss_bbox( - pos_bbox_pred, - pos_bbox_target, - iou_target.clamp(min=EPS), - avg_factor=iou_target.sum()) - else: - losses_iou = iou_preds.sum() * 0 - losses_bbox 
= bbox_preds.sum() * 0 - - return dict( - loss_cls=losses_cls, loss_bbox=losses_bbox, loss_iou=losses_iou) - - def get_pos_loss(self, anchors, cls_score, bbox_pred, label, label_weight, - bbox_target, bbox_weight, pos_inds): - """Calculate loss of all potential positive samples obtained from first - match process. - - Args: - anchors (list[Tensor]): Anchors of each scale. - cls_score (Tensor): Box scores of single image with shape - (num_anchors, num_classes) - bbox_pred (Tensor): Box energies / deltas of single image - with shape (num_anchors, 4) - label (Tensor): classification target of each anchor with - shape (num_anchors,) - label_weight (Tensor): Classification loss weight of each - anchor with shape (num_anchors). - bbox_target (dict): Regression target of each anchor with - shape (num_anchors, 4). - bbox_weight (Tensor): Bbox weight of each anchor with shape - (num_anchors, 4). - pos_inds (Tensor): Index of all positive samples got from - first assign process. - - Returns: - Tensor: Losses of all positive samples in single image. - """ - if not len(pos_inds): - return cls_score.new([]), - anchors_all_level = torch.cat(anchors, 0) - pos_scores = cls_score[pos_inds] - pos_bbox_pred = bbox_pred[pos_inds] - pos_label = label[pos_inds] - pos_label_weight = label_weight[pos_inds] - pos_bbox_target = bbox_target[pos_inds] - pos_bbox_weight = bbox_weight[pos_inds] - pos_anchors = anchors_all_level[pos_inds] - pos_bbox_pred = self.bbox_coder.decode(pos_anchors, pos_bbox_pred) - - # to keep loss dimension - loss_cls = self.loss_cls( - pos_scores, - pos_label, - pos_label_weight, - avg_factor=self.loss_cls.loss_weight, - reduction_override='none') - - loss_bbox = self.loss_bbox( - pos_bbox_pred, - pos_bbox_target, - pos_bbox_weight, - avg_factor=self.loss_cls.loss_weight, - reduction_override='none') - - loss_cls = loss_cls.sum(-1) - pos_loss = loss_bbox + loss_cls - return pos_loss, - - def paa_reassign(self, pos_losses, label, label_weight, bbox_weight, - pos_inds, pos_gt_inds, anchors): - """Fit loss to GMM distribution and separate positive, ignore, negative - samples again with GMM model. - - Args: - pos_losses (Tensor): Losses of all positive samples in - single image. - label (Tensor): classification target of each anchor with - shape (num_anchors,) - label_weight (Tensor): Classification loss weight of each - anchor with shape (num_anchors). - bbox_weight (Tensor): Bbox weight of each anchor with shape - (num_anchors, 4). - pos_inds (Tensor): Index of all positive samples got from - first assign process. - pos_gt_inds (Tensor): Gt_index of all positive samples got - from first assign process. - anchors (list[Tensor]): Anchors of each scale. - - Returns: - tuple: Usually returns a tuple containing learning targets. - - - label (Tensor): classification target of each anchor after - paa assign, with shape (num_anchors,) - - label_weight (Tensor): Classification loss weight of each - anchor after paa assign, with shape (num_anchors). - - bbox_weight (Tensor): Bbox weight of each anchor with shape - (num_anchors, 4). - - num_pos (int): The number of positive samples after paa - assign. 
- """ - if not len(pos_inds): - return label, label_weight, bbox_weight, 0 - label = label.clone() - label_weight = label_weight.clone() - bbox_weight = bbox_weight.clone() - num_gt = pos_gt_inds.max() + 1 - num_level = len(anchors) - num_anchors_each_level = [item.size(0) for item in anchors] - num_anchors_each_level.insert(0, 0) - inds_level_interval = np.cumsum(num_anchors_each_level) - pos_level_mask = [] - for i in range(num_level): - mask = (pos_inds >= inds_level_interval[i]) & ( - pos_inds < inds_level_interval[i + 1]) - pos_level_mask.append(mask) - pos_inds_after_paa = [label.new_tensor([])] - ignore_inds_after_paa = [label.new_tensor([])] - for gt_ind in range(num_gt): - pos_inds_gmm = [] - pos_loss_gmm = [] - gt_mask = pos_gt_inds == gt_ind - for level in range(num_level): - level_mask = pos_level_mask[level] - level_gt_mask = level_mask & gt_mask - value, topk_inds = pos_losses[level_gt_mask].topk( - min(level_gt_mask.sum(), self.topk), largest=False) - pos_inds_gmm.append(pos_inds[level_gt_mask][topk_inds]) - pos_loss_gmm.append(value) - pos_inds_gmm = torch.cat(pos_inds_gmm) - pos_loss_gmm = torch.cat(pos_loss_gmm) - # fix gmm need at least two sample - if len(pos_inds_gmm) < 2: - continue - device = pos_inds_gmm.device - pos_loss_gmm, sort_inds = pos_loss_gmm.sort() - pos_inds_gmm = pos_inds_gmm[sort_inds] - pos_loss_gmm = pos_loss_gmm.view(-1, 1).cpu().numpy() - min_loss, max_loss = pos_loss_gmm.min(), pos_loss_gmm.max() - means_init = np.array([min_loss, max_loss]).reshape(2, 1) - weights_init = np.array([0.5, 0.5]) - precisions_init = np.array([1.0, 1.0]).reshape(2, 1, 1) # full - if self.covariance_type == 'spherical': - precisions_init = precisions_init.reshape(2) - elif self.covariance_type == 'diag': - precisions_init = precisions_init.reshape(2, 1) - elif self.covariance_type == 'tied': - precisions_init = np.array([[1.0]]) - if skm is None: - raise ImportError('Please run "pip install sklearn" ' - 'to install sklearn first.') - gmm = skm.GaussianMixture( - 2, - weights_init=weights_init, - means_init=means_init, - precisions_init=precisions_init, - covariance_type=self.covariance_type) - gmm.fit(pos_loss_gmm) - gmm_assignment = gmm.predict(pos_loss_gmm) - scores = gmm.score_samples(pos_loss_gmm) - gmm_assignment = torch.from_numpy(gmm_assignment).to(device) - scores = torch.from_numpy(scores).to(device) - - pos_inds_temp, ignore_inds_temp = self.gmm_separation_scheme( - gmm_assignment, scores, pos_inds_gmm) - pos_inds_after_paa.append(pos_inds_temp) - ignore_inds_after_paa.append(ignore_inds_temp) - - pos_inds_after_paa = torch.cat(pos_inds_after_paa) - ignore_inds_after_paa = torch.cat(ignore_inds_after_paa) - reassign_mask = (pos_inds.unsqueeze(1) != pos_inds_after_paa).all(1) - reassign_ids = pos_inds[reassign_mask] - label[reassign_ids] = self.num_classes - label_weight[ignore_inds_after_paa] = 0 - bbox_weight[reassign_ids] = 0 - num_pos = len(pos_inds_after_paa) - return label, label_weight, bbox_weight, num_pos - - def gmm_separation_scheme(self, gmm_assignment, scores, pos_inds_gmm): - """A general separation scheme for gmm model. - - It separates a GMM distribution of candidate samples into three - parts, 0 1 and uncertain areas, and you can implement other - separation schemes by rewriting this function. - - Args: - gmm_assignment (Tensor): The prediction of GMM which is of shape - (num_samples,). The 0/1 value indicates the distribution - that each sample comes from. - scores (Tensor): The probability of sample coming from the - fit GMM distribution. 
The tensor is of shape (num_samples,). - pos_inds_gmm (Tensor): All the indexes of samples which are used - to fit GMM model. The tensor is of shape (num_samples,) - - Returns: - tuple[Tensor]: The indices of positive and ignored samples. - - - pos_inds_temp (Tensor): Indices of positive samples. - - ignore_inds_temp (Tensor): Indices of ignore samples. - """ - # The implementation is (c) in Fig.3 in origin paper instead of (b). - # You can refer to issues such as - # https://github.com/kkhoot/PAA/issues/8 and - # https://github.com/kkhoot/PAA/issues/9. - fgs = gmm_assignment == 0 - pos_inds_temp = fgs.new_tensor([], dtype=torch.long) - ignore_inds_temp = fgs.new_tensor([], dtype=torch.long) - if fgs.nonzero().numel(): - _, pos_thr_ind = scores[fgs].topk(1) - pos_inds_temp = pos_inds_gmm[fgs][:pos_thr_ind + 1] - ignore_inds_temp = pos_inds_gmm.new_tensor([]) - return pos_inds_temp, ignore_inds_temp - - def get_targets( - self, - anchor_list, - valid_flag_list, - gt_bboxes_list, - img_metas, - gt_bboxes_ignore_list=None, - gt_labels_list=None, - label_channels=1, - unmap_outputs=True, - ): - """Get targets for PAA head. - - This method is almost the same as `AnchorHead.get_targets()`. We direct - return the results from _get_targets_single instead map it to levels - by images_to_levels function. - - Args: - anchor_list (list[list[Tensor]]): Multi level anchors of each - image. The outer list indicates images, and the inner list - corresponds to feature levels of the image. Each element of - the inner list is a tensor of shape (num_anchors, 4). - valid_flag_list (list[list[Tensor]]): Multi level valid flags of - each image. The outer list indicates images, and the inner list - corresponds to feature levels of the image. Each element of - the inner list is a tensor of shape (num_anchors, ) - gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image. - img_metas (list[dict]): Meta info of each image. - gt_bboxes_ignore_list (list[Tensor]): Ground truth bboxes to be - ignored. - gt_labels_list (list[Tensor]): Ground truth labels of each box. - label_channels (int): Channel of label. - unmap_outputs (bool): Whether to map outputs back to the original - set of anchors. - - Returns: - tuple: Usually returns a tuple containing learning targets. - - - labels (list[Tensor]): Labels of all anchors, each with - shape (num_anchors,). - - label_weights (list[Tensor]): Label weights of all anchor. - each with shape (num_anchors,). - - bbox_targets (list[Tensor]): BBox targets of all anchors. - each with shape (num_anchors, 4). - - bbox_weights (list[Tensor]): BBox weights of all anchors. - each with shape (num_anchors, 4). - - pos_inds (list[Tensor]): Contains all index of positive - sample in all anchor. - - gt_inds (list[Tensor]): Contains all gt_index of positive - sample in all anchor. 
- """ - - num_imgs = len(img_metas) - assert len(anchor_list) == len(valid_flag_list) == num_imgs - concat_anchor_list = [] - concat_valid_flag_list = [] - for i in range(num_imgs): - assert len(anchor_list[i]) == len(valid_flag_list[i]) - concat_anchor_list.append(torch.cat(anchor_list[i])) - concat_valid_flag_list.append(torch.cat(valid_flag_list[i])) - - # compute targets for each image - if gt_bboxes_ignore_list is None: - gt_bboxes_ignore_list = [None for _ in range(num_imgs)] - if gt_labels_list is None: - gt_labels_list = [None for _ in range(num_imgs)] - results = multi_apply( - self._get_targets_single, - concat_anchor_list, - concat_valid_flag_list, - gt_bboxes_list, - gt_bboxes_ignore_list, - gt_labels_list, - img_metas, - label_channels=label_channels, - unmap_outputs=unmap_outputs) - - (labels, label_weights, bbox_targets, bbox_weights, valid_pos_inds, - valid_neg_inds, sampling_result) = results - - # Due to valid flag of anchors, we have to calculate the real pos_inds - # in origin anchor set. - pos_inds = [] - for i, single_labels in enumerate(labels): - pos_mask = (0 <= single_labels) & ( - single_labels < self.num_classes) - pos_inds.append(pos_mask.nonzero().view(-1)) - - gt_inds = [item.pos_assigned_gt_inds for item in sampling_result] - return (labels, label_weights, bbox_targets, bbox_weights, pos_inds, - gt_inds) - - def _get_targets_single(self, - flat_anchors, - valid_flags, - gt_bboxes, - gt_bboxes_ignore, - gt_labels, - img_meta, - label_channels=1, - unmap_outputs=True): - """Compute regression and classification targets for anchors in a - single image. - - This method is same as `AnchorHead._get_targets_single()`. - """ - assert unmap_outputs, 'We must map outputs back to the original' \ - 'set of anchors in PAAhead' - return super(ATSSHead, self)._get_targets_single( - flat_anchors, - valid_flags, - gt_bboxes, - gt_bboxes_ignore, - gt_labels, - img_meta, - label_channels=1, - unmap_outputs=True) - - def _get_bboxes(self, - cls_scores, - bbox_preds, - iou_preds, - mlvl_anchors, - img_shapes, - scale_factors, - cfg, - rescale=False, - with_nms=True): - """Transform outputs for a single batch item into labeled boxes. - - This method is almost same as `ATSSHead._get_bboxes()`. - We use sqrt(iou_preds * cls_scores) in NMS process instead of just - cls_scores. Besides, score voting is used when `` score_voting`` - is set to True. 
- """ - assert with_nms, 'PAA only supports "with_nms=True" now' - assert len(cls_scores) == len(bbox_preds) == len(mlvl_anchors) - batch_size = cls_scores[0].shape[0] - - mlvl_bboxes = [] - mlvl_scores = [] - mlvl_iou_preds = [] - for cls_score, bbox_pred, iou_preds, anchors in zip( - cls_scores, bbox_preds, iou_preds, mlvl_anchors): - assert cls_score.size()[-2:] == bbox_pred.size()[-2:] - - scores = cls_score.permute(0, 2, 3, 1).reshape( - batch_size, -1, self.cls_out_channels).sigmoid() - bbox_pred = bbox_pred.permute(0, 2, 3, - 1).reshape(batch_size, -1, 4) - iou_preds = iou_preds.permute(0, 2, 3, 1).reshape(batch_size, - -1).sigmoid() - - nms_pre = cfg.get('nms_pre', -1) - if nms_pre > 0 and scores.shape[1] > nms_pre: - max_scores, _ = (scores * iou_preds[..., None]).sqrt().max(-1) - _, topk_inds = max_scores.topk(nms_pre) - batch_inds = torch.arange(batch_size).view( - -1, 1).expand_as(topk_inds).long() - anchors = anchors[topk_inds, :] - bbox_pred = bbox_pred[batch_inds, topk_inds, :] - scores = scores[batch_inds, topk_inds, :] - iou_preds = iou_preds[batch_inds, topk_inds] - else: - anchors = anchors.expand_as(bbox_pred) - - bboxes = self.bbox_coder.decode( - anchors, bbox_pred, max_shape=img_shapes) - mlvl_bboxes.append(bboxes) - mlvl_scores.append(scores) - mlvl_iou_preds.append(iou_preds) - - batch_mlvl_bboxes = torch.cat(mlvl_bboxes, dim=1) - if rescale: - batch_mlvl_bboxes /= batch_mlvl_bboxes.new_tensor( - scale_factors).unsqueeze(1) - batch_mlvl_scores = torch.cat(mlvl_scores, dim=1) - # Add a dummy background class to the backend when using sigmoid - # remind that we set FG labels to [0, num_class-1] since mmdet v2.0 - # BG cat_id: num_class - padding = batch_mlvl_scores.new_zeros(batch_size, - batch_mlvl_scores.shape[1], 1) - batch_mlvl_scores = torch.cat([batch_mlvl_scores, padding], dim=-1) - batch_mlvl_iou_preds = torch.cat(mlvl_iou_preds, dim=1) - batch_mlvl_nms_scores = (batch_mlvl_scores * - batch_mlvl_iou_preds[..., None]).sqrt() - - det_results = [] - for (mlvl_bboxes, mlvl_scores) in zip(batch_mlvl_bboxes, - batch_mlvl_nms_scores): - det_bbox, det_label = multiclass_nms( - mlvl_bboxes, - mlvl_scores, - cfg.score_thr, - cfg.nms, - cfg.max_per_img, - score_factors=None) - if self.with_score_voting and len(det_bbox) > 0: - det_bbox, det_label = self.score_voting( - det_bbox, det_label, mlvl_bboxes, mlvl_scores, - cfg.score_thr) - det_results.append(tuple([det_bbox, det_label])) - - return det_results - - def score_voting(self, det_bboxes, det_labels, mlvl_bboxes, - mlvl_nms_scores, score_thr): - """Implementation of score voting method works on each remaining boxes - after NMS procedure. - - Args: - det_bboxes (Tensor): Remaining boxes after NMS procedure, - with shape (k, 5), each dimension means - (x1, y1, x2, y2, score). - det_labels (Tensor): The label of remaining boxes, with shape - (k, 1),Labels are 0-based. - mlvl_bboxes (Tensor): All boxes before the NMS procedure, - with shape (num_anchors,4). - mlvl_nms_scores (Tensor): The scores of all boxes which is used - in the NMS procedure, with shape (num_anchors, num_class) - mlvl_iou_preds (Tensor): The predictions of IOU of all boxes - before the NMS procedure, with shape (num_anchors, 1) - score_thr (float): The score threshold of bboxes. - - Returns: - tuple: Usually returns a tuple containing voting results. - - - det_bboxes_voted (Tensor): Remaining boxes after - score voting procedure, with shape (k, 5), each - dimension means (x1, y1, x2, y2, score). 
- - det_labels_voted (Tensor): Label of remaining bboxes - after voting, with shape (num_anchors,). - """ - candidate_mask = mlvl_nms_scores > score_thr - candidate_mask_nonzeros = candidate_mask.nonzero() - candidate_inds = candidate_mask_nonzeros[:, 0] - candidate_labels = candidate_mask_nonzeros[:, 1] - candidate_bboxes = mlvl_bboxes[candidate_inds] - candidate_scores = mlvl_nms_scores[candidate_mask] - det_bboxes_voted = [] - det_labels_voted = [] - for cls in range(self.cls_out_channels): - candidate_cls_mask = candidate_labels == cls - if not candidate_cls_mask.any(): - continue - candidate_cls_scores = candidate_scores[candidate_cls_mask] - candidate_cls_bboxes = candidate_bboxes[candidate_cls_mask] - det_cls_mask = det_labels == cls - det_cls_bboxes = det_bboxes[det_cls_mask].view( - -1, det_bboxes.size(-1)) - det_candidate_ious = bbox_overlaps(det_cls_bboxes[:, :4], - candidate_cls_bboxes) - for det_ind in range(len(det_cls_bboxes)): - single_det_ious = det_candidate_ious[det_ind] - pos_ious_mask = single_det_ious > 0.01 - pos_ious = single_det_ious[pos_ious_mask] - pos_bboxes = candidate_cls_bboxes[pos_ious_mask] - pos_scores = candidate_cls_scores[pos_ious_mask] - pis = (torch.exp(-(1 - pos_ious)**2 / 0.025) * - pos_scores)[:, None] - voted_box = torch.sum( - pis * pos_bboxes, dim=0) / torch.sum( - pis, dim=0) - voted_score = det_cls_bboxes[det_ind][-1:][None, :] - det_bboxes_voted.append( - torch.cat((voted_box[None, :], voted_score), dim=1)) - det_labels_voted.append(cls) - - det_bboxes_voted = torch.cat(det_bboxes_voted, dim=0) - det_labels_voted = det_labels.new_tensor(det_labels_voted) - return det_bboxes_voted, det_labels_voted diff --git a/spaces/docs-demos/albert-base-v2/README.md b/spaces/docs-demos/albert-base-v2/README.md deleted file mode 100644 index 7f2d27dd63fc414a252e75a5403714fabc55c605..0000000000000000000000000000000000000000 --- a/spaces/docs-demos/albert-base-v2/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: ALBERT -emoji: 🌖 -colorFrom: green -colorTo: gray -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
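For reference, a complete front-matter block combining the fields documented above might look like this (all values here are illustrative, not taken from any particular Space):

```yaml
---
title: My Demo          # Display title for the Space
emoji: 🚀               # Space emoji (emoji-only)
colorFrom: blue         # Thumbnail gradient start color
colorTo: green          # Thumbnail gradient end color
sdk: gradio             # gradio or streamlit
app_file: app.py        # Main application file, relative to the repo root
pinned: false           # Whether the Space stays on top of your list
---
```

Note that `sdk_version` is omitted here because, per the description above, it only applies to the `streamlit` SDK.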
diff --git a/spaces/duchaba/yml_hackathon_img_maggie/README.md b/spaces/duchaba/yml_hackathon_img_maggie/README.md deleted file mode 100644 index 696e811539dfa2963a5989ea0d50f1449cddbb55..0000000000000000000000000000000000000000 --- a/spaces/duchaba/yml_hackathon_img_maggie/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Yml Hackathon Img Maggie -emoji: 📚 -colorFrom: purple -colorTo: gray -sdk: gradio -sdk_version: 3.29.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/egvpprojects/Text-2-Speech/app.py b/spaces/egvpprojects/Text-2-Speech/app.py deleted file mode 100644 index b6eec63e7f1943f548683ec025f943bbf0d5ed39..0000000000000000000000000000000000000000 --- a/spaces/egvpprojects/Text-2-Speech/app.py +++ /dev/null @@ -1,95 +0,0 @@ -""" -Copyright 2022 Balacoon - -TTS interactive demo -""" - -import logging -from typing import cast - -import gradio as gr -from balacoon_tts import TTS -from huggingface_hub import hf_hub_download, list_repo_files - -# global tts module, initialized from a model selected -tts = None - - -def main(): - logging.basicConfig(level=logging.INFO) - - with gr.Blocks() as demo: - gr.Markdown( - """ -

      Text-to-Speech

      - - 1. Write an utterance to generate, - 2. Select the model to synthesize with - 3. Select the speaker - 4. Hit "Generate" and listen to the result! - - When you select a Model for the first time, - it will take a little time to download it. - """ - ) - with gr.Row(variant="panel"): - text = gr.Textbox(label="Text", placeholder="Insert your article here...") - - with gr.Row(): - with gr.Column(variant="panel"): - repo_files = list_repo_files(repo_id="balacoon/tts") - model_files = [x for x in repo_files if x.endswith("_cpu.addon")] - model_name = gr.Dropdown( - label="Model", - choices=model_files, - ) - with gr.Column(variant="panel"): - speaker = gr.Dropdown(label="Speaker", choices=[]) - - def set_model(model_name_str: str): - """ - gets value from `model_name`, loads model, - re-initializes tts object, gets list of - speakers that model supports and set them to `speaker` - """ - model_path = hf_hub_download( - repo_id="balacoon/tts", filename=model_name_str - ) - global tts - tts = TTS(model_path) - speakers = tts.get_speakers() - value = speakers[-1] - return gr.Dropdown.update( - choices=speakers, value=value, visible=True - ) - - model_name.change(set_model, inputs=model_name, outputs=speaker) - - with gr.Row(variant="panel"): - generate = gr.Button("Generate") - with gr.Row(variant="panel"): - audio = gr.Audio() - - def synthesize_audio(text_str: str, speaker_str: str = ""): - """ - gets utterance to synthesize from `text` Textbox - and speaker name from `speaker` dropdown list. - speaker name might be empty for single-speaker models. - Synthesizes the waveform and updates `audio` with it. - """ - if not text_str: - logging.info("text or speaker are not provided") - return None - global tts - if len(text_str) > 10024: - text_str = text_str[:10024] - samples = cast(TTS, tts).synthesize(text_str, speaker_str) - return gr.Audio.update(value=(cast(TTS, tts).get_sampling_rate(), samples)) - - generate.click(synthesize_audio, inputs=[text, speaker], outputs=audio) - - demo.launch() - - -if __name__ == "__main__": - main() diff --git a/spaces/ehristoforu/Ultrasdspace/app.py b/spaces/ehristoforu/Ultrasdspace/app.py deleted file mode 100644 index 35265f88d3d89930c4931dbd58ff6dfd5786df12..0000000000000000000000000000000000000000 --- a/spaces/ehristoforu/Ultrasdspace/app.py +++ /dev/null @@ -1,18 +0,0 @@ -#import library -import gradio as gr - -#title of app -title = "Stable Diffusion 1.5" - -#description of app -description = "This space is for generating images from text with the Stable Diffusion 1.5 model!" - -#article of app -article = """ -

      - CofAI Group -

      -""" - -gr.Interface.load("models/stabilityai/stable-diffusion-2-1", title=title, description=description, article=article).launch() -gr.Interface.load("models/runwayml/stable-diffusion-v1-5").launch() \ No newline at end of file diff --git a/spaces/esencb/web/README.md b/spaces/esencb/web/README.md deleted file mode 100644 index aaa989e88d235107d238b16130e4aad1f168b83c..0000000000000000000000000000000000000000 --- a/spaces/esencb/web/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Web -emoji: ⚡ -colorFrom: purple -colorTo: yellow -sdk: gradio -sdk_version: 3.14.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/eson/tokenizer-arena/vocab/gpt_nexo_20b/__init__.py b/spaces/eson/tokenizer-arena/vocab/gpt_nexo_20b/__init__.py deleted file mode 100644 index 8aa834fd80c937b10978b06d2d752dfe90320e7b..0000000000000000000000000000000000000000 --- a/spaces/eson/tokenizer-arena/vocab/gpt_nexo_20b/__init__.py +++ /dev/null @@ -1,24 +0,0 @@ - - - -def SimpleTokenizer(): - import os - from tokenizers import Tokenizer - - CURRENT_DIR = os.path.dirname(os.path.abspath(__file__)) - TOKENIZER_DIR = os.path.join(CURRENT_DIR, "20B_tokenizer.json") - tokenizer = Tokenizer.from_file(TOKENIZER_DIR) - tokenizer.vocab_size = tokenizer.get_vocab_size(with_added_tokens=True) - # vocab_size = len(tokenizer.get_vocab()) - # vocab_size = tokenizer.vocab_size - -from transformers import AutoTokenizer - -tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b") - -# "tokenizer_type": "HFTokenizer", # https://github.com/EleutherAI/gpt-neox/blob/v2.0/configs/20B.yml#L107 - -# tokenizer.vocab_size = tokenizer.get_vocab_size(with_added_tokens=True) - - - diff --git a/spaces/eson/tokenizer-arena/vocab/kplug/bpe_oov2.py b/spaces/eson/tokenizer-arena/vocab/kplug/bpe_oov2.py deleted file mode 100644 index 7071b53a8dc7bfdd510e984a196c964316b88ae4..0000000000000000000000000000000000000000 --- a/spaces/eson/tokenizer-arena/vocab/kplug/bpe_oov2.py +++ /dev/null @@ -1,9 +0,0 @@ -from zhon.hanzi import punctuation as zh_punc - -""" -这些zh_punc都是oov - -['"', '#', '$', '%', '&', ''', '*', '+', '-', '/', '<', '=', '>', '@', '[', '\', ']', - '^', '_', '`', '{', '}', '⦅', '⦆', '\u3000', '〘', '〙', '〚', '〛', '〟', '〰', '〾', '〿', '‛', '‟'] -""" - diff --git a/spaces/falterWliame/Face_Mask_Detection/If My Heart Had Wings -Flight Diary- - New Wings Akari Download Blackbox.md b/spaces/falterWliame/Face_Mask_Detection/If My Heart Had Wings -Flight Diary- - New Wings Akari Download Blackbox.md deleted file mode 100644 index a25d3c74abe9d240a42cf1860bdb62dad47aeb8b..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/If My Heart Had Wings -Flight Diary- - New Wings Akari Download Blackbox.md +++ /dev/null @@ -1,12 +0,0 @@ -

      If My Heart Had Wings -Flight Diary- - New Wings: Akari download blackbox


      Download Zip >>> https://urlca.com/2uDdYX



      -
      -... You MUST be a registered or logged-in user in order to view all comments. View this comment. Mar 20, 2011... Memorial Day is on Monday, May 28, 2011. Here is a list of soldiers that are now deceased. - -Rest in Peace.... CentCom,All Staff,Pace,The Family, CO, and all the others who are no longer with us. "No personnel should die out here." - Lt. Col. Chris Raines. Looking at this list I feel better. The ones that have been killed of cause. I will remember them. If you aren't a military person, or have never served this is the time to thank our service people for all that they do for us. There are service people that have families. The families of those that are left have no one to talk to about their loved one. I will do my part to help those families that need to talk. RIP, Brat. I will remember you.... 10 Army Staff Officers Killed in Helicopter Crash in Germany. By Sergeant Brent M. Runnels, US Army. '"˜Because of the severity of this crash, - -It is believed that all 10 involved personnel perished. The C-17 airlifted all Soldiers from Landstuhl Regional Medical Center in Germany to Ramstein Air Base. The crash site is in "southern Germany."... C-17 crash in Germany kills 10.... The crew of 10 people aboard the C-17 aircraft were killed in a crash of a US Army helicopter carrying soldiers in southern Germany, authorities said Friday, in a crash believed to be a result of a mechanical failure. The... Army CH-47 Chinook aircraft was flown from Ramstein Air Base in Germany to Landstuhl Regional Medical Center on Thursday, "and crashed near Landstuhl on the... Mar 29, 2011... C-17 crash in Germany kills 10.... The crew of 10 people aboard the C-17 aircraft were killed in a crash of a US Army helicopter carrying soldiers in southern Germany, authorities said Friday, in a crash believed to be a result of a mechanical failure. The... CH-47 Chinook aircraft was flown from Ramstein Air Base in Germany to Landstuhl Regional Medical Center on Thursday, "and crashed near Landstuhl on the... - -Chinook Airfield Hill -- Having spent the past four years deploying to the Middle East, I have seen the Army fly a few different types of aircraft. During 4fefd39f24
      -
      -
      -

      diff --git a/spaces/falterWliame/Face_Mask_Detection/Lennar Digital Sylenth1 VSTi 2.2.1.2 X86.md b/spaces/falterWliame/Face_Mask_Detection/Lennar Digital Sylenth1 VSTi 2.2.1.2 X86.md deleted file mode 100644 index ec3f4c31f8822da05df1d5cf621cf9b09ad1dcd5..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Lennar Digital Sylenth1 VSTi 2.2.1.2 X86.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Lennar Digital Sylenth1 VSTi 2.2.1.2 x86


      DOWNLOAD ✒ ✒ ✒ https://urlca.com/2uDcpQ



      -
      -Aaron Windholtz. Lennar Digital Sylenth1 VSTi 2.2.1.2 X86.rar. Fundraising for the youth of Great Britain. 0%. Give NowDonations cannot currently be made to this page. Share. Share with others if they do. Share for the youth of the UK. 0%. Give NowDonations cannot currently be made to this page. Share . Share with others if they do. Share for the youth of the UK. 0%. Give NowDonations cannot currently be made to this page. Share . Share with others if they do. Share for the youth of the UK. 8a78ff9644
      -
      -
      -

      diff --git a/spaces/fatiXbelha/sd/Disfruta de My Talking Angela 2 con dinero infinito y todo desbloqueado en este APK.md b/spaces/fatiXbelha/sd/Disfruta de My Talking Angela 2 con dinero infinito y todo desbloqueado en este APK.md deleted file mode 100644 index 9384f7fd79d17512529154a0c3ac2f2305b40fb0..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Disfruta de My Talking Angela 2 con dinero infinito y todo desbloqueado en este APK.md +++ /dev/null @@ -1,116 +0,0 @@ -
      -

      My Talking Angela 2 APK Dinero Infinito: How to Get Unlimited Coins and Diamonds in 2023

      -

Do you love playing with Angela the cat in My Talking Angela 2? Do you want to make her happy and stylish with the best clothes, accessories, furniture, food, and more? If yes, then you may need a lot of coins and diamonds in the game. But how can you get them easily and quickly? In this article, we will tell you everything you need to know about My Talking Angela 2 APK Dinero Infinito, a modded version of the game that gives you unlimited coins and diamonds. Read on to find out how to download and install it, what its pros and cons are, and the answers to some frequently asked questions.

      -

      my talking angela 2 apk dinero infinito


      Download ————— https://urllie.com/2uNBEM



      -

      What is My Talking Angela 2?

      -

      A virtual pet game with Angela the cat

      -

      My Talking Angela 2 is a virtual pet game developed by Outfit7 Limited. It is the sequel to the popular My Talking Angela game. In this game, you can adopt Angela as your own virtual pet and take care of her. You can feed her, bathe her, dress her up, play with her, sing with her, dance with her, and more. You can also explore different locations with her, such as her home, the city, the beach, the studio, etc.

      -

      Features and gameplay of My Talking Angela 2

      -

      My Talking Angela 2 has many features and gameplay options that make it fun and engaging. Some of them are:

      -
        -
      • You can customize Angela's appearance with hundreds of outfits, hairstyles, makeup, etc.
      • -
      • You can decorate Angela's home with various furniture, wallpapers, carpets, etc.
      • -
      • You can cook delicious food for Angela and feed her.
      • -
      • You can play mini-games with Angela and earn coins and diamonds.
      • -
      • You can watch videos with Angela and get rewards.
      • -
      • You can chat with Angela and she will repeat what you say in a cute voice.
      • -
      • You can record videos of your interactions with Angela and share them with your friends.
      • -
      • You can join events and challenges and win prizes.
      • -
      -

      Why do you need coins and diamonds in My Talking Angela 2?

      -

      Coins and diamonds are the main currencies in the game

      -

      In My Talking Angela 2, coins and diamonds are the main currencies that you can use to buy various items in the game. Coins are the basic currency that you can use to buy clothes, accessories, furniture, food, etc. Diamonds are the premium currency that you can use to buy special items or unlock new features.

      -

      You can use them to buy clothes, accessories, furniture, food, etc.

      -

      As mentioned above, you can use coins and diamonds to buy different items in the game. For example:

      -
| Item | Currency | Price |
| --- | --- | --- |
| Dress | Coins | 100-500 |
| Hat | Coins | 50-200 |
| Sofa | Coins | 300-800 |
| Pizza | Coins | 20-50 |
| Sunglasses | Diamonds | 10-50 |
| Guitar | Diamonds | 100-200 |
| Beach House | Diamonds | 500-1000 |
| Cake | Diamonds | 20-50 |
      -

      You can see that some items are more expensive than others, and some items require diamonds instead of coins. Therefore, you may need a lot of coins and diamonds to buy everything you want in the game.
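For example, buying a beach house (up to 1,000 diamonds) and a guitar (up to 200 diamonds) alone can cost more than a thousand diamonds, on top of the coins you spend on everyday items like food and clothes.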

      -

      How to get coins and diamonds in My Talking Angela 2?

      -

      You can earn them by playing mini-games, watching ads, completing tasks, etc.

      -

      The game offers you several ways to earn coins and diamonds for free. Some of them are:

      -

      my talking angela 2 mod apk monedas infinitas
      -descargar my talking angela 2 hackeado dinero ilimitado
      -my talking angela 2 apk mod diamantes infinitos
      -como tener dinero infinito en my talking angela 2
      -my talking angela 2 apk full dinero y diamantes
      -my talking angela 2 mod apk todo desbloqueado
      -my talking angela 2 hack apk monedas y diamantes
      -descargar my talking angela 2 gratis con dinero infinito
      -my talking angela 2 apk mod compras gratis
      -trucos para my talking angela 2 dinero infinito
      -my talking angela 2 mod apk ultima version dinero infinito
      -my talking angela 2 apk hackeado sin anuncios
      -descargar my talking angela 2 mod apk android
      -my talking angela 2 apk mod menu dinero infinito
      -como hackear my talking angela 2 para tener dinero infinito
      -my talking angela 2 mod apk unlimited money and diamonds
      -download my talking angela 2 mod apk money infinite
      -how to get infinite money in my talking angela 2
      -my talking angela 2 mod apk full money and diamonds
      -how to hack my talking angela 2 for infinite money
      -my talking angela 2 apk free download with infinite money
      -my talking angela 2 mod apk all unlocked
      -my talking angela 2 hack apk money and diamonds
      -download my talking angela 2 free with infinite money
      -my talking angela 2 apk mod free shopping
      -cheats for my talking angela 2 infinite money
      -my talking angela 2 mod apk latest version infinite money
      -my talking angela 2 apk hacked no ads
      -download my talking angela 2 mod apk android
      -my talking angela 2 apk mod menu infinite money

      -
        -
      • Playing mini-games: You can play various mini-games with Angela, such as dancing, baking, painting, etc. Each mini-game will reward you with coins and sometimes diamonds.
      • -
      • Watching ads: You can watch short video ads to get coins and diamonds. You can also get double rewards for watching ads after playing mini-games.
      • -
      • Completing tasks: You can complete daily tasks and achievements to get coins and diamonds. The tasks include feeding Angela, dressing her up, playing with her, etc.
      • -
      • Collecting stickers: You can collect stickers by opening gift boxes or buying them with coins. Each sticker will give you coins and sometimes diamonds.
      • -
      • Leveling up: You can level up by taking care of Angela and playing with her. Each level up will give you coins and sometimes diamonds.
      • -
      • Logging in: You can get coins and diamonds by logging in every day. The more days you log in consecutively, the more rewards you get.
      • -
      • Inviting friends: You can invite your friends to play the game and get coins and diamonds for each friend who joins.
      • -
      -

      You can also buy them with real money or use a modded version of the game

      -

      If you don't want to spend time or effort to earn coins and diamonds, you have two other options. You can either buy them with real money or use a modded version of the game. Buying them with real money is the official and legal way to get them. You can go to the shop in the game and choose the amount of coins and diamonds you want to buy. However, this may cost you a lot of money, especially if you want to buy a lot of them.

      -

      The other option is to use a modded version of the game, which is unofficial and illegal. A modded version of the game is a modified version that gives you unlimited coins and diamonds for free. You don't need to buy anything or do anything to get them. You just need to download and install the modded version of the game on your device. However, this may come with some risks, such as malware, ban, or loss of data.

      -

      What is My Talking Angela 2 MOD APK?

      -

      A modified version of the game that gives you unlimited coins and diamonds

      -

      My Talking Angela 2 MOD APK is a modified version of the game that gives you unlimited coins and diamonds for free. It is also known as My Talking Angela 2 APK Dinero Infinito, which means infinite money in Spanish. With this modded version of the game, you can enjoy unlimited shopping, customization, and fun without spending any money or doing any work.

      -

      How to download and install My Talking Angela 2 MOD APK

      -

      To download and install My Talking Angela 2 MOD APK, you need to follow these steps:

      -
        -
1. Download the My Talking Angela 2 MOD APK file from the website. You may need to allow installation from unknown sources in your device settings.
2. Locate the downloaded file in your device storage and tap on it to install it. You may need to uninstall the original version of the game first.
3. Wait for the installation to finish and then launch the game. You should see unlimited coins and diamonds in your account.
      -
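If you prefer to install the file from a computer instead of tapping it on the device, the standard Android adb tool can sideload it; this is a generic sideloading step, not something specific to this mod. Connect the device with USB debugging enabled and run `adb install <path-to-the-downloaded-apk>`, where the path is wherever your browser saved the file.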

      Congratulations, you have successfully installed My Talking Angela 2 MOD APK on your device. Now you can enjoy the game with unlimited resources and have fun with Angela.

      -

      Pros and cons of using My Talking Angela 2 MOD APK

      -

      Pros: You can enjoy unlimited shopping, customization, and fun

      -

      The main advantage of using My Talking Angela 2 MOD APK is that you can enjoy unlimited shopping, customization, and fun in the game. You can buy any item you want, dress up Angela in any way you like, decorate her home with any furniture you prefer, feed her any food you desire, and more. You can also play any mini-game without worrying about running out of coins or diamonds. You can make Angela happy and stylish with ease and have a great time with her.

      -

      Cons: You may face some risks such as malware, ban, or loss of data

      -

      The main disadvantage of using My Talking Angela 2 MOD APK is that you may face some risks such as malware, ban, or loss of data. Since the modded version of the game is not official or legal, it may contain viruses or malware that can harm your device or steal your personal information. You may also get banned from the game or lose your progress if the developers detect that you are using a modded version. Moreover, you may not be able to update the game or access some features that require an internet connection. Therefore, you should be careful and aware of the potential dangers of using My Talking Angela 2 MOD APK.

      -

      Conclusion

      -

      My Talking Angela 2 is a fun and entertaining virtual pet game that lets you adopt Angela as your own pet and take care of her. You can customize her appearance, decorate her home, play with her, chat with her, and more. However, you may need a lot of coins and diamonds to buy everything you want in the game. That's why some people use My Talking Angela 2 MOD APK, a modded version of the game that gives you unlimited coins and diamonds for free. However, this modded version also comes with some risks such as malware, ban, or loss of data. Therefore, you should weigh the pros and cons before deciding to use it.

      -

      FAQs

      -

      Here are some frequently asked questions about My Talking Angela 2 MOD APK:

      -
        -
      • Is My Talking Angela 2 MOD APK safe to use?
        -There is no definitive answer to this question, as different sources may provide different versions of the modded game. Some versions may be safe and clean, while others may be infected with malware or viruses. Therefore, you should always download My Talking Angela 2 MOD APK from a trusted and reliable website. You should also scan the file with an antivirus program before installing it on your device.
      • -
      • Is My Talking Angela 2 MOD APK legal to use?
        -No, My Talking Angela 2 MOD APK is not legal to use, as it violates the terms and conditions of the original game. By using a modded version of the game, you are breaking the rules and cheating the system. This may result in legal actions or penalties from the developers or authorities.
      • -
      • Will I get banned for using My Talking Angela 2 MOD APK?
        -There is a possibility that you may get banned for using My Talking Angela 2 MOD APK, as the developers may detect that you are using a modded version of the game. They may suspend or terminate your account or block your access to the game. Therefore, you should use My Talking Angela 2 MOD APK at your own risk and discretion.
      • -
      • Will I lose my progress if I use My Talking Angela 2 MOD APK?
  -There is a possibility that you may lose your progress if you use My Talking Angela 2 MOD APK, as the modded version of the game may not be compatible with the original version or the latest updates. You may also lose your progress if you uninstall the original version of the game or switch devices. Therefore, you should back up your data before using My Talking Angela 2 MOD APK.
• Can I update My Talking Angela 2 MOD APK?
  -It depends on the source and the version of the modded game. Some sources may provide updates for My Talking Angela 2 MOD APK, while others may not. Some versions may be compatible with the latest updates of the original game, while others may not. Therefore, you should check the website where you downloaded My Talking Angela 2 MOD APK for any updates or information. You should also be careful when updating the modded game, as it may overwrite your data or cause errors.
      -

      I hope this article has helped you understand more about My Talking Angela 2 MOD APK and how to get unlimited coins and diamonds in the game. If you have any questions or comments, feel free to leave them below. Thank you for reading and have a wonderful day!

      -
      -
      \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download ES Truck Simulator ID Mod APK and Drive Your Dream Truck in Indonesia.md b/spaces/fatiXbelha/sd/Download ES Truck Simulator ID Mod APK and Drive Your Dream Truck in Indonesia.md deleted file mode 100644 index 1665b46bdf0380b6b934f5488e4c615d4726145f..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download ES Truck Simulator ID Mod APK and Drive Your Dream Truck in Indonesia.md +++ /dev/null @@ -1,89 +0,0 @@ -
      -

      Download ES Truck Simulator ID Mod APK: A Fun and Realistic Truck Driving Game

      -

      Do you love driving trucks and exploring different places? Do you want to experience the thrill and challenge of transporting cargo across various terrains and roads? If yes, then you should try ES Truck Simulator ID, a simulation game that lets you drive various trucks and trailers in Indonesia. And if you want to enjoy the game with more features and benefits, you should download ES Truck Simulator ID Mod APK, a modified version of the game that gives you unlimited money, unlocked trucks and trailers, and no ads. In this article, we will tell you more about ES Truck Simulator ID and ES Truck Simulator ID Mod APK, and how to download and install them on your device.

      -

      download es truck simulator id mod apk


      Download »»» https://urllie.com/2uNIog



      -

      What is ES Truck Simulator ID?

      -

      ES Truck Simulator ID is a simulation game developed by Esproject, a game studio based in Indonesia. The game allows you to drive different types of trucks and trailers, such as container trucks, dump trucks, flatbed trucks, fuel tankers, and more. You can customize your truck with various accessories, such as horns, lights, exhausts, bumpers, and stickers. You can also choose from different control options, such as steering wheel, buttons, or tilt. You can adjust the camera angle to suit your preference, whether you want to see the road from the driver's seat, the front bumper, or the top view.

      -

      The game features realistic graphics and physics that make you feel like you are driving a real truck. You can see the details of the truck's interior and exterior, as well as the environment around you. You can also hear the sound of the engine, the brakes, the horn, and the traffic. The game also simulates the weather conditions, such as rain, fog, night, and day. You can see how your truck reacts to different road surfaces, such as asphalt, dirt, gravel, or mud.

      -

      The game offers various missions and routes that challenge your driving skills and knowledge. You have to transport different kinds of cargo from one place to another, following the traffic rules and avoiding accidents. You have to deal with narrow roads, sharp turns, steep hills, bridges, tolls, traffic jams, and more. You have to manage your fuel level, your speed, your cargo weight, and your delivery time. You can earn money by completing missions and use it to buy new trucks or upgrade your existing ones.

      -

      The game also has an online multiplayer mode where you can compete with other players around the world. You can join or create a room with up to 10 players and race against each other on various maps. You can chat with other players using voice or text messages. You can also see the leaderboard and rankings of other players.

      -

      Features of ES Truck Simulator ID

      -

      - Realistic graphics and physics

      -

      The game has high-quality graphics that show the details of the trucks and trailers, as well as the environment around them. The game also has realistic physics that simulate how the trucks behave on different road surfaces and weather conditions.

      -

      - Various trucks and trailers to choose from

      -

The game has a wide range of trucks and trailers that you can drive. You can choose from container trucks, dump trucks, flatbed trucks, fuel tankers, and more. Each truck has its own characteristics, such as speed, power, fuel consumption, cargo capacity, etc.

      -

      - Customizable controls and camera angles

      The game allows you to customize your controls and camera angles according to your preference. You can choose from different control options, such as steering wheel, buttons, or tilt. You can also adjust the camera angle to suit your view, whether you want to see the road from the driver's seat, the front bumper, or the top view.


      - Challenging missions and routes


As described in the overview, the missions are the heart of the game: you haul cargo along narrow roads, sharp turns, steep hills, bridges, and toll booths while managing your fuel level, speed, cargo weight, and delivery time, and the money you earn pays for new trucks and upgrades.



      - Online multiplayer mode


As mentioned above, the online multiplayer mode lets you join or create a room with up to 10 players, race against them on various maps, chat by voice or text, and follow the leaderboard and rankings.


      What is ES Truck Simulator ID Mod APK?


      ES Truck Simulator ID Mod APK is a modified version of the original game that gives you more features and benefits. By downloading and installing this mod apk, you can enjoy the game with unlimited money, unlocked trucks and trailers, and no ads. This means that you can buy and upgrade any truck or trailer you want without worrying about the cost. You can also access all the trucks and trailers in the game without having to unlock them by completing missions. And you can play the game without any interruptions from annoying ads.


      Benefits of ES Truck Simulator ID Mod APK


      - Unlimited money to buy and upgrade trucks


      With ES Truck Simulator ID Mod APK, you will have unlimited money in your account that you can use to buy and upgrade any truck or trailer you want. You can customize your truck with various accessories, such as horns, lights, exhausts, bumpers, and stickers. You can also improve your truck's performance by upgrading its engine, transmission, brakes, tires, suspension, etc.


      - All trucks and trailers unlocked


With ES Truck Simulator ID Mod APK, you will have access to all the trucks and trailers in the game, from container trucks and dump trucks to flatbeds and fuel tankers, without having to unlock them by completing missions.


      - No ads to interrupt your gameplay


      With ES Truck Simulator ID Mod APK, you will not see any ads in the game that might distract you from your gameplay. You can play the game without any interruptions or delays caused by ads.


      How to download and install ES Truck Simulator ID Mod APK?


      If you want to download and install ES Truck Simulator ID Mod APK on your device, you need to follow some simple steps. Here are the step-by-step guides for Android devices and PC devices.


      Step-by-step guide for Android devices

1. First of all, you need to enable the installation of apps from unknown sources on your device. To do this, go to Settings > Security > Unknown Sources and toggle it on.
2. Next, download the ES Truck Simulator ID Mod APK file from a trusted source.
3. Once the download is complete, locate the file in your device's file manager and tap on it to start the installation process.
4. Follow the instructions on the screen and wait for the installation to finish.
5. After the installation is done, you can launch the game from your app drawer or home screen and enjoy it with unlimited money, unlocked trucks and trailers, and no ads. (If you prefer the command line, a scripted version of steps 3-4 is sketched below.)
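For readers comfortable with a terminal, the sideload can also be scripted. This is a minimal sketch, not an official installer: it assumes the Android Debug Bridge (adb) is installed on your computer and USB debugging is enabled on the phone, and the APK file name is a placeholder you would replace with your actual download.

```python
import subprocess

# Hypothetical file name - replace with the APK you actually downloaded.
APK = "es-truck-simulator-id-mod.apk"

def install(apk_path: str) -> None:
    # List connected devices first so a missing device fails loudly.
    subprocess.run(["adb", "devices"], check=True)
    # -r reinstalls over an existing copy while keeping its data.
    subprocess.run(["adb", "install", "-r", apk_path], check=True)

if __name__ == "__main__":
    install(APK)
```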

      Step-by-step guide for PC devices

1. If you want to play ES Truck Simulator ID Mod APK on your PC, you first need an Android emulator, which is software that runs Android apps on a PC. Many emulators are available online, such as BlueStacks, NoxPlayer, and LDPlayer; choose one according to your preference.
2. After installing the emulator, download the ES Truck Simulator ID Mod APK file from a trusted source.
3. Once the download is complete, locate the file on your PC, right-click on it, and choose the option to open it with your Android emulator.
4. The emulator will launch and install the ES Truck Simulator ID Mod APK file.
5. After the installation is done, you can open the game from the emulator and enjoy it with unlimited money, unlocked trucks and trailers, and no ads. (A command-line variant of this install is sketched below.)
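The adb sketch from the Android guide can reach an emulator too. Treat the port below as an assumption: many emulators (BlueStacks, for example) can expose an adb endpoint, but the exact port differs by emulator and usually has to be enabled in its settings first.

```python
import subprocess

# Connect to the emulator's adb endpoint (assumed port - check your
# emulator's settings), then sideload the APK exactly as on a phone.
subprocess.run(["adb", "connect", "localhost:5555"], check=True)
subprocess.run(["adb", "install", "-r", "es-truck-simulator-id-mod.apk"], check=True)
```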

      Conclusion


      ES Truck Simulator ID is a fun and realistic truck driving game that lets you drive various trucks and trailers in Indonesia. You can customize your truck, choose your control and camera options, complete challenging missions and routes, and compete with other players online. If you want to enjoy the game with more features and benefits, you should download ES Truck Simulator ID Mod APK, a modified version of the game that gives you unlimited money, unlocked trucks and trailers, and no ads. You can download and install ES Truck Simulator ID Mod APK on your Android or PC device by following the step-by-step guides above. So, what are you waiting for? Download ES Truck Simulator ID Mod APK now and start your truck driving adventure!


      FAQs


      Here are some frequently asked questions about ES Truck Simulator ID and ES Truck Simulator ID Mod APK.

1. Is ES Truck Simulator ID free to play?

   Yes, ES Truck Simulator ID is free to play. However, it contains in-app purchases that let you buy in-game money or remove ads. If you want to enjoy the game without spending any money or seeing any ads, you should download ES Truck Simulator ID Mod APK.

2. Is ES Truck Simulator ID safe to play?

   Yes, ES Truck Simulator ID is safe to play. It does not contain any viruses or malware that might harm your device. However, you should always download the game from a trusted source, such as the Google Play Store or the official website of Esproject, and apply the same care when downloading the mod APK.

3. Is ES Truck Simulator ID offline or online?

   It can be played both offline and online. Offline, without an internet connection, you cannot access the online multiplayer mode or update the game. Online, you will need to watch ads or buy in-game money to unlock more trucks and trailers, unless you use the mod APK.

4. How can I contact the developer of ES Truck Simulator ID?

   If you have any questions, suggestions, or feedback, you can email the developer at esproject.id@gmail.com or follow their accounts on Facebook, Instagram, and YouTube.

5. What are some similar games to ES Truck Simulator ID?

   If you like ES Truck Simulator ID, you might also like Euro Truck Simulator 2, American Truck Simulator, Grand Truck Simulator 2, World Truck Driving Simulator, or Heavy Bus Simulator.

      \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download Goa Songs - Enjoy the Fun and Festive Music of Goa with Goan Bands and Comedians.md b/spaces/fatiXbelha/sd/Download Goa Songs - Enjoy the Fun and Festive Music of Goa with Goan Bands and Comedians.md deleted file mode 100644 index 989929b62cc40e936966fc211a5befa637ec156a..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download Goa Songs - Enjoy the Fun and Festive Music of Goa with Goan Bands and Comedians.md +++ /dev/null @@ -1,90 +0,0 @@ - -

      Download Goa Songs: A Guide for Music Lovers


      Goa is a small state in India that is famous for its beaches, nightlife, and culture. But Goa is also known for its unique and diverse music scene that attracts millions of tourists and locals every year. Goa songs are a blend of Indian and Western influences that create a vibrant and eclectic sound. Whether you are looking for psychedelic trance, catchy pop, or rock anthems, Goa has something for everyone.


      Types of Goa Songs


      Goa songs can be broadly classified into three categories: Goa trance, Goa pop, and Goa rock. Each category has its own characteristics and appeal.


      Goa Trance


      Goa trance is an electronic dance music genre that originated in the early 1990s in Goa. It is characterized by fast tempo, hypnotic melodies, complex rhythms, and psychedelic effects. Goa trance often incorporates elements of Indian classical music, such as sitar, tabla, and flute. Some of the famous artists of Goa trance are Astral Projection, Infected Mushroom, Juno Reactor, and Shpongle.


      Goa Pop


      Goa pop is a fusion of Indian and Western pop music that emerged in the late 1990s and early 2000s. It is influenced by genres such as reggae, hip hop, R&B, and disco. Goa pop is catchy, upbeat, and fun. Some of the popular artists of Goa pop are Remo Fernandes, Lucky Ali, Shaan, and Sunidhi Chauhan.


      Goa Rock


      Goa rock is a style of rock music that combines elements of Indian folk music, Western rock music, and Portuguese music. It is influenced by bands such as The Beatles, Led Zeppelin, Pink Floyd, and U2. Goa rock is energetic, rebellious, and expressive. Some of the notable bands of Goa rock are Indus Creed, Pentagram, Parikrama, and Indian Ocean.



      How to Download Goa Songs


      If you want to enjoy Goa songs on your devices, you have two options: download them legally or illegally. Both options have their advantages and disadvantages.


      Legal Downloading


      Legal downloading means paying for the songs or streaming them from authorized platforms. This way, you can support the artists and respect their intellectual property rights. You can also get high-quality audio files and avoid viruses or malware. However, legal downloading can be expensive and time-consuming. You may also face geo-restrictions or limited availability of some songs.


      Illegal Downloading


      Illegal downloading means getting the songs for free from unauthorized sources such as torrent sites or file-sharing networks. This way, you can save money and time. You can also access a wider range of songs from different regions and genres. However, illegal downloading can be risky and unethical. You may face legal consequences or fines for violating the copyright laws. You may also get low-quality audio files or infected files that can harm your devices.


      Best Sources for Downloading Goa Songs


      If you decide to download Goa songs legally, here are some of the best sources that you can use:

• Websites: Many websites offer Goa songs for download or streaming, such as Gaana.com, Saavn.com, Hungama.com, and Spotify.com. You can browse their collections or search by artist name or song title.
• Apps: Many apps let you download or stream Goa songs on your mobile devices, such as Gaana, Saavn, Hungama, and Spotify, available from the Google Play Store or the Apple App Store.
• Podcasts: Podcasts are audio shows that you can listen to online or offline. Some feature Goa songs or interviews with Goa artists, such as Goa Trance Radio, Goa Pop Podcast, and Goa Rock Show. You can subscribe to them on platforms such as iTunes, Stitcher, and SoundCloud.

      Conclusion


      Goa songs are a great way to experience the rich and diverse musical culture of Goa. They can make you feel happy, relaxed, or energized. You can download Goa songs legally or illegally, depending on your preference and budget. However, you should always be careful and respectful of the artists and their rights. You can use various sources such as websites, apps, or podcasts to find and download Goa songs of your choice.


      FAQs


      Here are some frequently asked questions about downloading Goa songs:


      What is the best format for downloading Goa songs?


The best format for downloading Goa songs depends on your device and your preference. Generally, MP3 is the most common and compatible format and works on most devices. If you want higher fidelity, you can opt for lossless FLAC, while AAC or OGG can give similar quality to MP3 at smaller file sizes.
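If you end up with a file in one format and want another, converting is a one-liner around ffmpeg. This is just a sketch assuming ffmpeg is installed and on your PATH; the input file name is a placeholder.

```python
import subprocess

def to_mp3(src: str, bitrate: str = "192k") -> None:
    # libmp3lame is ffmpeg's MP3 encoder; -b:a sets the audio bitrate.
    out = src.rsplit(".", 1)[0] + ".mp3"
    subprocess.run(
        ["ffmpeg", "-i", src, "-codec:a", "libmp3lame", "-b:a", bitrate, out],
        check=True,
    )

to_mp3("goa_song.flac")  # placeholder input file
```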


      How can I download Goa songs for free legally?


      There are some ways to download Goa songs for free legally. For example, you can use streaming services that offer free trials or ad-supported plans, such as Spotify or Gaana. You can also use websites that offer free downloads of songs that are in the public domain or under creative commons licenses, such as Jamendo.com or FreeMusicArchive.org. However, you should always check the terms and conditions of these services before downloading.


      How can I avoid viruses or malware when downloading Goa songs?


      You can avoid viruses or malware when downloading Goa songs by following some precautions. For example, you should always use a reliable antivirus software and scan your files before opening them. You should also avoid clicking on suspicious links or pop-ups that may redirect you to malicious sites. You should also only download from trusted sources and avoid peer-to-peer networks or torrent sites.
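One concrete version of these precautions is to verify a download against a checksum published by the source, when one is available. The sketch below uses Python's standard hashlib module; both the file name and the expected digest are placeholders.

```python
import hashlib

# Placeholder digest - use the SHA-256 value published by the download site.
EXPECTED = "0" * 64

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in 64 KiB chunks so large files don't load into memory at once.
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

if sha256_of("goa_songs.zip") != EXPECTED:
    raise SystemExit("checksum mismatch - do not open this file")
```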


      How can I support the Goa artists when downloading their songs?


      You can support the Goa artists when downloading their songs by paying for their music or streaming it from authorized platforms. You can also follow them on social media, share their music with your friends, or attend their live shows. You can also buy their merchandise or donate to their causes.


      How can I discover new Goa songs or artists?


      You can discover new Goa songs or artists by using various methods. For example, you can use music discovery tools such as Shazam or SoundHound to identify the songs or artists that you hear. You can also use music recommendation services such as Pandora or Last.fm to find similar songs or artists based on your taste. You can also read music blogs, magazines, or reviews that feature Goa music.

      \ No newline at end of file diff --git a/spaces/fatmacankara/ASCARIS/code/uniprotSequenceMatch.py b/spaces/fatmacankara/ASCARIS/code/uniprotSequenceMatch.py deleted file mode 100644 index 53e34029f09fa76faab78a61a1416f773a0861ac..0000000000000000000000000000000000000000 --- a/spaces/fatmacankara/ASCARIS/code/uniprotSequenceMatch.py +++ /dev/null @@ -1,40 +0,0 @@ -from add_sequence import * -import pandas as pd -import numpy as np - -def uniprotSequenceMatch(data): - print('Retrieving UniProt sequences...\n') - - canonical_fasta = pd.DataFrame(columns=['uniprotID', 'uniprotSequence']) - up_list = list(set(data['uniprotID'].to_list())) - for i in range(len(up_list)): - canonical_fasta.at[i, 'uniprotSequence'] = get_uniprot_seq(up_list[i]) - canonical_fasta.at[i, 'uniprotID'] = up_list[i] - - canonical_fasta = canonical_fasta.drop_duplicates() - isoform_fasta = pd.DataFrame(columns=['uniprotID', 'isoformSequence']) - iso_dict = [] - for i in range(len(up_list)): - iso_dict.append(get_isoforms(up_list[i])) - - index = 0 - for i in iso_dict: - for key, val in i.items(): - isoform_fasta.at[index, 'uniprotID'] = key - isoform_fasta.at[index, 'isoformSequence'] = val - index += 1 - isoform_fasta = isoform_fasta.drop_duplicates() - - for i in isoform_fasta.index: - isoform_fasta.at[i, 'whichIsoform'] = isoform_fasta.at[i, 'uniprotID'][7:10].strip() - isoform_fasta.at[i, 'uniprotID'] = isoform_fasta.at[i, 'uniprotID'][0:6] - print('Sequence files created...\n') - - data = data.merge(canonical_fasta, on='uniprotID', how='left') - data = data.replace({'': np.NaN, 'nan': np.NaN}) - data['whichIsoform'] = np.NaN - data['wt_sequence_match'] = np.NaN - not_match_in_uniprot = data[data.uniprotSequence.isna()] - uniprot_matched = data[~data.uniprotSequence.isna()] - - return not_match_in_uniprot, uniprot_matched, canonical_fasta, isoform_fasta diff --git a/spaces/faunxs233/zidunuer-bing/Dockerfile b/spaces/faunxs233/zidunuer-bing/Dockerfile deleted file mode 100644 index 8b9a421edc253855a8adc05229eb8981a3e047c1..0000000000000000000000000000000000000000 --- a/spaces/faunxs233/zidunuer-bing/Dockerfile +++ /dev/null @@ -1,33 +0,0 @@ -# Build Stage -# 使用 golang:alpine 作为构建阶段的基础镜像 -FROM golang:alpine AS builder - -# 添加 git,并且清除缓存🧹 -RUN apk --no-cache add git && \ - git clone https://github.com/Harry-zklcdc/go-proxy-bingai.git /workspace/app && \ - apk del git - -# 设置工作目录 -WORKDIR /workspace/app - -# 编译 go 项目 -RUN go build -ldflags="-s -w" -tags netgo -trimpath -o go-proxy-bingai main.go - -# Runtime Stage -# 使用轻量级的 alpine 镜像🪞 -FROM alpine - -# 设置工作目录💼 -WORKDIR /workspace/app - -# 从构建阶段复制编译后的二进制文件👔 -COPY --from=builder /workspace/app/go-proxy-bingai . 
- -# (可选)设置环境变量✍️ -ENV Go_Proxy_BingAI_USER_TOKEN_1="VhdyhGjbdfvhdtixkgxit7Hcigct353" - -# 端口 -EXPOSE 8080 - -# 容器运行✅ -CMD ["/workspace/app/go-proxy-bingai"] diff --git a/spaces/fclong/summary/fengshen/examples/pegasus/data_utils.py b/spaces/fclong/summary/fengshen/examples/pegasus/data_utils.py deleted file mode 100644 index 879798749bc06d6857c01ec101baf5f3fb61d012..0000000000000000000000000000000000000000 --- a/spaces/fclong/summary/fengshen/examples/pegasus/data_utils.py +++ /dev/null @@ -1,319 +0,0 @@ -# -*- coding: utf-8 -*- - -import re -import six -import unicodedata -import torch -import rouge -import numpy as np -import random -# from fengshen.examples.pegasus.pegasus_utils import text_segmentate -import sys - -sys.path.append('../../../') - -rouge = rouge.Rouge() - - -is_py2 = six.PY2 - -if not is_py2: - basestring = str - - -def _is_chinese_char(cp): - """Checks whether CP is the codepoint of a CJK character.""" - # This defines a "chinese character" as anything in the CJK Unicode block: - # https://en.wikipedia.org/wiki/CJK_Unified_Ideographs_(Unicode_block) - # - # Note that the CJK Unicode block is NOT all Japanese and Korean characters, - # despite its name. The modern Korean Hangul alphabet is a different block, - # as is Japanese Hiragana and Katakana. Those alphabets are used to write - # space-separated words, so they are not treated specially and handled - # like the all of the other languages. - if ((cp >= 0x4E00 and cp <= 0x9FFF) or (cp >= 0x3400 and cp <= 0x4DBF) - or (cp >= 0x20000 and cp <= 0x2A6DF) - or (cp >= 0x2A700 and cp <= 0x2B73F) - or (cp >= 0x2B740 and cp <= 0x2B81F) - or (cp >= 0x2B820 and cp <= 0x2CEAF) - or (cp >= 0xF900 and cp <= 0xFAFF) - or (cp >= 0x2F800 and cp <= 0x2FA1F)): - return True - - return False - - -def _is_whitespace(char): - """Checks whether `char` is a whitespace character.""" - # \t, \n, and \r are technically control characters but we treat them - # as whitespace since they are generally considered as such. - if char == " " or char == "\t" or char == "\n" or char == "\r": - return True - cat = unicodedata.category(char) - if cat == "Zs": - return True - return False - - -def _is_control(char): - """Checks whether `char` is a control character.""" - # These are technically control characters but we count them as whitespace - # characters. - if char == "\t" or char == "\n" or char == "\r": - return False - cat = unicodedata.category(char) - if cat.startswith("C"): - return True - return False - - -def _is_punctuation(char): - """Checks whether `char` is a punctuation character.""" - cp = ord(char) - # We treat all non-letter/number ASCII as punctuation. - # Characters such as "^", "$", and "`" are not in the Unicode - # Punctuation class but we treat them as punctuation anyways, for - # consistency. 
- if (cp >= 33 and cp <= 47) or (cp >= 58 and cp <= 64) or ( - cp >= 91 and cp <= 96) or (cp >= 123 and cp <= 126): - return True - cat = unicodedata.category(char) - if cat.startswith("P"): - return True - return False - - -def is_string(s): - """判断是否是字符串 - """ - return isinstance(s, basestring) - - -def is_stopwords(word, stopwords): - if word in stopwords: - return True - else: - return False - - -def text_segmentate(text): - en_seg_pattern = '((?:\\!|\\?|\\.|\\n)+(?:\\s)+)' - ch_seg_pattern = '((?:?|!|。|\\n)+)' - try: - text = re.sub(en_seg_pattern, r'\1[SEP]', text) - # print("sub text: ", text) - except Exception as e: - print("input: ", text) - raise e - text = re.sub(ch_seg_pattern, r'\1[SEP]', text) - # print("sub ch text: ", text) - text_list = text.split("[SEP]") - text_list = list(filter(lambda x: len(x) != 0, text_list)) - return text_list - - -def load_stopwords(stopwords_path): - stopwords_dict = {} - with open(stopwords_path, "r") as rf: - for line in rf: - line = line.strip() - if line not in stopwords_dict: - stopwords_dict[line] = 0 - else: - pass - return stopwords_dict - - -def text_process(text, max_length): - """分割文本 - """ - texts = text_segmentate(text) - - result, length = [], 0 - for text in texts: - if length + len(text) > max_length * 1.3 and len(result) >= 3: - yield result - result, length = [], 0 - result.append(text) - length += len(text) - if result and len(result) >= 3: - yield result - - -def text_process_split_long_content(text, max_length): - """分割长文本 - """ - texts = text_segmentate(text) - - result, sentence_num = "", 0 - for text in texts: - if len(text) > 500: - if len(result) > 300 and sentence_num >= 3: - yield result - result, sentence_num = "", 0 - else: - result, sentence_num = "", 0 - continue - else: - if len(result) + len(text) > max_length * 1.1 and sentence_num >= 3: - yield result - result, sentence_num = "", 0 - result += text - sentence_num += 1 - - if result and sentence_num >= 3: - yield result - - -def gather_join(texts, idxs): - """取出对应的text,然后拼接起来 - """ - return ''.join([texts[i] for i in idxs]) - - -def gather_join_f1(texts_token, idsx): - join_texts = [] - for id in idsx: - join_texts.extend(texts_token[id]) - return join_texts - - -def compute_rouge(source, target): - """计算rouge-1、rouge-2、rouge-l - """ - source, target = ' '.join(source), ' '.join(target) - try: - scores = rouge.get_scores(hyps=source, refs=target) - return { - 'rouge-1': scores[0]['rouge-1']['f'], - 'rouge-2': scores[0]['rouge-2']['f'], - 'rouge-l': scores[0]['rouge-l']['f'], - } - except ValueError: - return { - 'rouge-1': 0.0, - 'rouge-2': 0.0, - 'rouge-l': 0.0, - } - - -def remove_stopwords(texts, stopwords_dict): - for i, text in enumerate(texts): - texts[i] = list(filter(lambda x: x not in stopwords_dict, text)) - return texts - - -def pseudo_summary_f1(texts, - stopwords, - tokenizer, - max_length, - rouge_strategy="rouge-l"): - """构建伪标签摘要数据集 - """ - summary_rate = 0.25 - max_length = max_length - 1 - texts_tokens = [] - sentece_idxs_vec = [] - for text in texts: - if len(texts) == 0: - continue - try: - ids = tokenizer.encode(text.strip())[:-1] - except ValueError: - print("error, input : ", text) - raise ValueError - sentece_idxs_vec.append(ids) - tokens = [tokenizer._convert_id_to_token(token) for token in ids] - texts_tokens.append(tokens) - - texts_tokens_rm = remove_stopwords(texts_tokens, stopwords) - source_idxs, target_idxs = list(range(len(texts))), [] - - assert len(texts_tokens) == len(texts) - # truncate_index = 0 - while True: - sims = [] - 
for i in source_idxs: - new_source_idxs = [j for j in source_idxs if j != i] - new_target_idxs = sorted(target_idxs + [i]) - new_source = gather_join_f1(texts_tokens_rm, new_source_idxs) - new_target = gather_join_f1(texts_tokens_rm, new_target_idxs) - sim = compute_rouge(new_source, new_target)[rouge_strategy] - sims.append(sim) - new_idx = source_idxs[np.argmax(sims)] - del sims - source_idxs.remove(new_idx) - target_idxs = sorted(target_idxs + [new_idx]) - source = gather_join(texts, source_idxs) - target = gather_join(texts, target_idxs) - try: - if (len(source_idxs) == 1 - or 1.0 * len(target) / len(source) > summary_rate): - break - except ZeroDivisionError as e: - print(e.meesage) - print(texts) - print("source: ", source) - print("target: ", target) - - if len(source) < len(target): - source, target = target, source - source_idxs, target_idxs = target_idxs, source_idxs - - return sentece_idxs_vec, source, target, source_idxs, target_idxs - - -def get_input_mask(sentence_id_vec, indexs): - target_idxs = [] - input_idxs = [] - kMaskSentenceTokenId = 2 - kEosTokenId = 1 - mask_sentence_options_cumulative_prob = [0.9, 0.9, 1, 1] - for index in indexs: - target_idxs.extend(sentence_id_vec[index]) - choice = random.uniform(0, 1) - if choice < mask_sentence_options_cumulative_prob[0]: - # print("mask index: ", index) - sentence_id_vec[index] = [kMaskSentenceTokenId] - elif choice < mask_sentence_options_cumulative_prob[1]: - # print("replace index: ", index) - replace_id = random.randint(0, len(sentence_id_vec)) - sentence_id_vec[index] = sentence_id_vec[replace_id] - elif choice < mask_sentence_options_cumulative_prob[2]: - pass - else: - sentence_id_vec[index] = [] - - target_idxs.append(kEosTokenId) - # print(sentence_id_vec) - for index, sentence_id in enumerate(sentence_id_vec): - # print(index, sentence_id) - if len(sentence_id) == 0: - continue - input_idxs.extend(sentence_id_vec[index]) - - input_idxs.append(kEosTokenId) - return input_idxs, target_idxs - - -def shift_tokens_right(input_ids: torch.Tensor, pad_token_id: int, - decoder_start_token_id: int): - """ - Shift input ids one token to the right. - """ - shifted_input_ids = input_ids.new_zeros(input_ids.shape) - shifted_input_ids[:, 1:] = input_ids[:, :-1].clone() - shifted_input_ids[:, 0] = decoder_start_token_id - - if pad_token_id is None: - raise ValueError("self.model.config.pad_token_id has to be defined.") - # replace possible -100 values in labels by `pad_token_id` - shifted_input_ids.masked_fill_(shifted_input_ids == -100, pad_token_id) - - return shifted_input_ids - - -def padding_to_maxlength(ids, max_length, pad_id): - cur_len = len(ids) - len_diff = max_length - cur_len - return ids + [pad_id] * len_diff, [1] * cur_len + [0] * len_diff diff --git a/spaces/fclong/summary/fengshen/examples/zen2_finetune/ner_zen2_large_msra.sh b/spaces/fclong/summary/fengshen/examples/zen2_finetune/ner_zen2_large_msra.sh deleted file mode 100644 index cef8f1f70babc94ed77dc585fbba47f5b45ff7a5..0000000000000000000000000000000000000000 --- a/spaces/fclong/summary/fengshen/examples/zen2_finetune/ner_zen2_large_msra.sh +++ /dev/null @@ -1,91 +0,0 @@ -#!/bin/bash -#SBATCH --job-name=zen2_large_msra # create a short name for your job -#SBATCH --nodes=1 # node count -#SBATCH --ntasks=1 # total number of tasks across all nodes -#SBATCH --cpus-per-task=30 # cpu-cores per task (>1 if multi-threaded tasks) -#SBATCH --gres=gpu:1 # number of gpus per node -#SBATCH --mail-type=ALL # send email when job begins, ends or failed etc. 
-#SBATCH -o /cognitive_comp/ganruyi/experiments/ner_finetune/zen2_large_msra/%x-%j.log # output and error file name (%x=job name, %j=job id) - - -# export CUDA_VISIBLE_DEVICES='2' -export TORCH_EXTENSIONS_DIR=/cognitive_comp/ganruyi/tmp/torch_extendsions - -MODEL_NAME=zen2_large - -TASK=msra - -ZERO_STAGE=1 -STRATEGY=deepspeed_stage_${ZERO_STAGE} - -ROOT_DIR=/cognitive_comp/ganruyi/experiments/ner_finetune/${MODEL_NAME}_${TASK} -if [ ! -d ${ROOT_DIR} ];then - mkdir -p ${ROOT_DIR} - echo ${ROOT_DIR} created!!!!!!!!!!!!!! -else - echo ${ROOT_DIR} exist!!!!!!!!!!!!!!! -fi - -DATA_DIR=/cognitive_comp/lujunyu/data_zh/NER_Aligned/MSRA/ -PRETRAINED_MODEL_PATH=/cognitive_comp/ganruyi/hf_models/zen/zh_zen_large_2.0 - -CHECKPOINT_PATH=${ROOT_DIR}/ckpt/ -OUTPUT_PATH=${ROOT_DIR}/predict.json - -DATA_ARGS="\ - --data_dir $DATA_DIR \ - --train_data train_dev.char.bmes \ - --valid_data test.char.bmes \ - --test_data test.char.bmes \ - --train_batchsize 16 \ - --valid_batchsize 16 \ - --max_seq_length 256 \ - --task_name msra \ - " - -MODEL_ARGS="\ - --learning_rate 3e-5 \ - --weight_decay 0.1 \ - --warmup_ratio 0.01 \ - --markup bioes \ - --middle_prefix M- \ - " - -MODEL_CHECKPOINT_ARGS="\ - --monitor val_f1 \ - --save_top_k 3 \ - --mode max \ - --every_n_train_steps 800 \ - --save_weights_only True \ - --dirpath $CHECKPOINT_PATH \ - --filename model-{epoch:02d}-{val_f1:.4f} \ - " - -TRAINER_ARGS="\ - --max_epochs 30 \ - --gpus 1 \ - --check_val_every_n_epoch 1 \ - --val_check_interval 800 \ - --default_root_dir $ROOT_DIR \ - " - - -options=" \ - --pretrained_model_path $PRETRAINED_MODEL_PATH \ - --vocab_file $PRETRAINED_MODEL_PATH/vocab.txt \ - --do_lower_case \ - --output_save_path $OUTPUT_PATH \ - $DATA_ARGS \ - $MODEL_ARGS \ - $MODEL_CHECKPOINT_ARGS \ - $TRAINER_ARGS \ -" -SCRIPT_PATH=/cognitive_comp/ganruyi/Fengshenbang-LM/fengshen/examples/zen2_finetune/fengshen_token_level_ft_task.py -/home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options - -# SINGULARITY_PATH=/cognitive_comp/ganruyi/pytorch21_06_py3_docker_image_v2.sif -# python3 $SCRIPT_PATH $options -# source activate base -# singularity exec --nv -B /cognitive_comp/:/cognitive_comp/ $SINGULARITY_PATH /home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options -# /home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options - diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/1 Beat 100 Songs MP3 Download - The Ultimate Mashup Collection.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/1 Beat 100 Songs MP3 Download - The Ultimate Mashup Collection.md deleted file mode 100644 index 1f4f9cbe7660b132a83b0d178f1a27c4c8badc73..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/1 Beat 100 Songs MP3 Download - The Ultimate Mashup Collection.md +++ /dev/null @@ -1,108 +0,0 @@ - -

      1 Beat 100 Songs MP3 Download: How to Enjoy Music for Free


      Music is one of the best ways to relax, entertain, and express yourself. But sometimes, finding and downloading your favorite songs can be a hassle. You may have to pay for a subscription, deal with annoying ads, or risk getting viruses from shady websites. That's why many music lovers are looking for a better way to enjoy music for free. One of the most popular options is 1 beat 100 songs MP3 download.


      Introduction


      In this article, we will explain what 1 beat 100 songs MP3 download is, why people love it, and how you can get it for free. We will also share some tips and tricks to make the most of your music experience. By the end of this article, you will be able to download and listen to hundreds of songs with just one beat.


      What is 1 beat 100 songs MP3 download?


      1 beat 100 songs MP3 download is a type of music mashup that combines different songs into one track using the same beat. The result is a seamless blend of melodies, lyrics, and rhythms that create a unique and fun musical journey. You can find 1 beat 100 songs MP3 download for various genres, such as pop, rock, hip hop, EDM, Bollywood, and more.


      Why do people love 1 beat 100 songs MP3 download?


      There are many reasons why people love 1 beat 100 songs MP3 download. Here are some of them:

• It saves time and space. You don't have to download multiple files or switch between different apps or playlists. You can enjoy a lot of music with just one click.
• It challenges your musical knowledge. You can test yourself on how many songs you can recognize and name from the mashup. You can also discover new songs or artists that you may not have heard before.
• It adds variety and excitement to your music listening. You never know what song will come next or how it will sound with the beat. You can also find different versions or remixes of the same mashup that suit your mood or taste.

      How to download 1 beat 100 songs MP3 for free


      Now that you know what 1 beat 100 songs MP3 download is and why people love it, you may be wondering how you can get it for free. The good news is that there are many websites and apps that offer free MP3 music downloads, including 1 beat 100 songs MP3 download. Here are some of the best ones:


      Use a reliable MP3 downloader website or app


      OKmusi MP3 Downloader


OKmusi is a free music downloader with no ads or viruses, and it is 100% free for MP3 music. It is not only an online music downloader but also a free music downloader app for Android, and it supports searching by both keywords and URL. You can find almost any music on the Internet and download it free online with OKmusi.


      Apple Music


Apple Music is one of the most popular music streaming services and lets you download songs for offline listening (within the app, as part of a subscription, rather than as free MP3 files). You can browse millions of songs, albums, playlists, and radio stations, and you can create your own 1 beat 100 songs mashup using Apple's GarageBand app or other third-party apps.


      PCMag


      PCMag is a trusted source of technology news, reviews, and tips. It also provides a list of the best free MP3 music download sites and apps that you can use to get 1 beat 100 songs MP3 download. Some of the sites and apps that PCMag recommends are Jamendo, SoundCloud, Bandcamp, Audiomack, and YouTube.



      Follow the simple steps to get MP3 download


      Once you have chosen a reliable MP3 downloader website or app, you can follow these simple steps to get your 1 beat 100 songs MP3 download:


      Enter the keywords or URL


The first step is to enter the keywords or URL of the mashup you want. For example, you can type "1 beat 100 songs MP3 download" or "1 beat mashup" in the search box, or paste the link of a YouTube video that contains the mashup.


      Choose the format and quality


      The next step is to choose the format and quality of the MP3 download. Most MP3 downloader websites and apps will offer you different options, such as MP3, M4A, WAV, FLAC, etc. You can also select the bitrate, such as 128kbps, 192kbps, 320kbps, etc. The higher the bitrate, the better the sound quality, but also the larger the file size.


      Download and enjoy


      The final step is to download and enjoy your 1 beat 100 songs MP3 download. You can click on the download button or scan the QR code to save the file to your device. You can also preview the file before downloading it. After downloading it, you can play it with any MP3 player or converter that you like.
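The three steps above can also be scripted with the open-source yt-dlp library, which many downloader sites wrap. This is a sketch, not an endorsement of any particular site: it assumes yt-dlp and ffmpeg are installed, the URL is a placeholder, and you should only download audio you have the rights to.

```python
from yt_dlp import YoutubeDL

URL = "https://www.youtube.com/watch?v=XXXXXXXXXXX"  # placeholder link

options = {
    "format": "bestaudio/best",        # step 2: pick the best audio stream
    "outtmpl": "%(title)s.%(ext)s",    # name the file after the track
    "postprocessors": [{
        "key": "FFmpegExtractAudio",   # needs ffmpeg on the PATH
        "preferredcodec": "mp3",
        "preferredquality": "192",     # 192 kbps: a quality/size middle ground
    }],
}

with YoutubeDL(options) as ydl:
    ydl.download([URL])                # step 3: download and convert
```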


      Tips and tricks for 1 beat 100 songs MP3 download


      To make the most of your 1 beat 100 songs MP3 download experience, here are some tips and tricks that you can use:


      Check the legality and safety of the source


      Before downloading any MP3 file from any website or app, make sure that it is legal and safe. Some sources may contain copyrighted or pirated content that may violate the law or harm your device. You can check the reviews, ratings, and feedback of other users to see if they have any issues with the source. You can also use antivirus software or VPN services to protect your device from malware or hackers.


      Use a good MP3 player or converter


      To enjoy your 1 beat 100 songs MP3 download in the best way possible, you need a good MP3 player or converter that can handle the file format and quality. Some MP3 players or converters may not support certain formats or bitrates, or may distort or skip some parts of the file. You can look for MP3 players or converters that have features such as equalizer, playlist, shuffle, repeat, etc. Some examples of good MP3 players or converters are VLC Media Player, Winamp, iTunes, Audacity, etc.


      Explore different genres and artists


      One of the best things about 1 beat 100 songs MP3 download is that it allows you to explore different genres and artists that you may not have listened to before. You can find 1 beat 100 songs MP3 download for various genres such as pop, rock, hip hop, EDM, Bollywood, etc. You can also find 1 beat 100 songs MP3 download for various artists such as Taylor Swift, Ed Sheeran, Drake, Ariana Grande, etc. You can also mix and match different genres and artists to create your own 1 beat 100 songs MP3 download. You may be surprised by how well some songs go together or how different they sound with the same beat.


      Conclusion


      1 beat 100 songs MP3 download is a great way to enjoy music for free. It is a type of music mashup that combines different songs into one track using the same beat. It saves time and space, challenges your musical knowledge, and adds variety and excitement to your music listening. You can download 1 beat 100 songs MP3 for free by using a reliable MP3 downloader website or app, such as OKmusi, Apple Music, or PCMag. You can also follow some tips and tricks to check the legality and safety of the source, use a good MP3 player or converter, and explore different genres and artists. We hope that this article has helped you understand and appreciate 1 beat 100 songs MP3 download better. Now, go ahead and download your favorite 1 beat 100 songs MP3 and enjoy the music!


      FAQs


      Here are some frequently asked questions about 1 beat 100 songs MP3 download:

1. What is the difference between 1 beat 100 songs MP3 download and a remix?

   A remix is a song that has been altered or modified by adding, removing, or changing some elements, such as vocals, instruments, or effects. A 1 beat 100 songs MP3 download is a type of remix that uses the same beat for many different songs.

2. How can I make my own 1 beat 100 songs MP3 download?

   You can make your own with music editing software or apps such as GarageBand, Audacity, or FL Studio. Choose a beat you like, add songs that match the tempo and key of the beat, and adjust the volume, pitch, and fades of each song so they blend well together. A small scripted version of this idea is sketched below.
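For the curious, here is what the "one beat, many songs" idea looks like in code. It is a toy sketch using the pydub library (ffmpeg required): the file names are placeholders, and it skips the tempo and key matching that real mashups need.

```python
from pydub import AudioSegment

beat = AudioSegment.from_file("beat.mp3")  # placeholder files throughout
vocals = [AudioSegment.from_file(f) for f in ("song1.mp3", "song2.mp3")]

mix = beat
position_ms = 0
for clip in vocals:
    # Lay each 15-second snippet over the beat, 6 dB quieter so the
    # beat stays audible underneath the vocals.
    snippet = clip[:15_000] - 6
    mix = mix.overlay(snippet, position=position_ms)
    position_ms += 15_000

mix.export("one_beat_mashup.mp3", format="mp3")
```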

3. Where can I find more 1 beat 100 songs MP3 downloads?

   You can find more on the websites and apps that offer free MP3 music downloads, such as OKmusi, Apple Music, and PCMag's recommendations, as well as on YouTube, SoundCloud, and Bandcamp. You can also follow popular music mashup artists or channels such as DJ Earworm, DJ Chetas, and Mashup-Germany.

4. What are some of the benefits of listening to 1 beat 100 songs MP3 download?

   • It improves your mood and reduces stress. Listening to music can release dopamine and serotonin in your brain, neurotransmitters that make you feel happy and relaxed.
   • It enhances your memory and learning. Music can stimulate your brain and improve cognitive functions such as attention, memory, and language.
   • It boosts your creativity and productivity. Music can inspire you and motivate you to work on your tasks or projects.

5. What are some of the challenges or drawbacks of listening to 1 beat 100 songs MP3 download?

   • It may be distracting or annoying for some people. Music with lyrics or too many changes can interfere with your concentration or focus.
   • It may be illegal or unsafe from some sources. Downloading music from unauthorized or unsecured sources may violate the law or harm your device.
   • It may become boring or repetitive. Music built on a single beat can lose its appeal or interest over time.

      \ No newline at end of file diff --git a/spaces/fffiloni/sdxl-control-loras/README.md b/spaces/fffiloni/sdxl-control-loras/README.md deleted file mode 100644 index c57d42def1d39e014bc702f164e2b2951c9772b6..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/sdxl-control-loras/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: SD-XL + Control LoRas -emoji: 🦀 -colorFrom: green -colorTo: indigo -sdk: gradio -sdk_version: 3.44.4 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/fffiloni/video2mmpose/README.md b/spaces/fffiloni/video2mmpose/README.md deleted file mode 100644 index 5a40263b0eee2cf10904ad433f64145c2cde8e4e..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/video2mmpose/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Video To MMPose -emoji: 💃 -colorFrom: yellow -colorTo: blue -sdk: gradio -sdk_version: 3.34.0 -app_file: app.py -pinned: false -duplicated_from: fffiloni/video2openpose2 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/flowers-team/SocialAISchool/torch-ac/torch_ac/model.py b/spaces/flowers-team/SocialAISchool/torch-ac/torch_ac/model.py deleted file mode 100644 index 6a6351e9e581dca3ce3c6b164c7e3ec291c20cf7..0000000000000000000000000000000000000000 --- a/spaces/flowers-team/SocialAISchool/torch-ac/torch_ac/model.py +++ /dev/null @@ -1,26 +0,0 @@ -from abc import abstractmethod, abstractproperty -import torch.nn as nn -import torch.nn.functional as F - -class ACModel: - recurrent = False - - @abstractmethod - def __init__(self, obs_space, action_space): - pass - - @abstractmethod - def forward(self, obs): - pass - -class RecurrentACModel(ACModel): - recurrent = True - - @abstractmethod - def forward(self, obs, memory): - pass - - @property - @abstractmethod - def memory_size(self): - pass \ No newline at end of file diff --git a/spaces/freddyaboulton/gradio_folium/src/backend/gradio_folium/templates/component/index.js b/spaces/freddyaboulton/gradio_folium/src/backend/gradio_folium/templates/component/index.js deleted file mode 100644 index 35c15dee7e49a740cd68d86bdfed5b4149a55ff4..0000000000000000000000000000000000000000 --- a/spaces/freddyaboulton/gradio_folium/src/backend/gradio_folium/templates/component/index.js +++ /dev/null @@ -1,5501 +0,0 @@ -const { - SvelteComponent: Jn, - assign: Yn, - create_slot: Kn, - detach: $n, - element: er, - get_all_dirty_from_scope: tr, - get_slot_changes: nr, - get_spread_update: rr, - init: ir, - insert: sr, - safe_not_equal: lr, - set_dynamic_element_data: Et, - set_style: L, - toggle_class: z, - transition_in: _n, - transition_out: mn, - update_slot_base: or -} = window.__gradio__svelte__internal; -function ar(e) { - let t, n, r; - const i = ( - /*#slots*/ - e[17].default - ), s = Kn( - i, - e, - /*$$scope*/ - e[16], - null - ); - let l = [ - { "data-testid": ( - /*test_id*/ - e[7] - ) }, - { id: ( - /*elem_id*/ - e[2] - ) }, - { - class: n = "block " + /*elem_classes*/ - e[3].join(" ") + " svelte-1t38q2d" - } - ], o = {}; - for (let a = 0; a < l.length; a += 1) - o = Yn(o, l[a]); - return { - c() { - t = er( - /*tag*/ - e[14] - ), s && s.c(), Et( - /*tag*/ - e[14] - )(t, o), z( - t, - "hidden", - /*visible*/ - e[10] === !1 - ), z( - t, - "padded", - /*padding*/ - e[6] - ), z( - t, - "border_focus", - /*border_mode*/ - e[5] === "focus" - ), z(t, "hide-container", !/*explicit_call*/ - 
e[8] && !/*container*/ - e[9]), L(t, "height", typeof /*height*/ - e[0] == "number" ? ( - /*height*/ - e[0] + "px" - ) : void 0), L(t, "width", typeof /*width*/ - e[1] == "number" ? `calc(min(${/*width*/ - e[1]}px, 100%))` : void 0), L( - t, - "border-style", - /*variant*/ - e[4] - ), L( - t, - "overflow", - /*allow_overflow*/ - e[11] ? "visible" : "hidden" - ), L( - t, - "flex-grow", - /*scale*/ - e[12] - ), L(t, "min-width", `calc(min(${/*min_width*/ - e[13]}px, 100%))`), L(t, "border-width", "var(--block-border-width)"); - }, - m(a, u) { - sr(a, t, u), s && s.m(t, null), r = !0; - }, - p(a, u) { - s && s.p && (!r || u & /*$$scope*/ - 65536) && or( - s, - i, - a, - /*$$scope*/ - a[16], - r ? nr( - i, - /*$$scope*/ - a[16], - u, - null - ) : tr( - /*$$scope*/ - a[16] - ), - null - ), Et( - /*tag*/ - a[14] - )(t, o = rr(l, [ - (!r || u & /*test_id*/ - 128) && { "data-testid": ( - /*test_id*/ - a[7] - ) }, - (!r || u & /*elem_id*/ - 4) && { id: ( - /*elem_id*/ - a[2] - ) }, - (!r || u & /*elem_classes*/ - 8 && n !== (n = "block " + /*elem_classes*/ - a[3].join(" ") + " svelte-1t38q2d")) && { class: n } - ])), z( - t, - "hidden", - /*visible*/ - a[10] === !1 - ), z( - t, - "padded", - /*padding*/ - a[6] - ), z( - t, - "border_focus", - /*border_mode*/ - a[5] === "focus" - ), z(t, "hide-container", !/*explicit_call*/ - a[8] && !/*container*/ - a[9]), u & /*height*/ - 1 && L(t, "height", typeof /*height*/ - a[0] == "number" ? ( - /*height*/ - a[0] + "px" - ) : void 0), u & /*width*/ - 2 && L(t, "width", typeof /*width*/ - a[1] == "number" ? `calc(min(${/*width*/ - a[1]}px, 100%))` : void 0), u & /*variant*/ - 16 && L( - t, - "border-style", - /*variant*/ - a[4] - ), u & /*allow_overflow*/ - 2048 && L( - t, - "overflow", - /*allow_overflow*/ - a[11] ? "visible" : "hidden" - ), u & /*scale*/ - 4096 && L( - t, - "flex-grow", - /*scale*/ - a[12] - ), u & /*min_width*/ - 8192 && L(t, "min-width", `calc(min(${/*min_width*/ - a[13]}px, 100%))`); - }, - i(a) { - r || (_n(s, a), r = !0); - }, - o(a) { - mn(s, a), r = !1; - }, - d(a) { - a && $n(t), s && s.d(a); - } - }; -} -function ur(e) { - let t, n = ( - /*tag*/ - e[14] && ar(e) - ); - return { - c() { - n && n.c(); - }, - m(r, i) { - n && n.m(r, i), t = !0; - }, - p(r, [i]) { - /*tag*/ - r[14] && n.p(r, i); - }, - i(r) { - t || (_n(n, r), t = !0); - }, - o(r) { - mn(n, r), t = !1; - }, - d(r) { - n && n.d(r); - } - }; -} -function fr(e, t, n) { - let { $$slots: r = {}, $$scope: i } = t, { height: s = void 0 } = t, { width: l = void 0 } = t, { elem_id: o = "" } = t, { elem_classes: a = [] } = t, { variant: u = "solid" } = t, { border_mode: f = "base" } = t, { padding: c = !0 } = t, { type: h = "normal" } = t, { test_id: _ = void 0 } = t, { explicit_call: d = !1 } = t, { container: E = !0 } = t, { visible: w = !0 } = t, { allow_overflow: N = !0 } = t, { scale: b = null } = t, { min_width: m = 0 } = t, T = h === "fieldset" ? 
"fieldset" : "div"; - return e.$$set = (g) => { - "height" in g && n(0, s = g.height), "width" in g && n(1, l = g.width), "elem_id" in g && n(2, o = g.elem_id), "elem_classes" in g && n(3, a = g.elem_classes), "variant" in g && n(4, u = g.variant), "border_mode" in g && n(5, f = g.border_mode), "padding" in g && n(6, c = g.padding), "type" in g && n(15, h = g.type), "test_id" in g && n(7, _ = g.test_id), "explicit_call" in g && n(8, d = g.explicit_call), "container" in g && n(9, E = g.container), "visible" in g && n(10, w = g.visible), "allow_overflow" in g && n(11, N = g.allow_overflow), "scale" in g && n(12, b = g.scale), "min_width" in g && n(13, m = g.min_width), "$$scope" in g && n(16, i = g.$$scope); - }, [ - s, - l, - o, - a, - u, - f, - c, - _, - d, - E, - w, - N, - b, - m, - T, - h, - i, - r - ]; -} -class hr extends Jn { - constructor(t) { - super(), ir(this, t, fr, ur, lr, { - height: 0, - width: 1, - elem_id: 2, - elem_classes: 3, - variant: 4, - border_mode: 5, - padding: 6, - type: 15, - test_id: 7, - explicit_call: 8, - container: 9, - visible: 10, - allow_overflow: 11, - scale: 12, - min_width: 13 - }); - } -} -const { - SvelteComponent: cr, - append: qe, - attr: Pe, - create_component: _r, - destroy_component: mr, - detach: dr, - element: wt, - init: br, - insert: gr, - mount_component: pr, - safe_not_equal: vr, - set_data: yr, - space: Er, - text: wr, - toggle_class: Z, - transition_in: xr, - transition_out: Tr -} = window.__gradio__svelte__internal; -function Hr(e) { - let t, n, r, i, s, l; - return r = new /*Icon*/ - e[1]({}), { - c() { - t = wt("label"), n = wt("span"), _r(r.$$.fragment), i = Er(), s = wr( - /*label*/ - e[0] - ), Pe(n, "class", "svelte-9gxdi0"), Pe(t, "for", ""), Pe(t, "data-testid", "block-label"), Pe(t, "class", "svelte-9gxdi0"), Z(t, "hide", !/*show_label*/ - e[2]), Z(t, "sr-only", !/*show_label*/ - e[2]), Z( - t, - "float", - /*float*/ - e[4] - ), Z( - t, - "hide-label", - /*disable*/ - e[3] - ); - }, - m(o, a) { - gr(o, t, a), qe(t, n), pr(r, n, null), qe(t, i), qe(t, s), l = !0; - }, - p(o, [a]) { - (!l || a & /*label*/ - 1) && yr( - s, - /*label*/ - o[0] - ), (!l || a & /*show_label*/ - 4) && Z(t, "hide", !/*show_label*/ - o[2]), (!l || a & /*show_label*/ - 4) && Z(t, "sr-only", !/*show_label*/ - o[2]), (!l || a & /*float*/ - 16) && Z( - t, - "float", - /*float*/ - o[4] - ), (!l || a & /*disable*/ - 8) && Z( - t, - "hide-label", - /*disable*/ - o[3] - ); - }, - i(o) { - l || (xr(r.$$.fragment, o), l = !0); - }, - o(o) { - Tr(r.$$.fragment, o), l = !1; - }, - d(o) { - o && dr(t), mr(r); - } - }; -} -function Br(e, t, n) { - let { label: r = null } = t, { Icon: i } = t, { show_label: s = !0 } = t, { disable: l = !1 } = t, { float: o = !0 } = t; - return e.$$set = (a) => { - "label" in a && n(0, r = a.label), "Icon" in a && n(1, i = a.Icon), "show_label" in a && n(2, s = a.show_label), "disable" in a && n(3, l = a.disable), "float" in a && n(4, o = a.float); - }, [r, i, s, l, o]; -} -class Sr extends cr { - constructor(t) { - super(), br(this, t, Br, Hr, vr, { - label: 0, - Icon: 1, - show_label: 2, - disable: 3, - float: 4 - }); - } -} -const { - SvelteComponent: Ar, - append: Pr, - attr: ze, - binding_callbacks: Nr, - create_slot: Ir, - detach: Cr, - element: xt, - get_all_dirty_from_scope: Lr, - get_slot_changes: Or, - init: Mr, - insert: Rr, - safe_not_equal: Ur, - toggle_class: W, - transition_in: Dr, - transition_out: kr, - update_slot_base: Gr -} = window.__gradio__svelte__internal; -function Fr(e) { - let t, n, r; - const i = ( - /*#slots*/ 
- e[5].default - ), s = Ir( - i, - e, - /*$$scope*/ - e[4], - null - ); - return { - c() { - t = xt("div"), n = xt("div"), s && s.c(), ze(n, "class", "icon svelte-3w3rth"), ze(t, "class", "empty svelte-3w3rth"), ze(t, "aria-label", "Empty value"), W( - t, - "small", - /*size*/ - e[0] === "small" - ), W( - t, - "large", - /*size*/ - e[0] === "large" - ), W( - t, - "unpadded_box", - /*unpadded_box*/ - e[1] - ), W( - t, - "small_parent", - /*parent_height*/ - e[3] - ); - }, - m(l, o) { - Rr(l, t, o), Pr(t, n), s && s.m(n, null), e[6](t), r = !0; - }, - p(l, [o]) { - s && s.p && (!r || o & /*$$scope*/ - 16) && Gr( - s, - i, - l, - /*$$scope*/ - l[4], - r ? Or( - i, - /*$$scope*/ - l[4], - o, - null - ) : Lr( - /*$$scope*/ - l[4] - ), - null - ), (!r || o & /*size*/ - 1) && W( - t, - "small", - /*size*/ - l[0] === "small" - ), (!r || o & /*size*/ - 1) && W( - t, - "large", - /*size*/ - l[0] === "large" - ), (!r || o & /*unpadded_box*/ - 2) && W( - t, - "unpadded_box", - /*unpadded_box*/ - l[1] - ), (!r || o & /*parent_height*/ - 8) && W( - t, - "small_parent", - /*parent_height*/ - l[3] - ); - }, - i(l) { - r || (Dr(s, l), r = !0); - }, - o(l) { - kr(s, l), r = !1; - }, - d(l) { - l && Cr(t), s && s.d(l), e[6](null); - } - }; -} -function Vr(e) { - let t, n = e[0], r = 1; - for (; r < e.length; ) { - const i = e[r], s = e[r + 1]; - if (r += 2, (i === "optionalAccess" || i === "optionalCall") && n == null) - return; - i === "access" || i === "optionalAccess" ? (t = n, n = s(n)) : (i === "call" || i === "optionalCall") && (n = s((...l) => n.call(t, ...l)), t = void 0); - } - return n; -} -function jr(e, t, n) { - let r, { $$slots: i = {}, $$scope: s } = t, { size: l = "small" } = t, { unpadded_box: o = !1 } = t, a; - function u(c) { - if (!c) - return !1; - const { height: h } = c.getBoundingClientRect(), { height: _ } = Vr([ - c, - "access", - (d) => d.parentElement, - "optionalAccess", - (d) => d.getBoundingClientRect, - "call", - (d) => d() - ]) || { height: h }; - return h > _ + 2; - } - function f(c) { - Nr[c ? 
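- // Svelte bind:this plumbing: a mounted node registers its callback at the front of binding_callbacks, a null (teardown) at the back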
"unshift" : "push"](() => { - a = c, n(2, a); - }); - } - return e.$$set = (c) => { - "size" in c && n(0, l = c.size), "unpadded_box" in c && n(1, o = c.unpadded_box), "$$scope" in c && n(4, s = c.$$scope); - }, e.$$.update = () => { - e.$$.dirty & /*el*/ - 4 && n(3, r = u(a)); - }, [l, o, a, r, s, i, f]; -} -class Xr extends Ar { - constructor(t) { - super(), Mr(this, t, jr, Fr, Ur, { size: 0, unpadded_box: 1 }); - } -} -const { - SvelteComponent: qr, - append: ie, - attr: S, - detach: zr, - init: Zr, - insert: Wr, - noop: Ze, - safe_not_equal: Qr, - svg_element: $ -} = window.__gradio__svelte__internal; -function Jr(e) { - let t, n, r, i, s, l, o; - return { - c() { - t = $("svg"), n = $("circle"), r = $("circle"), i = $("circle"), s = $("circle"), l = $("circle"), o = $("path"), S(n, "cx", "20"), S(n, "cy", "4"), S(n, "r", "2"), S(n, "fill", "currentColor"), S(r, "cx", "8"), S(r, "cy", "16"), S(r, "r", "2"), S(r, "fill", "currentColor"), S(i, "cx", "28"), S(i, "cy", "12"), S(i, "r", "2"), S(i, "fill", "currentColor"), S(s, "cx", "11"), S(s, "cy", "7"), S(s, "r", "2"), S(s, "fill", "currentColor"), S(l, "cx", "16"), S(l, "cy", "24"), S(l, "r", "2"), S(l, "fill", "currentColor"), S(o, "fill", "currentColor"), S(o, "d", "M30 3.413L28.586 2L4 26.585V2H2v26a2 2 0 0 0 2 2h26v-2H5.413Z"), S(t, "xmlns", "http://www.w3.org/2000/svg"), S(t, "xmlns:xlink", "http://www.w3.org/1999/xlink"), S(t, "aria-hidden", "true"), S(t, "role", "img"), S(t, "class", "iconify iconify--carbon"), S(t, "width", "100%"), S(t, "height", "100%"), S(t, "preserveAspectRatio", "xMidYMid meet"), S(t, "viewBox", "0 0 32 32"); - }, - m(a, u) { - Wr(a, t, u), ie(t, n), ie(t, r), ie(t, i), ie(t, s), ie(t, l), ie(t, o); - }, - p: Ze, - i: Ze, - o: Ze, - d(a) { - a && zr(t); - } - }; -} -class dn extends qr { - constructor(t) { - super(), Zr(this, t, null, Jr, Qr, {}); - } -} -const Yr = [ - { color: "red", primary: 600, secondary: 100 }, - { color: "green", primary: 600, secondary: 100 }, - { color: "blue", primary: 600, secondary: 100 }, - { color: "yellow", primary: 500, secondary: 100 }, - { color: "purple", primary: 600, secondary: 100 }, - { color: "teal", primary: 600, secondary: 100 }, - { color: "orange", primary: 600, secondary: 100 }, - { color: "cyan", primary: 600, secondary: 100 }, - { color: "lime", primary: 500, secondary: 100 }, - { color: "pink", primary: 600, secondary: 100 } -], Tt = { - inherit: "inherit", - current: "currentColor", - transparent: "transparent", - black: "#000", - white: "#fff", - slate: { - 50: "#f8fafc", - 100: "#f1f5f9", - 200: "#e2e8f0", - 300: "#cbd5e1", - 400: "#94a3b8", - 500: "#64748b", - 600: "#475569", - 700: "#334155", - 800: "#1e293b", - 900: "#0f172a", - 950: "#020617" - }, - gray: { - 50: "#f9fafb", - 100: "#f3f4f6", - 200: "#e5e7eb", - 300: "#d1d5db", - 400: "#9ca3af", - 500: "#6b7280", - 600: "#4b5563", - 700: "#374151", - 800: "#1f2937", - 900: "#111827", - 950: "#030712" - }, - zinc: { - 50: "#fafafa", - 100: "#f4f4f5", - 200: "#e4e4e7", - 300: "#d4d4d8", - 400: "#a1a1aa", - 500: "#71717a", - 600: "#52525b", - 700: "#3f3f46", - 800: "#27272a", - 900: "#18181b", - 950: "#09090b" - }, - neutral: { - 50: "#fafafa", - 100: "#f5f5f5", - 200: "#e5e5e5", - 300: "#d4d4d4", - 400: "#a3a3a3", - 500: "#737373", - 600: "#525252", - 700: "#404040", - 800: "#262626", - 900: "#171717", - 950: "#0a0a0a" - }, - stone: { - 50: "#fafaf9", - 100: "#f5f5f4", - 200: "#e7e5e4", - 300: "#d6d3d1", - 400: "#a8a29e", - 500: "#78716c", - 600: "#57534e", - 700: "#44403c", - 800: "#292524", - 900: 
"#1c1917", - 950: "#0c0a09" - }, - red: { - 50: "#fef2f2", - 100: "#fee2e2", - 200: "#fecaca", - 300: "#fca5a5", - 400: "#f87171", - 500: "#ef4444", - 600: "#dc2626", - 700: "#b91c1c", - 800: "#991b1b", - 900: "#7f1d1d", - 950: "#450a0a" - }, - orange: { - 50: "#fff7ed", - 100: "#ffedd5", - 200: "#fed7aa", - 300: "#fdba74", - 400: "#fb923c", - 500: "#f97316", - 600: "#ea580c", - 700: "#c2410c", - 800: "#9a3412", - 900: "#7c2d12", - 950: "#431407" - }, - amber: { - 50: "#fffbeb", - 100: "#fef3c7", - 200: "#fde68a", - 300: "#fcd34d", - 400: "#fbbf24", - 500: "#f59e0b", - 600: "#d97706", - 700: "#b45309", - 800: "#92400e", - 900: "#78350f", - 950: "#451a03" - }, - yellow: { - 50: "#fefce8", - 100: "#fef9c3", - 200: "#fef08a", - 300: "#fde047", - 400: "#facc15", - 500: "#eab308", - 600: "#ca8a04", - 700: "#a16207", - 800: "#854d0e", - 900: "#713f12", - 950: "#422006" - }, - lime: { - 50: "#f7fee7", - 100: "#ecfccb", - 200: "#d9f99d", - 300: "#bef264", - 400: "#a3e635", - 500: "#84cc16", - 600: "#65a30d", - 700: "#4d7c0f", - 800: "#3f6212", - 900: "#365314", - 950: "#1a2e05" - }, - green: { - 50: "#f0fdf4", - 100: "#dcfce7", - 200: "#bbf7d0", - 300: "#86efac", - 400: "#4ade80", - 500: "#22c55e", - 600: "#16a34a", - 700: "#15803d", - 800: "#166534", - 900: "#14532d", - 950: "#052e16" - }, - emerald: { - 50: "#ecfdf5", - 100: "#d1fae5", - 200: "#a7f3d0", - 300: "#6ee7b7", - 400: "#34d399", - 500: "#10b981", - 600: "#059669", - 700: "#047857", - 800: "#065f46", - 900: "#064e3b", - 950: "#022c22" - }, - teal: { - 50: "#f0fdfa", - 100: "#ccfbf1", - 200: "#99f6e4", - 300: "#5eead4", - 400: "#2dd4bf", - 500: "#14b8a6", - 600: "#0d9488", - 700: "#0f766e", - 800: "#115e59", - 900: "#134e4a", - 950: "#042f2e" - }, - cyan: { - 50: "#ecfeff", - 100: "#cffafe", - 200: "#a5f3fc", - 300: "#67e8f9", - 400: "#22d3ee", - 500: "#06b6d4", - 600: "#0891b2", - 700: "#0e7490", - 800: "#155e75", - 900: "#164e63", - 950: "#083344" - }, - sky: { - 50: "#f0f9ff", - 100: "#e0f2fe", - 200: "#bae6fd", - 300: "#7dd3fc", - 400: "#38bdf8", - 500: "#0ea5e9", - 600: "#0284c7", - 700: "#0369a1", - 800: "#075985", - 900: "#0c4a6e", - 950: "#082f49" - }, - blue: { - 50: "#eff6ff", - 100: "#dbeafe", - 200: "#bfdbfe", - 300: "#93c5fd", - 400: "#60a5fa", - 500: "#3b82f6", - 600: "#2563eb", - 700: "#1d4ed8", - 800: "#1e40af", - 900: "#1e3a8a", - 950: "#172554" - }, - indigo: { - 50: "#eef2ff", - 100: "#e0e7ff", - 200: "#c7d2fe", - 300: "#a5b4fc", - 400: "#818cf8", - 500: "#6366f1", - 600: "#4f46e5", - 700: "#4338ca", - 800: "#3730a3", - 900: "#312e81", - 950: "#1e1b4b" - }, - violet: { - 50: "#f5f3ff", - 100: "#ede9fe", - 200: "#ddd6fe", - 300: "#c4b5fd", - 400: "#a78bfa", - 500: "#8b5cf6", - 600: "#7c3aed", - 700: "#6d28d9", - 800: "#5b21b6", - 900: "#4c1d95", - 950: "#2e1065" - }, - purple: { - 50: "#faf5ff", - 100: "#f3e8ff", - 200: "#e9d5ff", - 300: "#d8b4fe", - 400: "#c084fc", - 500: "#a855f7", - 600: "#9333ea", - 700: "#7e22ce", - 800: "#6b21a8", - 900: "#581c87", - 950: "#3b0764" - }, - fuchsia: { - 50: "#fdf4ff", - 100: "#fae8ff", - 200: "#f5d0fe", - 300: "#f0abfc", - 400: "#e879f9", - 500: "#d946ef", - 600: "#c026d3", - 700: "#a21caf", - 800: "#86198f", - 900: "#701a75", - 950: "#4a044e" - }, - pink: { - 50: "#fdf2f8", - 100: "#fce7f3", - 200: "#fbcfe8", - 300: "#f9a8d4", - 400: "#f472b6", - 500: "#ec4899", - 600: "#db2777", - 700: "#be185d", - 800: "#9d174d", - 900: "#831843", - 950: "#500724" - }, - rose: { - 50: "#fff1f2", - 100: "#ffe4e6", - 200: "#fecdd3", - 300: "#fda4af", - 400: "#fb7185", - 500: "#f43f5e", - 600: 
"#e11d48", - 700: "#be123c", - 800: "#9f1239", - 900: "#881337", - 950: "#4c0519" - } -}; -Yr.reduce( - (e, { color: t, primary: n, secondary: r }) => ({ - ...e, - [t]: { - primary: Tt[t][n], - secondary: Tt[t][r] - } - }), - {} -); -function ne() { -} -function Kr(e) { - return e(); -} -function $r(e) { - e.forEach(Kr); -} -function ei(e) { - return typeof e == "function"; -} -function ti(e, t) { - return e != e ? t == t : e !== t || e && typeof e == "object" || typeof e == "function"; -} -function bn(e, ...t) { - if (e == null) { - for (const r of t) - r(void 0); - return ne; - } - const n = e.subscribe(...t); - return n.unsubscribe ? () => n.unsubscribe() : n; -} -function ni(e) { - let t; - return bn(e, (n) => t = n)(), t; -} -const gn = typeof window < "u"; -let Ht = gn ? () => window.performance.now() : () => Date.now(), pn = gn ? (e) => requestAnimationFrame(e) : ne; -const oe = /* @__PURE__ */ new Set(); -function vn(e) { - oe.forEach((t) => { - t.c(e) || (oe.delete(t), t.f()); - }), oe.size !== 0 && pn(vn); -} -function ri(e) { - let t; - return oe.size === 0 && pn(vn), { - promise: new Promise((n) => { - oe.add(t = { c: e, f: n }); - }), - abort() { - oe.delete(t); - } - }; -} -const se = []; -function ii(e, t) { - return { - subscribe: we(e, t).subscribe - }; -} -function we(e, t = ne) { - let n; - const r = /* @__PURE__ */ new Set(); - function i(o) { - if (ti(e, o) && (e = o, n)) { - const a = !se.length; - for (const u of r) - u[1](), se.push(u, e); - if (a) { - for (let u = 0; u < se.length; u += 2) - se[u][0](se[u + 1]); - se.length = 0; - } - } - } - function s(o) { - i(o(e)); - } - function l(o, a = ne) { - const u = [o, a]; - return r.add(u), r.size === 1 && (n = t(i, s) || ne), o(e), () => { - r.delete(u), r.size === 0 && n && (n(), n = null); - }; - } - return { set: i, update: s, subscribe: l }; -} -function me(e, t, n) { - const r = !Array.isArray(e), i = r ? [e] : e; - if (!i.every(Boolean)) - throw new Error("derived() expects stores as input, got a falsy value"); - const s = t.length < 2; - return ii(n, (l, o) => { - let a = !1; - const u = []; - let f = 0, c = ne; - const h = () => { - if (f) - return; - c(); - const d = t(r ? u[0] : u, l, o); - s ? l(d) : c = ei(d) ? d : ne; - }, _ = i.map( - (d, E) => bn( - d, - (w) => { - u[E] = w, f &= ~(1 << E), a && h(); - }, - () => { - f |= 1 << E; - } - ) - ); - return a = !0, h(), function() { - $r(_), c(), a = !1; - }; - }); -} -function Bt(e) { - return Object.prototype.toString.call(e) === "[object Date]"; -} -function tt(e, t, n, r) { - if (typeof n == "number" || Bt(n)) { - const i = r - n, s = (n - t) / (e.dt || 1 / 60), l = e.opts.stiffness * i, o = e.opts.damping * s, a = (l - o) * e.inv_mass, u = (s + a) * e.dt; - return Math.abs(u) < e.opts.precision && Math.abs(i) < e.opts.precision ? r : (e.settled = !1, Bt(n) ? new Date(n.getTime() + u) : n + u); - } else { - if (Array.isArray(n)) - return n.map( - (i, s) => tt(e, t[s], n[s], r[s]) - ); - if (typeof n == "object") { - const i = {}; - for (const s in n) - i[s] = tt(e, t[s], n[s], r[s]); - return i; - } else - throw new Error(`Cannot spring ${typeof n} values`); - } -} -function St(e, t = {}) { - const n = we(e), { stiffness: r = 0.15, damping: i = 0.8, precision: s = 0.01 } = t; - let l, o, a, u = e, f = e, c = 1, h = 0, _ = !1; - function d(w, N = {}) { - f = w; - const b = a = {}; - return e == null || N.hard || E.stiffness >= 1 && E.damping >= 1 ? (_ = !0, l = Ht(), u = w, n.set(e = f), Promise.resolve()) : (N.soft && (h = 1 / ((N.soft === !0 ? 
0.5 : +N.soft) * 60), c = 0), o || (l = Ht(), _ = !1, o = ri((m) => { - if (_) - return _ = !1, o = null, !1; - c = Math.min(c + h, 1); - const T = { - inv_mass: c, - opts: E, - settled: !0, - dt: (m - l) * 60 / 1e3 - }, g = tt(T, u, e, f); - return l = m, u = e, n.set(e = g), T.settled && (o = null), !T.settled; - })), new Promise((m) => { - o.promise.then(() => { - b === a && m(); - }); - })); - } - const E = { - set: d, - update: (w, N) => d(w(f, e), N), - subscribe: n.subscribe, - stiffness: r, - damping: i, - precision: s - }; - return E; -} -function si(e) { - return e && e.__esModule && Object.prototype.hasOwnProperty.call(e, "default") ? e.default : e; -} -var li = function(t) { - return oi(t) && !ai(t); -}; -function oi(e) { - return !!e && typeof e == "object"; -} -function ai(e) { - var t = Object.prototype.toString.call(e); - return t === "[object RegExp]" || t === "[object Date]" || hi(e); -} -var ui = typeof Symbol == "function" && Symbol.for, fi = ui ? Symbol.for("react.element") : 60103; -function hi(e) { - return e.$$typeof === fi; -} -function ci(e) { - return Array.isArray(e) ? [] : {}; -} -function ye(e, t) { - return t.clone !== !1 && t.isMergeableObject(e) ? ae(ci(e), e, t) : e; -} -function _i(e, t, n) { - return e.concat(t).map(function(r) { - return ye(r, n); - }); -} -function mi(e, t) { - if (!t.customMerge) - return ae; - var n = t.customMerge(e); - return typeof n == "function" ? n : ae; -} -function di(e) { - return Object.getOwnPropertySymbols ? Object.getOwnPropertySymbols(e).filter(function(t) { - return Object.propertyIsEnumerable.call(e, t); - }) : []; -} -function At(e) { - return Object.keys(e).concat(di(e)); -} -function yn(e, t) { - try { - return t in e; - } catch { - return !1; - } -} -function bi(e, t) { - return yn(e, t) && !(Object.hasOwnProperty.call(e, t) && Object.propertyIsEnumerable.call(e, t)); -} -function gi(e, t, n) { - var r = {}; - return n.isMergeableObject(e) && At(e).forEach(function(i) { - r[i] = ye(e[i], n); - }), At(t).forEach(function(i) { - bi(e, i) || (yn(e, i) && n.isMergeableObject(t[i]) ? r[i] = mi(i, n)(e[i], t[i], n) : r[i] = ye(t[i], n)); - }), r; -} -function ae(e, t, n) { - n = n || {}, n.arrayMerge = n.arrayMerge || _i, n.isMergeableObject = n.isMergeableObject || li, n.cloneUnlessOtherwiseSpecified = ye; - var r = Array.isArray(t), i = Array.isArray(e), s = r === i; - return s ? r ? n.arrayMerge(e, t, n) : gi(e, t, n) : ye(t, n); -} -ae.all = function(t, n) { - if (!Array.isArray(t)) - throw new Error("first argument should be an array"); - return t.reduce(function(r, i) { - return ae(r, i, n); - }, {}); -}; -var pi = ae, vi = pi; -const yi = /* @__PURE__ */ si(vi); -var nt = function(e, t) { - return nt = Object.setPrototypeOf || { __proto__: [] } instanceof Array && function(n, r) { - n.__proto__ = r; - } || function(n, r) { - for (var i in r) - Object.prototype.hasOwnProperty.call(r, i) && (n[i] = r[i]); - }, nt(e, t); -}; -function Ge(e, t) { - if (typeof t != "function" && t !== null) - throw new TypeError("Class extends value " + String(t) + " is not a constructor or null"); - nt(e, t); - function n() { - this.constructor = e; - } - e.prototype = t === null ? 
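- // tslib-style __extends helper: a null base gives a null prototype, otherwise link through a throwaway constructor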
Object.create(t) : (n.prototype = t.prototype, new n()); -} -var A = function() { - return A = Object.assign || function(t) { - for (var n, r = 1, i = arguments.length; r < i; r++) { - n = arguments[r]; - for (var s in n) - Object.prototype.hasOwnProperty.call(n, s) && (t[s] = n[s]); - } - return t; - }, A.apply(this, arguments); -}; -function We(e, t, n) { - if (n || arguments.length === 2) - for (var r = 0, i = t.length, s; r < i; r++) - (s || !(r in t)) && (s || (s = Array.prototype.slice.call(t, 0, r)), s[r] = t[r]); - return e.concat(s || Array.prototype.slice.call(t)); -} -var x; -(function(e) { - e[e.EXPECT_ARGUMENT_CLOSING_BRACE = 1] = "EXPECT_ARGUMENT_CLOSING_BRACE", e[e.EMPTY_ARGUMENT = 2] = "EMPTY_ARGUMENT", e[e.MALFORMED_ARGUMENT = 3] = "MALFORMED_ARGUMENT", e[e.EXPECT_ARGUMENT_TYPE = 4] = "EXPECT_ARGUMENT_TYPE", e[e.INVALID_ARGUMENT_TYPE = 5] = "INVALID_ARGUMENT_TYPE", e[e.EXPECT_ARGUMENT_STYLE = 6] = "EXPECT_ARGUMENT_STYLE", e[e.INVALID_NUMBER_SKELETON = 7] = "INVALID_NUMBER_SKELETON", e[e.INVALID_DATE_TIME_SKELETON = 8] = "INVALID_DATE_TIME_SKELETON", e[e.EXPECT_NUMBER_SKELETON = 9] = "EXPECT_NUMBER_SKELETON", e[e.EXPECT_DATE_TIME_SKELETON = 10] = "EXPECT_DATE_TIME_SKELETON", e[e.UNCLOSED_QUOTE_IN_ARGUMENT_STYLE = 11] = "UNCLOSED_QUOTE_IN_ARGUMENT_STYLE", e[e.EXPECT_SELECT_ARGUMENT_OPTIONS = 12] = "EXPECT_SELECT_ARGUMENT_OPTIONS", e[e.EXPECT_PLURAL_ARGUMENT_OFFSET_VALUE = 13] = "EXPECT_PLURAL_ARGUMENT_OFFSET_VALUE", e[e.INVALID_PLURAL_ARGUMENT_OFFSET_VALUE = 14] = "INVALID_PLURAL_ARGUMENT_OFFSET_VALUE", e[e.EXPECT_SELECT_ARGUMENT_SELECTOR = 15] = "EXPECT_SELECT_ARGUMENT_SELECTOR", e[e.EXPECT_PLURAL_ARGUMENT_SELECTOR = 16] = "EXPECT_PLURAL_ARGUMENT_SELECTOR", e[e.EXPECT_SELECT_ARGUMENT_SELECTOR_FRAGMENT = 17] = "EXPECT_SELECT_ARGUMENT_SELECTOR_FRAGMENT", e[e.EXPECT_PLURAL_ARGUMENT_SELECTOR_FRAGMENT = 18] = "EXPECT_PLURAL_ARGUMENT_SELECTOR_FRAGMENT", e[e.INVALID_PLURAL_ARGUMENT_SELECTOR = 19] = "INVALID_PLURAL_ARGUMENT_SELECTOR", e[e.DUPLICATE_PLURAL_ARGUMENT_SELECTOR = 20] = "DUPLICATE_PLURAL_ARGUMENT_SELECTOR", e[e.DUPLICATE_SELECT_ARGUMENT_SELECTOR = 21] = "DUPLICATE_SELECT_ARGUMENT_SELECTOR", e[e.MISSING_OTHER_CLAUSE = 22] = "MISSING_OTHER_CLAUSE", e[e.INVALID_TAG = 23] = "INVALID_TAG", e[e.INVALID_TAG_NAME = 25] = "INVALID_TAG_NAME", e[e.UNMATCHED_CLOSING_TAG = 26] = "UNMATCHED_CLOSING_TAG", e[e.UNCLOSED_TAG = 27] = "UNCLOSED_TAG"; -})(x || (x = {})); -var P; -(function(e) { - e[e.literal = 0] = "literal", e[e.argument = 1] = "argument", e[e.number = 2] = "number", e[e.date = 3] = "date", e[e.time = 4] = "time", e[e.select = 5] = "select", e[e.plural = 6] = "plural", e[e.pound = 7] = "pound", e[e.tag = 8] = "tag"; -})(P || (P = {})); -var ue; -(function(e) { - e[e.number = 0] = "number", e[e.dateTime = 1] = "dateTime"; -})(ue || (ue = {})); -function Pt(e) { - return e.type === P.literal; -} -function Ei(e) { - return e.type === P.argument; -} -function En(e) { - return e.type === P.number; -} -function wn(e) { - return e.type === P.date; -} -function xn(e) { - return e.type === P.time; -} -function Tn(e) { - return e.type === P.select; -} -function Hn(e) { - return e.type === P.plural; -} -function wi(e) { - return e.type === P.pound; -} -function Bn(e) { - return e.type === P.tag; -} -function Sn(e) { - return !!(e && typeof e == "object" && e.type === ue.number); -} -function rt(e) { - return !!(e && typeof e == "object" && e.type === ue.dateTime); -} -var An = /[ \xA0\u1680\u2000-\u200A\u202F\u205F\u3000]/, xi = 
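- // ICU date/time skeleton tokens (era, year, month, day, weekday, hour, minute, second, zone), matched only outside '...'-quoted literals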
/(?:[Eec]{1,6}|G{1,5}|[Qq]{1,5}|(?:[yYur]+|U{1,5})|[ML]{1,5}|d{1,2}|D{1,3}|F{1}|[abB]{1,5}|[hkHK]{1,2}|w{1,2}|W{1}|m{1,2}|s{1,2}|[zZOvVxX]{1,4})(?=([^']*'[^']*')*[^']*$)/g; -function Ti(e) { - var t = {}; - return e.replace(xi, function(n) { - var r = n.length; - switch (n[0]) { - case "G": - t.era = r === 4 ? "long" : r === 5 ? "narrow" : "short"; - break; - case "y": - t.year = r === 2 ? "2-digit" : "numeric"; - break; - case "Y": - case "u": - case "U": - case "r": - throw new RangeError("`Y/u/U/r` (year) patterns are not supported, use `y` instead"); - case "q": - case "Q": - throw new RangeError("`q/Q` (quarter) patterns are not supported"); - case "M": - case "L": - t.month = ["numeric", "2-digit", "short", "long", "narrow"][r - 1]; - break; - case "w": - case "W": - throw new RangeError("`w/W` (week) patterns are not supported"); - case "d": - t.day = ["numeric", "2-digit"][r - 1]; - break; - case "D": - case "F": - case "g": - throw new RangeError("`D/F/g` (day) patterns are not supported, use `d` instead"); - case "E": - t.weekday = r === 4 ? "short" : r === 5 ? "narrow" : "short"; - break; - case "e": - if (r < 4) - throw new RangeError("`e..eee` (weekday) patterns are not supported"); - t.weekday = ["short", "long", "narrow", "short"][r - 4]; - break; - case "c": - if (r < 4) - throw new RangeError("`c..ccc` (weekday) patterns are not supported"); - t.weekday = ["short", "long", "narrow", "short"][r - 4]; - break; - case "a": - t.hour12 = !0; - break; - case "b": - case "B": - throw new RangeError("`b/B` (period) patterns are not supported, use `a` instead"); - case "h": - t.hourCycle = "h12", t.hour = ["numeric", "2-digit"][r - 1]; - break; - case "H": - t.hourCycle = "h23", t.hour = ["numeric", "2-digit"][r - 1]; - break; - case "K": - t.hourCycle = "h11", t.hour = ["numeric", "2-digit"][r - 1]; - break; - case "k": - t.hourCycle = "h24", t.hour = ["numeric", "2-digit"][r - 1]; - break; - case "j": - case "J": - case "C": - throw new RangeError("`j/J/C` (hour) patterns are not supported, use `h/H/K/k` instead"); - case "m": - t.minute = ["numeric", "2-digit"][r - 1]; - break; - case "s": - t.second = ["numeric", "2-digit"][r - 1]; - break; - case "S": - case "A": - throw new RangeError("`S/A` (second) patterns are not supported, use `s` instead"); - case "z": - t.timeZoneName = r < 4 ? "short" : "long"; - break; - case "Z": - case "O": - case "v": - case "V": - case "X": - case "x": - throw new RangeError("`Z/O/v/V/X/x` (timeZone) patterns are not supported, use `z` instead"); - } - return ""; - }), t; -} -var Hi = /[\t-\r \x85\u200E\u200F\u2028\u2029]/i; -function Bi(e) { - if (e.length === 0) - throw new Error("Number skeleton cannot be empty"); - for (var t = e.split(Hi).filter(function(h) { - return h.length > 0; - }), n = [], r = 0, i = t; r < i.length; r++) { - var s = i[r], l = s.split("/"); - if (l.length === 0) - throw new Error("Invalid number skeleton"); - for (var o = l[0], a = l.slice(1), u = 0, f = a; u < f.length; u++) { - var c = f[u]; - if (c.length === 0) - throw new Error("Invalid number skeleton"); - } - n.push({ stem: o, options: a }); - } - return n; -} -function Si(e) { - return e.replace(/^(.*?)-/, ""); -} -var Nt = /^\.(?:(0+)(\*)?|(#+)|(0+)(#+))$/g, Pn = /^(@+)?(\+|#+)?[rs]?$/g, Ai = /(\*)(0+)|(#+)(0+)|(0+)/g, Nn = /^(0+)$/; -function It(e) { - var t = {}; - return e[e.length - 1] === "r" ? 
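- // a trailing "r" or "s" on a significant-digits skeleton selects the rounding priority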
t.roundingPriority = "morePrecision" : e[e.length - 1] === "s" && (t.roundingPriority = "lessPrecision"), e.replace(Pn, function(n, r, i) { - return typeof i != "string" ? (t.minimumSignificantDigits = r.length, t.maximumSignificantDigits = r.length) : i === "+" ? t.minimumSignificantDigits = r.length : r[0] === "#" ? t.maximumSignificantDigits = r.length : (t.minimumSignificantDigits = r.length, t.maximumSignificantDigits = r.length + (typeof i == "string" ? i.length : 0)), ""; - }), t; -} -function In(e) { - switch (e) { - case "sign-auto": - return { - signDisplay: "auto" - }; - case "sign-accounting": - case "()": - return { - currencySign: "accounting" - }; - case "sign-always": - case "+!": - return { - signDisplay: "always" - }; - case "sign-accounting-always": - case "()!": - return { - signDisplay: "always", - currencySign: "accounting" - }; - case "sign-except-zero": - case "+?": - return { - signDisplay: "exceptZero" - }; - case "sign-accounting-except-zero": - case "()?": - return { - signDisplay: "exceptZero", - currencySign: "accounting" - }; - case "sign-never": - case "+_": - return { - signDisplay: "never" - }; - } -} -function Pi(e) { - var t; - if (e[0] === "E" && e[1] === "E" ? (t = { - notation: "engineering" - }, e = e.slice(2)) : e[0] === "E" && (t = { - notation: "scientific" - }, e = e.slice(1)), t) { - var n = e.slice(0, 2); - if (n === "+!" ? (t.signDisplay = "always", e = e.slice(2)) : n === "+?" && (t.signDisplay = "exceptZero", e = e.slice(2)), !Nn.test(e)) - throw new Error("Malformed concise eng/scientific notation"); - t.minimumIntegerDigits = e.length; - } - return t; -} -function Ct(e) { - var t = {}, n = In(e); - return n || t; -} -function Ni(e) { - for (var t = {}, n = 0, r = e; n < r.length; n++) { - var i = r[n]; - switch (i.stem) { - case "percent": - case "%": - t.style = "percent"; - continue; - case "%x100": - t.style = "percent", t.scale = 100; - continue; - case "currency": - t.style = "currency", t.currency = i.options[0]; - continue; - case "group-off": - case ",_": - t.useGrouping = !1; - continue; - case "precision-integer": - case ".": - t.maximumFractionDigits = 0; - continue; - case "measure-unit": - case "unit": - t.style = "unit", t.unit = Si(i.options[0]); - continue; - case "compact-short": - case "K": - t.notation = "compact", t.compactDisplay = "short"; - continue; - case "compact-long": - case "KK": - t.notation = "compact", t.compactDisplay = "long"; - continue; - case "scientific": - t = A(A(A({}, t), { notation: "scientific" }), i.options.reduce(function(a, u) { - return A(A({}, a), Ct(u)); - }, {})); - continue; - case "engineering": - t = A(A(A({}, t), { notation: "engineering" }), i.options.reduce(function(a, u) { - return A(A({}, a), Ct(u)); - }, {})); - continue; - case "notation-simple": - t.notation = "standard"; - continue; - case "unit-width-narrow": - t.currencyDisplay = "narrowSymbol", t.unitDisplay = "narrow"; - continue; - case "unit-width-short": - t.currencyDisplay = "code", t.unitDisplay = "short"; - continue; - case "unit-width-full-name": - t.currencyDisplay = "name", t.unitDisplay = "long"; - continue; - case "unit-width-iso-code": - t.currencyDisplay = "symbol"; - continue; - case "scale": - t.scale = parseFloat(i.options[0]); - continue; - case "integer-width": - if (i.options.length > 1) - throw new RangeError("integer-width stems only accept a single optional option"); - i.options[0].replace(Ai, function(a, u, f, c, h, _) { - if (u) - t.minimumIntegerDigits = f.length; - else { - if (c && h) - throw new 
Error("We currently do not support maximum integer digits"); - if (_) - throw new Error("We currently do not support exact integer digits"); - } - return ""; - }); - continue; - } - if (Nn.test(i.stem)) { - t.minimumIntegerDigits = i.stem.length; - continue; - } - if (Nt.test(i.stem)) { - if (i.options.length > 1) - throw new RangeError("Fraction-precision stems only accept a single optional option"); - i.stem.replace(Nt, function(a, u, f, c, h, _) { - return f === "*" ? t.minimumFractionDigits = u.length : c && c[0] === "#" ? t.maximumFractionDigits = c.length : h && _ ? (t.minimumFractionDigits = h.length, t.maximumFractionDigits = h.length + _.length) : (t.minimumFractionDigits = u.length, t.maximumFractionDigits = u.length), ""; - }); - var s = i.options[0]; - s === "w" ? t = A(A({}, t), { trailingZeroDisplay: "stripIfInteger" }) : s && (t = A(A({}, t), It(s))); - continue; - } - if (Pn.test(i.stem)) { - t = A(A({}, t), It(i.stem)); - continue; - } - var l = In(i.stem); - l && (t = A(A({}, t), l)); - var o = Pi(i.stem); - o && (t = A(A({}, t), o)); - } - return t; -} -var Ne = { - AX: [ - "H" - ], - BQ: [ - "H" - ], - CP: [ - "H" - ], - CZ: [ - "H" - ], - DK: [ - "H" - ], - FI: [ - "H" - ], - ID: [ - "H" - ], - IS: [ - "H" - ], - ML: [ - "H" - ], - NE: [ - "H" - ], - RU: [ - "H" - ], - SE: [ - "H" - ], - SJ: [ - "H" - ], - SK: [ - "H" - ], - AS: [ - "h", - "H" - ], - BT: [ - "h", - "H" - ], - DJ: [ - "h", - "H" - ], - ER: [ - "h", - "H" - ], - GH: [ - "h", - "H" - ], - IN: [ - "h", - "H" - ], - LS: [ - "h", - "H" - ], - PG: [ - "h", - "H" - ], - PW: [ - "h", - "H" - ], - SO: [ - "h", - "H" - ], - TO: [ - "h", - "H" - ], - VU: [ - "h", - "H" - ], - WS: [ - "h", - "H" - ], - "001": [ - "H", - "h" - ], - AL: [ - "h", - "H", - "hB" - ], - TD: [ - "h", - "H", - "hB" - ], - "ca-ES": [ - "H", - "h", - "hB" - ], - CF: [ - "H", - "h", - "hB" - ], - CM: [ - "H", - "h", - "hB" - ], - "fr-CA": [ - "H", - "h", - "hB" - ], - "gl-ES": [ - "H", - "h", - "hB" - ], - "it-CH": [ - "H", - "h", - "hB" - ], - "it-IT": [ - "H", - "h", - "hB" - ], - LU: [ - "H", - "h", - "hB" - ], - NP: [ - "H", - "h", - "hB" - ], - PF: [ - "H", - "h", - "hB" - ], - SC: [ - "H", - "h", - "hB" - ], - SM: [ - "H", - "h", - "hB" - ], - SN: [ - "H", - "h", - "hB" - ], - TF: [ - "H", - "h", - "hB" - ], - VA: [ - "H", - "h", - "hB" - ], - CY: [ - "h", - "H", - "hb", - "hB" - ], - GR: [ - "h", - "H", - "hb", - "hB" - ], - CO: [ - "h", - "H", - "hB", - "hb" - ], - DO: [ - "h", - "H", - "hB", - "hb" - ], - KP: [ - "h", - "H", - "hB", - "hb" - ], - KR: [ - "h", - "H", - "hB", - "hb" - ], - NA: [ - "h", - "H", - "hB", - "hb" - ], - PA: [ - "h", - "H", - "hB", - "hb" - ], - PR: [ - "h", - "H", - "hB", - "hb" - ], - VE: [ - "h", - "H", - "hB", - "hb" - ], - AC: [ - "H", - "h", - "hb", - "hB" - ], - AI: [ - "H", - "h", - "hb", - "hB" - ], - BW: [ - "H", - "h", - "hb", - "hB" - ], - BZ: [ - "H", - "h", - "hb", - "hB" - ], - CC: [ - "H", - "h", - "hb", - "hB" - ], - CK: [ - "H", - "h", - "hb", - "hB" - ], - CX: [ - "H", - "h", - "hb", - "hB" - ], - DG: [ - "H", - "h", - "hb", - "hB" - ], - FK: [ - "H", - "h", - "hb", - "hB" - ], - GB: [ - "H", - "h", - "hb", - "hB" - ], - GG: [ - "H", - "h", - "hb", - "hB" - ], - GI: [ - "H", - "h", - "hb", - "hB" - ], - IE: [ - "H", - "h", - "hb", - "hB" - ], - IM: [ - "H", - "h", - "hb", - "hB" - ], - IO: [ - "H", - "h", - "hb", - "hB" - ], - JE: [ - "H", - "h", - "hb", - "hB" - ], - LT: [ - "H", - "h", - "hb", - "hB" - ], - MK: [ - "H", - "h", - "hb", - "hB" - ], - MN: [ - "H", - "h", - "hb", - "hB" - ], 
- MS: [ - "H", - "h", - "hb", - "hB" - ], - NF: [ - "H", - "h", - "hb", - "hB" - ], - NG: [ - "H", - "h", - "hb", - "hB" - ], - NR: [ - "H", - "h", - "hb", - "hB" - ], - NU: [ - "H", - "h", - "hb", - "hB" - ], - PN: [ - "H", - "h", - "hb", - "hB" - ], - SH: [ - "H", - "h", - "hb", - "hB" - ], - SX: [ - "H", - "h", - "hb", - "hB" - ], - TA: [ - "H", - "h", - "hb", - "hB" - ], - ZA: [ - "H", - "h", - "hb", - "hB" - ], - "af-ZA": [ - "H", - "h", - "hB", - "hb" - ], - AR: [ - "H", - "h", - "hB", - "hb" - ], - CL: [ - "H", - "h", - "hB", - "hb" - ], - CR: [ - "H", - "h", - "hB", - "hb" - ], - CU: [ - "H", - "h", - "hB", - "hb" - ], - EA: [ - "H", - "h", - "hB", - "hb" - ], - "es-BO": [ - "H", - "h", - "hB", - "hb" - ], - "es-BR": [ - "H", - "h", - "hB", - "hb" - ], - "es-EC": [ - "H", - "h", - "hB", - "hb" - ], - "es-ES": [ - "H", - "h", - "hB", - "hb" - ], - "es-GQ": [ - "H", - "h", - "hB", - "hb" - ], - "es-PE": [ - "H", - "h", - "hB", - "hb" - ], - GT: [ - "H", - "h", - "hB", - "hb" - ], - HN: [ - "H", - "h", - "hB", - "hb" - ], - IC: [ - "H", - "h", - "hB", - "hb" - ], - KG: [ - "H", - "h", - "hB", - "hb" - ], - KM: [ - "H", - "h", - "hB", - "hb" - ], - LK: [ - "H", - "h", - "hB", - "hb" - ], - MA: [ - "H", - "h", - "hB", - "hb" - ], - MX: [ - "H", - "h", - "hB", - "hb" - ], - NI: [ - "H", - "h", - "hB", - "hb" - ], - PY: [ - "H", - "h", - "hB", - "hb" - ], - SV: [ - "H", - "h", - "hB", - "hb" - ], - UY: [ - "H", - "h", - "hB", - "hb" - ], - JP: [ - "H", - "h", - "K" - ], - AD: [ - "H", - "hB" - ], - AM: [ - "H", - "hB" - ], - AO: [ - "H", - "hB" - ], - AT: [ - "H", - "hB" - ], - AW: [ - "H", - "hB" - ], - BE: [ - "H", - "hB" - ], - BF: [ - "H", - "hB" - ], - BJ: [ - "H", - "hB" - ], - BL: [ - "H", - "hB" - ], - BR: [ - "H", - "hB" - ], - CG: [ - "H", - "hB" - ], - CI: [ - "H", - "hB" - ], - CV: [ - "H", - "hB" - ], - DE: [ - "H", - "hB" - ], - EE: [ - "H", - "hB" - ], - FR: [ - "H", - "hB" - ], - GA: [ - "H", - "hB" - ], - GF: [ - "H", - "hB" - ], - GN: [ - "H", - "hB" - ], - GP: [ - "H", - "hB" - ], - GW: [ - "H", - "hB" - ], - HR: [ - "H", - "hB" - ], - IL: [ - "H", - "hB" - ], - IT: [ - "H", - "hB" - ], - KZ: [ - "H", - "hB" - ], - MC: [ - "H", - "hB" - ], - MD: [ - "H", - "hB" - ], - MF: [ - "H", - "hB" - ], - MQ: [ - "H", - "hB" - ], - MZ: [ - "H", - "hB" - ], - NC: [ - "H", - "hB" - ], - NL: [ - "H", - "hB" - ], - PM: [ - "H", - "hB" - ], - PT: [ - "H", - "hB" - ], - RE: [ - "H", - "hB" - ], - RO: [ - "H", - "hB" - ], - SI: [ - "H", - "hB" - ], - SR: [ - "H", - "hB" - ], - ST: [ - "H", - "hB" - ], - TG: [ - "H", - "hB" - ], - TR: [ - "H", - "hB" - ], - WF: [ - "H", - "hB" - ], - YT: [ - "H", - "hB" - ], - BD: [ - "h", - "hB", - "H" - ], - PK: [ - "h", - "hB", - "H" - ], - AZ: [ - "H", - "hB", - "h" - ], - BA: [ - "H", - "hB", - "h" - ], - BG: [ - "H", - "hB", - "h" - ], - CH: [ - "H", - "hB", - "h" - ], - GE: [ - "H", - "hB", - "h" - ], - LI: [ - "H", - "hB", - "h" - ], - ME: [ - "H", - "hB", - "h" - ], - RS: [ - "H", - "hB", - "h" - ], - UA: [ - "H", - "hB", - "h" - ], - UZ: [ - "H", - "hB", - "h" - ], - XK: [ - "H", - "hB", - "h" - ], - AG: [ - "h", - "hb", - "H", - "hB" - ], - AU: [ - "h", - "hb", - "H", - "hB" - ], - BB: [ - "h", - "hb", - "H", - "hB" - ], - BM: [ - "h", - "hb", - "H", - "hB" - ], - BS: [ - "h", - "hb", - "H", - "hB" - ], - CA: [ - "h", - "hb", - "H", - "hB" - ], - DM: [ - "h", - "hb", - "H", - "hB" - ], - "en-001": [ - "h", - "hb", - "H", - "hB" - ], - FJ: [ - "h", - "hb", - "H", - "hB" - ], - FM: [ - "h", - "hb", - "H", - "hB" - ], - GD: [ - "h", - "hb", - "H", 
- "hB" - ], - GM: [ - "h", - "hb", - "H", - "hB" - ], - GU: [ - "h", - "hb", - "H", - "hB" - ], - GY: [ - "h", - "hb", - "H", - "hB" - ], - JM: [ - "h", - "hb", - "H", - "hB" - ], - KI: [ - "h", - "hb", - "H", - "hB" - ], - KN: [ - "h", - "hb", - "H", - "hB" - ], - KY: [ - "h", - "hb", - "H", - "hB" - ], - LC: [ - "h", - "hb", - "H", - "hB" - ], - LR: [ - "h", - "hb", - "H", - "hB" - ], - MH: [ - "h", - "hb", - "H", - "hB" - ], - MP: [ - "h", - "hb", - "H", - "hB" - ], - MW: [ - "h", - "hb", - "H", - "hB" - ], - NZ: [ - "h", - "hb", - "H", - "hB" - ], - SB: [ - "h", - "hb", - "H", - "hB" - ], - SG: [ - "h", - "hb", - "H", - "hB" - ], - SL: [ - "h", - "hb", - "H", - "hB" - ], - SS: [ - "h", - "hb", - "H", - "hB" - ], - SZ: [ - "h", - "hb", - "H", - "hB" - ], - TC: [ - "h", - "hb", - "H", - "hB" - ], - TT: [ - "h", - "hb", - "H", - "hB" - ], - UM: [ - "h", - "hb", - "H", - "hB" - ], - US: [ - "h", - "hb", - "H", - "hB" - ], - VC: [ - "h", - "hb", - "H", - "hB" - ], - VG: [ - "h", - "hb", - "H", - "hB" - ], - VI: [ - "h", - "hb", - "H", - "hB" - ], - ZM: [ - "h", - "hb", - "H", - "hB" - ], - BO: [ - "H", - "hB", - "h", - "hb" - ], - EC: [ - "H", - "hB", - "h", - "hb" - ], - ES: [ - "H", - "hB", - "h", - "hb" - ], - GQ: [ - "H", - "hB", - "h", - "hb" - ], - PE: [ - "H", - "hB", - "h", - "hb" - ], - AE: [ - "h", - "hB", - "hb", - "H" - ], - "ar-001": [ - "h", - "hB", - "hb", - "H" - ], - BH: [ - "h", - "hB", - "hb", - "H" - ], - DZ: [ - "h", - "hB", - "hb", - "H" - ], - EG: [ - "h", - "hB", - "hb", - "H" - ], - EH: [ - "h", - "hB", - "hb", - "H" - ], - HK: [ - "h", - "hB", - "hb", - "H" - ], - IQ: [ - "h", - "hB", - "hb", - "H" - ], - JO: [ - "h", - "hB", - "hb", - "H" - ], - KW: [ - "h", - "hB", - "hb", - "H" - ], - LB: [ - "h", - "hB", - "hb", - "H" - ], - LY: [ - "h", - "hB", - "hb", - "H" - ], - MO: [ - "h", - "hB", - "hb", - "H" - ], - MR: [ - "h", - "hB", - "hb", - "H" - ], - OM: [ - "h", - "hB", - "hb", - "H" - ], - PH: [ - "h", - "hB", - "hb", - "H" - ], - PS: [ - "h", - "hB", - "hb", - "H" - ], - QA: [ - "h", - "hB", - "hb", - "H" - ], - SA: [ - "h", - "hB", - "hb", - "H" - ], - SD: [ - "h", - "hB", - "hb", - "H" - ], - SY: [ - "h", - "hB", - "hb", - "H" - ], - TN: [ - "h", - "hB", - "hb", - "H" - ], - YE: [ - "h", - "hB", - "hb", - "H" - ], - AF: [ - "H", - "hb", - "hB", - "h" - ], - LA: [ - "H", - "hb", - "hB", - "h" - ], - CN: [ - "H", - "hB", - "hb", - "h" - ], - LV: [ - "H", - "hB", - "hb", - "h" - ], - TL: [ - "H", - "hB", - "hb", - "h" - ], - "zu-ZA": [ - "H", - "hB", - "hb", - "h" - ], - CD: [ - "hB", - "H" - ], - IR: [ - "hB", - "H" - ], - "hi-IN": [ - "hB", - "h", - "H" - ], - "kn-IN": [ - "hB", - "h", - "H" - ], - "ml-IN": [ - "hB", - "h", - "H" - ], - "te-IN": [ - "hB", - "h", - "H" - ], - KH: [ - "hB", - "h", - "H", - "hb" - ], - "ta-IN": [ - "hB", - "h", - "hb", - "H" - ], - BN: [ - "hb", - "hB", - "h", - "H" - ], - MY: [ - "hb", - "hB", - "h", - "H" - ], - ET: [ - "hB", - "hb", - "h", - "H" - ], - "gu-IN": [ - "hB", - "hb", - "h", - "H" - ], - "mr-IN": [ - "hB", - "hb", - "h", - "H" - ], - "pa-IN": [ - "hB", - "hb", - "h", - "H" - ], - TW: [ - "hB", - "hb", - "h", - "H" - ], - KE: [ - "hB", - "hb", - "H", - "h" - ], - MM: [ - "hB", - "hb", - "H", - "h" - ], - TZ: [ - "hB", - "hb", - "H", - "h" - ], - UG: [ - "hB", - "hb", - "H", - "h" - ] -}; -function Ii(e, t) { - for (var n = "", r = 0; r < e.length; r++) { - var i = e.charAt(r); - if (i === "j") { - for (var s = 0; r + 1 < e.length && e.charAt(r + 1) === i; ) - s++, r++; - var l = 1 + (s & 1), o = s < 2 ? 
1 : 3 + (s >> 1), a = "a", u = Ci(t); - for ((u == "H" || u == "k") && (o = 0); o-- > 0; ) - n += a; - for (; l-- > 0; ) - n = u + n; - } else - i === "J" ? n += "H" : n += i; - } - return n; -} -function Ci(e) { - var t = e.hourCycle; - if (t === void 0 && // @ts-ignore hourCycle(s) is not identified yet - e.hourCycles && // @ts-ignore - e.hourCycles.length && (t = e.hourCycles[0]), t) - switch (t) { - case "h24": - return "k"; - case "h23": - return "H"; - case "h12": - return "h"; - case "h11": - return "K"; - default: - throw new Error("Invalid hourCycle"); - } - var n = e.language, r; - n !== "root" && (r = e.maximize().region); - var i = Ne[r || ""] || Ne[n || ""] || Ne["".concat(n, "-001")] || Ne["001"]; - return i[0]; -} -var Qe, Li = new RegExp("^".concat(An.source, "*")), Oi = new RegExp("".concat(An.source, "*$")); -function H(e, t) { - return { start: e, end: t }; -} -var Mi = !!String.prototype.startsWith, Ri = !!String.fromCodePoint, Ui = !!Object.fromEntries, Di = !!String.prototype.codePointAt, ki = !!String.prototype.trimStart, Gi = !!String.prototype.trimEnd, Fi = !!Number.isSafeInteger, Vi = Fi ? Number.isSafeInteger : function(e) { - return typeof e == "number" && isFinite(e) && Math.floor(e) === e && Math.abs(e) <= 9007199254740991; -}, it = !0; -try { - var ji = Ln("([^\\p{White_Space}\\p{Pattern_Syntax}]*)", "yu"); - it = ((Qe = ji.exec("a")) === null || Qe === void 0 ? void 0 : Qe[0]) === "a"; -} catch { - it = !1; -} -var Lt = Mi ? ( - // Native - function(t, n, r) { - return t.startsWith(n, r); - } -) : ( - // For IE11 - function(t, n, r) { - return t.slice(r, r + n.length) === n; - } -), st = Ri ? String.fromCodePoint : ( - // IE11 - function() { - for (var t = [], n = 0; n < arguments.length; n++) - t[n] = arguments[n]; - for (var r = "", i = t.length, s = 0, l; i > s; ) { - if (l = t[s++], l > 1114111) - throw RangeError(l + " is not a valid code point"); - r += l < 65536 ? String.fromCharCode(l) : String.fromCharCode(((l -= 65536) >> 10) + 55296, l % 1024 + 56320); - } - return r; - } -), Ot = ( - // native - Ui ? Object.fromEntries : ( - // Ponyfill - function(t) { - for (var n = {}, r = 0, i = t; r < i.length; r++) { - var s = i[r], l = s[0], o = s[1]; - n[l] = o; - } - return n; - } - ) -), Cn = Di ? ( - // Native - function(t, n) { - return t.codePointAt(n); - } -) : ( - // IE 11 - function(t, n) { - var r = t.length; - if (!(n < 0 || n >= r)) { - var i = t.charCodeAt(n), s; - return i < 55296 || i > 56319 || n + 1 === r || (s = t.charCodeAt(n + 1)) < 56320 || s > 57343 ? i : (i - 55296 << 10) + (s - 56320) + 65536; - } - } -), Xi = ki ? ( - // Native - function(t) { - return t.trimStart(); - } -) : ( - // Ponyfill - function(t) { - return t.replace(Li, ""); - } -), qi = Gi ? ( - // Native - function(t) { - return t.trimEnd(); - } -) : ( - // Ponyfill - function(t) { - return t.replace(Oi, ""); - } -); -function Ln(e, t) { - return new RegExp(e, t); -} -var lt; -if (it) { - var Mt = Ln("([^\\p{White_Space}\\p{Pattern_Syntax}]*)", "yu"); - lt = function(t, n) { - var r; - Mt.lastIndex = n; - var i = Mt.exec(t); - return (r = i[1]) !== null && r !== void 0 ? r : ""; - }; -} else - lt = function(t, n) { - for (var r = []; ; ) { - var i = Cn(t, n); - if (i === void 0 || On(i) || Qi(i)) - break; - r.push(i), n += i >= 65536 ? 
2 : 1;
 - }
 - return st.apply(void 0, r);
 - };
-var zi = (
 - /** @class */
 - function() {
 - function e(t, n) {
 - n === void 0 && (n = {}), this.message = t, this.position = { offset: 0, line: 1, column: 1 }, this.ignoreTag = !!n.ignoreTag, this.locale = n.locale, this.requiresOtherClause = !!n.requiresOtherClause, this.shouldParseSkeletons = !!n.shouldParseSkeletons;
 - }
 - return e.prototype.parse = function() {
 - if (this.offset() !== 0)
 - throw Error("parser can only be used once");
 - return this.parseMessage(0, "", !1);
 - }, e.prototype.parseMessage = function(t, n, r) {
 - for (var i = []; !this.isEOF(); ) {
 - var s = this.char();
 - if (s === 123) {
 - var l = this.parseArgument(t, r);
 - if (l.err)
 - return l;
 - i.push(l.val);
 - } else {
 - if (s === 125 && t > 0)
 - break;
 - if (s === 35 && (n === "plural" || n === "selectordinal")) {
 - var o = this.clonePosition();
 - this.bump(), i.push({
 - type: P.pound,
 - location: H(o, this.clonePosition())
 - });
 - } else if (s === 60 && !this.ignoreTag && this.peek() === 47) {
 - if (r)
 - break;
 - return this.error(x.UNMATCHED_CLOSING_TAG, H(this.clonePosition(), this.clonePosition()));
 - } else if (s === 60 && !this.ignoreTag && ot(this.peek() || 0)) {
 - var l = this.parseTag(t, n);
 - if (l.err)
 - return l;
 - i.push(l.val);
 - } else {
 - var l = this.parseLiteral(t, n);
 - if (l.err)
 - return l;
 - i.push(l.val);
 - }
 - }
 - }
 - return { val: i, err: null };
 - }, e.prototype.parseTag = function(t, n) {
 - var r = this.clonePosition();
 - this.bump();
 - var i = this.parseTagName();
 - if (this.bumpSpace(), this.bumpIf("/>"))
 - return {
 - val: {
 - type: P.literal,
 - value: "<".concat(i, "/>"),
 - location: H(r, this.clonePosition())
 - },
 - err: null
 - };
 - if (this.bumpIf(">")) {
 - var s = this.parseMessage(t + 1, n, !0);
 - if (s.err)
 - return s;
 - var l = s.val, o = this.clonePosition();
 - if (this.bumpIf("</")) {
 - if (this.isEOF() || !ot(this.char()))
 - return this.error(x.INVALID_TAG, H(o, this.clonePosition()));
 - var a = this.clonePosition(), u = this.parseTagName();
 - return i !== u ? this.error(x.UNMATCHED_CLOSING_TAG, H(a, this.clonePosition())) : (this.bumpSpace(), this.bumpIf(">") ? {
 - val: {
 - type: P.tag,
 - value: i,
 - children: l,
 - location: H(r, this.clonePosition())
 - },
 - err: null
 - } : this.error(x.INVALID_TAG, H(o, this.clonePosition())));
 - } else
 - return this.error(x.UNCLOSED_TAG, H(r, this.clonePosition()));
 - } else
 - return this.error(x.INVALID_TAG, H(r, this.clonePosition()));
 - }, e.prototype.parseTagName = function() {
 - var t = this.offset();
 - for (this.bump(); !this.isEOF() && Wi(this.char()); )
 - this.bump();
 - return this.message.slice(t, this.offset());
 - }, e.prototype.parseLiteral = function(t, n) {
 - for (var r = this.clonePosition(), i = ""; ; ) {
 - var s = this.tryParseQuote(n);
 - if (s) {
 - i += s;
 - continue;
 - }
 - var l = this.tryParseUnquoted(t, n);
 - if (l) {
 - i += l;
 - continue;
 - }
 - var o = this.tryParseLeftAngleBracket();
 - if (o) {
 - i += o;
 - continue;
 - }
 - break;
 - }
 - var a = H(r, this.clonePosition());
 - return {
 - val: { type: P.literal, value: i, location: a },
 - err: null
 - };
 - }, e.prototype.tryParseLeftAngleBracket = function() {
 - return !this.isEOF() && this.char() === 60 && (this.ignoreTag || // If at the opening tag or closing tag position, bail.
 - !Zi(this.peek() || 0)) ?
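- // a "<" that cannot open or close a tag is consumed as a literal character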
(this.bump(), "<") : null; - }, e.prototype.tryParseQuote = function(t) { - if (this.isEOF() || this.char() !== 39) - return null; - switch (this.peek()) { - case 39: - return this.bump(), this.bump(), "'"; - case 123: - case 60: - case 62: - case 125: - break; - case 35: - if (t === "plural" || t === "selectordinal") - break; - return null; - default: - return null; - } - this.bump(); - var n = [this.char()]; - for (this.bump(); !this.isEOF(); ) { - var r = this.char(); - if (r === 39) - if (this.peek() === 39) - n.push(39), this.bump(); - else { - this.bump(); - break; - } - else - n.push(r); - this.bump(); - } - return st.apply(void 0, n); - }, e.prototype.tryParseUnquoted = function(t, n) { - if (this.isEOF()) - return null; - var r = this.char(); - return r === 60 || r === 123 || r === 35 && (n === "plural" || n === "selectordinal") || r === 125 && t > 0 ? null : (this.bump(), st(r)); - }, e.prototype.parseArgument = function(t, n) { - var r = this.clonePosition(); - if (this.bump(), this.bumpSpace(), this.isEOF()) - return this.error(x.EXPECT_ARGUMENT_CLOSING_BRACE, H(r, this.clonePosition())); - if (this.char() === 125) - return this.bump(), this.error(x.EMPTY_ARGUMENT, H(r, this.clonePosition())); - var i = this.parseIdentifierIfPossible().value; - if (!i) - return this.error(x.MALFORMED_ARGUMENT, H(r, this.clonePosition())); - if (this.bumpSpace(), this.isEOF()) - return this.error(x.EXPECT_ARGUMENT_CLOSING_BRACE, H(r, this.clonePosition())); - switch (this.char()) { - case 125: - return this.bump(), { - val: { - type: P.argument, - // value does not include the opening and closing braces. - value: i, - location: H(r, this.clonePosition()) - }, - err: null - }; - case 44: - return this.bump(), this.bumpSpace(), this.isEOF() ? this.error(x.EXPECT_ARGUMENT_CLOSING_BRACE, H(r, this.clonePosition())) : this.parseArgumentOptions(t, n, i, r); - default: - return this.error(x.MALFORMED_ARGUMENT, H(r, this.clonePosition())); - } - }, e.prototype.parseIdentifierIfPossible = function() { - var t = this.clonePosition(), n = this.offset(), r = lt(this.message, n), i = n + r.length; - this.bumpTo(i); - var s = this.clonePosition(), l = H(t, s); - return { value: r, location: l }; - }, e.prototype.parseArgumentOptions = function(t, n, r, i) { - var s, l = this.clonePosition(), o = this.parseIdentifierIfPossible().value, a = this.clonePosition(); - switch (o) { - case "": - return this.error(x.EXPECT_ARGUMENT_TYPE, H(l, a)); - case "number": - case "date": - case "time": { - this.bumpSpace(); - var u = null; - if (this.bumpIf(",")) { - this.bumpSpace(); - var f = this.clonePosition(), c = this.parseSimpleArgStyleIfPossible(); - if (c.err) - return c; - var h = qi(c.val); - if (h.length === 0) - return this.error(x.EXPECT_ARGUMENT_STYLE, H(this.clonePosition(), this.clonePosition())); - var _ = H(f, this.clonePosition()); - u = { style: h, styleLocation: _ }; - } - var d = this.tryParseArgumentClose(i); - if (d.err) - return d; - var E = H(i, this.clonePosition()); - if (u && Lt(u == null ? void 0 : u.style, "::", 0)) { - var w = Xi(u.style.slice(2)); - if (o === "number") { - var c = this.parseNumberSkeletonFromString(w, u.styleLocation); - return c.err ? 
c : { - val: { type: P.number, value: r, location: E, style: c.val }, - err: null - }; - } else { - if (w.length === 0) - return this.error(x.EXPECT_DATE_TIME_SKELETON, E); - var N = w; - this.locale && (N = Ii(w, this.locale)); - var h = { - type: ue.dateTime, - pattern: N, - location: u.styleLocation, - parsedOptions: this.shouldParseSkeletons ? Ti(N) : {} - }, b = o === "date" ? P.date : P.time; - return { - val: { type: b, value: r, location: E, style: h }, - err: null - }; - } - } - return { - val: { - type: o === "number" ? P.number : o === "date" ? P.date : P.time, - value: r, - location: E, - style: (s = u == null ? void 0 : u.style) !== null && s !== void 0 ? s : null - }, - err: null - }; - } - case "plural": - case "selectordinal": - case "select": { - var m = this.clonePosition(); - if (this.bumpSpace(), !this.bumpIf(",")) - return this.error(x.EXPECT_SELECT_ARGUMENT_OPTIONS, H(m, A({}, m))); - this.bumpSpace(); - var T = this.parseIdentifierIfPossible(), g = 0; - if (o !== "select" && T.value === "offset") { - if (!this.bumpIf(":")) - return this.error(x.EXPECT_PLURAL_ARGUMENT_OFFSET_VALUE, H(this.clonePosition(), this.clonePosition())); - this.bumpSpace(); - var c = this.tryParseDecimalInteger(x.EXPECT_PLURAL_ARGUMENT_OFFSET_VALUE, x.INVALID_PLURAL_ARGUMENT_OFFSET_VALUE); - if (c.err) - return c; - this.bumpSpace(), T = this.parseIdentifierIfPossible(), g = c.val; - } - var j = this.tryParsePluralOrSelectOptions(t, o, n, T); - if (j.err) - return j; - var d = this.tryParseArgumentClose(i); - if (d.err) - return d; - var X = H(i, this.clonePosition()); - return o === "select" ? { - val: { - type: P.select, - value: r, - options: Ot(j.val), - location: X - }, - err: null - } : { - val: { - type: P.plural, - value: r, - options: Ot(j.val), - offset: g, - pluralType: o === "plural" ? "cardinal" : "ordinal", - location: X - }, - err: null - }; - } - default: - return this.error(x.INVALID_ARGUMENT_TYPE, H(l, a)); - } - }, e.prototype.tryParseArgumentClose = function(t) { - return this.isEOF() || this.char() !== 125 ? this.error(x.EXPECT_ARGUMENT_CLOSING_BRACE, H(t, this.clonePosition())) : (this.bump(), { val: !0, err: null }); - }, e.prototype.parseSimpleArgStyleIfPossible = function() { - for (var t = 0, n = this.clonePosition(); !this.isEOF(); ) { - var r = this.char(); - switch (r) { - case 39: { - this.bump(); - var i = this.clonePosition(); - if (!this.bumpUntil("'")) - return this.error(x.UNCLOSED_QUOTE_IN_ARGUMENT_STYLE, H(i, this.clonePosition())); - this.bump(); - break; - } - case 123: { - t += 1, this.bump(); - break; - } - case 125: { - if (t > 0) - t -= 1; - else - return { - val: this.message.slice(n.offset, this.offset()), - err: null - }; - break; - } - default: - this.bump(); - break; - } - } - return { - val: this.message.slice(n.offset, this.offset()), - err: null - }; - }, e.prototype.parseNumberSkeletonFromString = function(t, n) { - var r = []; - try { - r = Bi(t); - } catch { - return this.error(x.INVALID_NUMBER_SKELETON, n); - } - return { - val: { - type: ue.number, - tokens: r, - location: n, - parsedOptions: this.shouldParseSkeletons ? 
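- // optionally lower the skeleton tokens into Intl.NumberFormat options up front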
Ni(r) : {} - }, - err: null - }; - }, e.prototype.tryParsePluralOrSelectOptions = function(t, n, r, i) { - for (var s, l = !1, o = [], a = /* @__PURE__ */ new Set(), u = i.value, f = i.location; ; ) { - if (u.length === 0) { - var c = this.clonePosition(); - if (n !== "select" && this.bumpIf("=")) { - var h = this.tryParseDecimalInteger(x.EXPECT_PLURAL_ARGUMENT_SELECTOR, x.INVALID_PLURAL_ARGUMENT_SELECTOR); - if (h.err) - return h; - f = H(c, this.clonePosition()), u = this.message.slice(c.offset, this.offset()); - } else - break; - } - if (a.has(u)) - return this.error(n === "select" ? x.DUPLICATE_SELECT_ARGUMENT_SELECTOR : x.DUPLICATE_PLURAL_ARGUMENT_SELECTOR, f); - u === "other" && (l = !0), this.bumpSpace(); - var _ = this.clonePosition(); - if (!this.bumpIf("{")) - return this.error(n === "select" ? x.EXPECT_SELECT_ARGUMENT_SELECTOR_FRAGMENT : x.EXPECT_PLURAL_ARGUMENT_SELECTOR_FRAGMENT, H(this.clonePosition(), this.clonePosition())); - var d = this.parseMessage(t + 1, n, r); - if (d.err) - return d; - var E = this.tryParseArgumentClose(_); - if (E.err) - return E; - o.push([ - u, - { - value: d.val, - location: H(_, this.clonePosition()) - } - ]), a.add(u), this.bumpSpace(), s = this.parseIdentifierIfPossible(), u = s.value, f = s.location; - } - return o.length === 0 ? this.error(n === "select" ? x.EXPECT_SELECT_ARGUMENT_SELECTOR : x.EXPECT_PLURAL_ARGUMENT_SELECTOR, H(this.clonePosition(), this.clonePosition())) : this.requiresOtherClause && !l ? this.error(x.MISSING_OTHER_CLAUSE, H(this.clonePosition(), this.clonePosition())) : { val: o, err: null }; - }, e.prototype.tryParseDecimalInteger = function(t, n) { - var r = 1, i = this.clonePosition(); - this.bumpIf("+") || this.bumpIf("-") && (r = -1); - for (var s = !1, l = 0; !this.isEOF(); ) { - var o = this.char(); - if (o >= 48 && o <= 57) - s = !0, l = l * 10 + (o - 48), this.bump(); - else - break; - } - var a = H(i, this.clonePosition()); - return s ? (l *= r, Vi(l) ? { val: l, err: null } : this.error(n, a)) : this.error(t, a); - }, e.prototype.offset = function() { - return this.position.offset; - }, e.prototype.isEOF = function() { - return this.offset() === this.message.length; - }, e.prototype.clonePosition = function() { - return { - offset: this.position.offset, - line: this.position.line, - column: this.position.column - }; - }, e.prototype.char = function() { - var t = this.position.offset; - if (t >= this.message.length) - throw Error("out of bound"); - var n = Cn(this.message, t); - if (n === void 0) - throw Error("Offset ".concat(t, " is at invalid UTF-16 code unit boundary")); - return n; - }, e.prototype.error = function(t, n) { - return { - val: null, - err: { - kind: t, - message: this.message, - location: n - } - }; - }, e.prototype.bump = function() { - if (!this.isEOF()) { - var t = this.char(); - t === 10 ? (this.position.line += 1, this.position.column = 1, this.position.offset += 1) : (this.position.column += 1, this.position.offset += t < 65536 ? 1 : 2); - } - }, e.prototype.bumpIf = function(t) { - if (Lt(this.message, t, this.offset())) { - for (var n = 0; n < t.length; n++) - this.bump(); - return !0; - } - return !1; - }, e.prototype.bumpUntil = function(t) { - var n = this.offset(), r = this.message.indexOf(t, n); - return r >= 0 ? 
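- // target found: advance to it and report success; otherwise consume the rest of the message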
(this.bumpTo(r), !0) : (this.bumpTo(this.message.length), !1); - }, e.prototype.bumpTo = function(t) { - if (this.offset() > t) - throw Error("targetOffset ".concat(t, " must be greater than or equal to the current offset ").concat(this.offset())); - for (t = Math.min(t, this.message.length); ; ) { - var n = this.offset(); - if (n === t) - break; - if (n > t) - throw Error("targetOffset ".concat(t, " is at invalid UTF-16 code unit boundary")); - if (this.bump(), this.isEOF()) - break; - } - }, e.prototype.bumpSpace = function() { - for (; !this.isEOF() && On(this.char()); ) - this.bump(); - }, e.prototype.peek = function() { - if (this.isEOF()) - return null; - var t = this.char(), n = this.offset(), r = this.message.charCodeAt(n + (t >= 65536 ? 2 : 1)); - return r ?? null; - }, e; - }() -); -function ot(e) { - return e >= 97 && e <= 122 || e >= 65 && e <= 90; -} -function Zi(e) { - return ot(e) || e === 47; -} -function Wi(e) { - return e === 45 || e === 46 || e >= 48 && e <= 57 || e === 95 || e >= 97 && e <= 122 || e >= 65 && e <= 90 || e == 183 || e >= 192 && e <= 214 || e >= 216 && e <= 246 || e >= 248 && e <= 893 || e >= 895 && e <= 8191 || e >= 8204 && e <= 8205 || e >= 8255 && e <= 8256 || e >= 8304 && e <= 8591 || e >= 11264 && e <= 12271 || e >= 12289 && e <= 55295 || e >= 63744 && e <= 64975 || e >= 65008 && e <= 65533 || e >= 65536 && e <= 983039; -} -function On(e) { - return e >= 9 && e <= 13 || e === 32 || e === 133 || e >= 8206 && e <= 8207 || e === 8232 || e === 8233; -} -function Qi(e) { - return e >= 33 && e <= 35 || e === 36 || e >= 37 && e <= 39 || e === 40 || e === 41 || e === 42 || e === 43 || e === 44 || e === 45 || e >= 46 && e <= 47 || e >= 58 && e <= 59 || e >= 60 && e <= 62 || e >= 63 && e <= 64 || e === 91 || e === 92 || e === 93 || e === 94 || e === 96 || e === 123 || e === 124 || e === 125 || e === 126 || e === 161 || e >= 162 && e <= 165 || e === 166 || e === 167 || e === 169 || e === 171 || e === 172 || e === 174 || e === 176 || e === 177 || e === 182 || e === 187 || e === 191 || e === 215 || e === 247 || e >= 8208 && e <= 8213 || e >= 8214 && e <= 8215 || e === 8216 || e === 8217 || e === 8218 || e >= 8219 && e <= 8220 || e === 8221 || e === 8222 || e === 8223 || e >= 8224 && e <= 8231 || e >= 8240 && e <= 8248 || e === 8249 || e === 8250 || e >= 8251 && e <= 8254 || e >= 8257 && e <= 8259 || e === 8260 || e === 8261 || e === 8262 || e >= 8263 && e <= 8273 || e === 8274 || e === 8275 || e >= 8277 && e <= 8286 || e >= 8592 && e <= 8596 || e >= 8597 && e <= 8601 || e >= 8602 && e <= 8603 || e >= 8604 && e <= 8607 || e === 8608 || e >= 8609 && e <= 8610 || e === 8611 || e >= 8612 && e <= 8613 || e === 8614 || e >= 8615 && e <= 8621 || e === 8622 || e >= 8623 && e <= 8653 || e >= 8654 && e <= 8655 || e >= 8656 && e <= 8657 || e === 8658 || e === 8659 || e === 8660 || e >= 8661 && e <= 8691 || e >= 8692 && e <= 8959 || e >= 8960 && e <= 8967 || e === 8968 || e === 8969 || e === 8970 || e === 8971 || e >= 8972 && e <= 8991 || e >= 8992 && e <= 8993 || e >= 8994 && e <= 9e3 || e === 9001 || e === 9002 || e >= 9003 && e <= 9083 || e === 9084 || e >= 9085 && e <= 9114 || e >= 9115 && e <= 9139 || e >= 9140 && e <= 9179 || e >= 9180 && e <= 9185 || e >= 9186 && e <= 9254 || e >= 9255 && e <= 9279 || e >= 9280 && e <= 9290 || e >= 9291 && e <= 9311 || e >= 9472 && e <= 9654 || e === 9655 || e >= 9656 && e <= 9664 || e === 9665 || e >= 9666 && e <= 9719 || e >= 9720 && e <= 9727 || e >= 9728 && e <= 9838 || e === 9839 || e >= 9840 && e <= 10087 || e === 10088 || e === 
10089 || e === 10090 || e === 10091 || e === 10092 || e === 10093 || e === 10094 || e === 10095 || e === 10096 || e === 10097 || e === 10098 || e === 10099 || e === 10100 || e === 10101 || e >= 10132 && e <= 10175 || e >= 10176 && e <= 10180 || e === 10181 || e === 10182 || e >= 10183 && e <= 10213 || e === 10214 || e === 10215 || e === 10216 || e === 10217 || e === 10218 || e === 10219 || e === 10220 || e === 10221 || e === 10222 || e === 10223 || e >= 10224 && e <= 10239 || e >= 10240 && e <= 10495 || e >= 10496 && e <= 10626 || e === 10627 || e === 10628 || e === 10629 || e === 10630 || e === 10631 || e === 10632 || e === 10633 || e === 10634 || e === 10635 || e === 10636 || e === 10637 || e === 10638 || e === 10639 || e === 10640 || e === 10641 || e === 10642 || e === 10643 || e === 10644 || e === 10645 || e === 10646 || e === 10647 || e === 10648 || e >= 10649 && e <= 10711 || e === 10712 || e === 10713 || e === 10714 || e === 10715 || e >= 10716 && e <= 10747 || e === 10748 || e === 10749 || e >= 10750 && e <= 11007 || e >= 11008 && e <= 11055 || e >= 11056 && e <= 11076 || e >= 11077 && e <= 11078 || e >= 11079 && e <= 11084 || e >= 11085 && e <= 11123 || e >= 11124 && e <= 11125 || e >= 11126 && e <= 11157 || e === 11158 || e >= 11159 && e <= 11263 || e >= 11776 && e <= 11777 || e === 11778 || e === 11779 || e === 11780 || e === 11781 || e >= 11782 && e <= 11784 || e === 11785 || e === 11786 || e === 11787 || e === 11788 || e === 11789 || e >= 11790 && e <= 11798 || e === 11799 || e >= 11800 && e <= 11801 || e === 11802 || e === 11803 || e === 11804 || e === 11805 || e >= 11806 && e <= 11807 || e === 11808 || e === 11809 || e === 11810 || e === 11811 || e === 11812 || e === 11813 || e === 11814 || e === 11815 || e === 11816 || e === 11817 || e >= 11818 && e <= 11822 || e === 11823 || e >= 11824 && e <= 11833 || e >= 11834 && e <= 11835 || e >= 11836 && e <= 11839 || e === 11840 || e === 11841 || e === 11842 || e >= 11843 && e <= 11855 || e >= 11856 && e <= 11857 || e === 11858 || e >= 11859 && e <= 11903 || e >= 12289 && e <= 12291 || e === 12296 || e === 12297 || e === 12298 || e === 12299 || e === 12300 || e === 12301 || e === 12302 || e === 12303 || e === 12304 || e === 12305 || e >= 12306 && e <= 12307 || e === 12308 || e === 12309 || e === 12310 || e === 12311 || e === 12312 || e === 12313 || e === 12314 || e === 12315 || e === 12316 || e === 12317 || e >= 12318 && e <= 12319 || e === 12320 || e === 12336 || e === 64830 || e === 64831 || e >= 65093 && e <= 65094; -} -function at(e) { - e.forEach(function(t) { - if (delete t.location, Tn(t) || Hn(t)) - for (var n in t.options) - delete t.options[n].location, at(t.options[n].value); - else - En(t) && Sn(t.style) || (wn(t) || xn(t)) && rt(t.style) ? delete t.style.location : Bn(t) && at(t.children); - }); -} -function Ji(e, t) { - t === void 0 && (t = {}), t = A({ shouldParseSkeletons: !0, requiresOtherClause: !0 }, t); - var n = new zi(e, t).parse(); - if (n.err) { - var r = SyntaxError(x[n.err.kind]); - throw r.location = n.err.location, r.originalMessage = n.err.message, r; - } - return t != null && t.captureLocation || at(n.val), n.val; -} -function Je(e, t) { - var n = t && t.cache ? t.cache : ns, r = t && t.serializer ? t.serializer : ts, i = t && t.strategy ? t.strategy : Ki; - return i(e, { - cache: n, - serializer: r - }); -} -function Yi(e) { - return e == null || typeof e == "number" || typeof e == "boolean"; -} -function Mn(e, t, n, r) { - var i = Yi(r) ? 
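- // null, numbers and booleans key the memo cache directly; other arguments are serialized first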
r : n(r), s = t.get(i); - return typeof s > "u" && (s = e.call(this, r), t.set(i, s)), s; -} -function Rn(e, t, n) { - var r = Array.prototype.slice.call(arguments, 3), i = n(r), s = t.get(i); - return typeof s > "u" && (s = e.apply(this, r), t.set(i, s)), s; -} -function ct(e, t, n, r, i) { - return n.bind(t, e, r, i); -} -function Ki(e, t) { - var n = e.length === 1 ? Mn : Rn; - return ct(e, this, n, t.cache.create(), t.serializer); -} -function $i(e, t) { - return ct(e, this, Rn, t.cache.create(), t.serializer); -} -function es(e, t) { - return ct(e, this, Mn, t.cache.create(), t.serializer); -} -var ts = function() { - return JSON.stringify(arguments); -}; -function _t() { - this.cache = /* @__PURE__ */ Object.create(null); -} -_t.prototype.get = function(e) { - return this.cache[e]; -}; -_t.prototype.set = function(e, t) { - this.cache[e] = t; -}; -var ns = { - create: function() { - return new _t(); - } -}, Ye = { - variadic: $i, - monadic: es -}, fe; -(function(e) { - e.MISSING_VALUE = "MISSING_VALUE", e.INVALID_VALUE = "INVALID_VALUE", e.MISSING_INTL_API = "MISSING_INTL_API"; -})(fe || (fe = {})); -var Fe = ( - /** @class */ - function(e) { - Ge(t, e); - function t(n, r, i) { - var s = e.call(this, n) || this; - return s.code = r, s.originalMessage = i, s; - } - return t.prototype.toString = function() { - return "[formatjs Error: ".concat(this.code, "] ").concat(this.message); - }, t; - }(Error) -), Rt = ( - /** @class */ - function(e) { - Ge(t, e); - function t(n, r, i, s) { - return e.call(this, 'Invalid values for "'.concat(n, '": "').concat(r, '". Options are "').concat(Object.keys(i).join('", "'), '"'), fe.INVALID_VALUE, s) || this; - } - return t; - }(Fe) -), rs = ( - /** @class */ - function(e) { - Ge(t, e); - function t(n, r, i) { - return e.call(this, 'Value for "'.concat(n, '" must be of type ').concat(r), fe.INVALID_VALUE, i) || this; - } - return t; - }(Fe) -), is = ( - /** @class */ - function(e) { - Ge(t, e); - function t(n, r) { - return e.call(this, 'The intl string context variable "'.concat(n, '" was not provided to the string "').concat(r, '"'), fe.MISSING_VALUE, r) || this; - } - return t; - }(Fe) -), C; -(function(e) { - e[e.literal = 0] = "literal", e[e.object = 1] = "object"; -})(C || (C = {})); -function ss(e) { - return e.length < 2 ? e : e.reduce(function(t, n) { - var r = t[t.length - 1]; - return !r || r.type !== C.literal || n.type !== C.literal ? t.push(n) : r.value += n.value, t; - }, []); -} -function ls(e) { - return typeof e == "function"; -} -function Le(e, t, n, r, i, s, l) { - if (e.length === 1 && Pt(e[0])) - return [ - { - type: C.literal, - value: e[0].value - } - ]; - for (var o = [], a = 0, u = e; a < u.length; a++) { - var f = u[a]; - if (Pt(f)) { - o.push({ - type: C.literal, - value: f.value - }); - continue; - } - if (wi(f)) { - typeof s == "number" && o.push({ - type: C.literal, - value: n.getNumberFormat(t).format(s) - }); - continue; - } - var c = f.value; - if (!(i && c in i)) - throw new is(c, l); - var h = i[c]; - if (Ei(f)) { - (!h || typeof h == "string" || typeof h == "number") && (h = typeof h == "string" || typeof h == "number" ? String(h) : ""), o.push({ - type: typeof h == "string" ? C.literal : C.object, - value: h - }); - continue; - } - if (wn(f)) { - var _ = typeof f.style == "string" ? r.date[f.style] : rt(f.style) ? f.style.parsedOptions : void 0; - o.push({ - type: C.literal, - value: n.getDateTimeFormat(t, _).format(h) - }); - continue; - } - if (xn(f)) { - var _ = typeof f.style == "string" ? 
r.time[f.style] : rt(f.style) ? f.style.parsedOptions : r.time.medium; - o.push({ - type: C.literal, - value: n.getDateTimeFormat(t, _).format(h) - }); - continue; - } - if (En(f)) { - var _ = typeof f.style == "string" ? r.number[f.style] : Sn(f.style) ? f.style.parsedOptions : void 0; - _ && _.scale && (h = h * (_.scale || 1)), o.push({ - type: C.literal, - value: n.getNumberFormat(t, _).format(h) - }); - continue; - } - if (Bn(f)) { - var d = f.children, E = f.value, w = i[E]; - if (!ls(w)) - throw new rs(E, "function", l); - var N = Le(d, t, n, r, i, s), b = w(N.map(function(g) { - return g.value; - })); - Array.isArray(b) || (b = [b]), o.push.apply(o, b.map(function(g) { - return { - type: typeof g == "string" ? C.literal : C.object, - value: g - }; - })); - } - if (Tn(f)) { - var m = f.options[h] || f.options.other; - if (!m) - throw new Rt(f.value, h, Object.keys(f.options), l); - o.push.apply(o, Le(m.value, t, n, r, i)); - continue; - } - if (Hn(f)) { - var m = f.options["=".concat(h)]; - if (!m) { - if (!Intl.PluralRules) - throw new Fe(`Intl.PluralRules is not available in this environment. -Try polyfilling it using "@formatjs/intl-pluralrules" -`, fe.MISSING_INTL_API, l); - var T = n.getPluralRules(t, { type: f.pluralType }).select(h - (f.offset || 0)); - m = f.options[T] || f.options.other; - } - if (!m) - throw new Rt(f.value, h, Object.keys(f.options), l); - o.push.apply(o, Le(m.value, t, n, r, i, h - (f.offset || 0))); - continue; - } - } - return ss(o); -} -function os(e, t) { - return t ? A(A(A({}, e || {}), t || {}), Object.keys(e).reduce(function(n, r) { - return n[r] = A(A({}, e[r]), t[r] || {}), n; - }, {})) : e; -} -function as(e, t) { - return t ? Object.keys(e).reduce(function(n, r) { - return n[r] = os(e[r], t[r]), n; - }, A({}, e)) : e; -} -function Ke(e) { - return { - create: function() { - return { - get: function(t) { - return e[t]; - }, - set: function(t, n) { - e[t] = n; - } - }; - } - }; -} -function us(e) { - return e === void 0 && (e = { - number: {}, - dateTime: {}, - pluralRules: {} - }), { - getNumberFormat: Je(function() { - for (var t, n = [], r = 0; r < arguments.length; r++) - n[r] = arguments[r]; - return new ((t = Intl.NumberFormat).bind.apply(t, We([void 0], n, !1)))(); - }, { - cache: Ke(e.number), - strategy: Ye.variadic - }), - getDateTimeFormat: Je(function() { - for (var t, n = [], r = 0; r < arguments.length; r++) - n[r] = arguments[r]; - return new ((t = Intl.DateTimeFormat).bind.apply(t, We([void 0], n, !1)))(); - }, { - cache: Ke(e.dateTime), - strategy: Ye.variadic - }), - getPluralRules: Je(function() { - for (var t, n = [], r = 0; r < arguments.length; r++) - n[r] = arguments[r]; - return new ((t = Intl.PluralRules).bind.apply(t, We([void 0], n, !1)))(); - }, { - cache: Ke(e.pluralRules), - strategy: Ye.variadic - }) - }; -} -var fs = ( - /** @class */ - function() { - function e(t, n, r, i) { - var s = this; - if (n === void 0 && (n = e.defaultLocale), this.formatterCache = { - number: {}, - dateTime: {}, - pluralRules: {} - }, this.format = function(l) { - var o = s.formatToParts(l); - if (o.length === 1) - return o[0].value; - var a = o.reduce(function(u, f) { - return !u.length || f.type !== C.literal || typeof u[u.length - 1] != "string" ? u.push(f.value) : u[u.length - 1] += f.value, u; - }, []); - return a.length <= 1 ? 
a[0] || "" : a; - }, this.formatToParts = function(l) { - return Le(s.ast, s.locales, s.formatters, s.formats, l, void 0, s.message); - }, this.resolvedOptions = function() { - return { - locale: s.resolvedLocale.toString() - }; - }, this.getAst = function() { - return s.ast; - }, this.locales = n, this.resolvedLocale = e.resolveLocale(n), typeof t == "string") { - if (this.message = t, !e.__parse) - throw new TypeError("IntlMessageFormat.__parse must be set to process `message` of type `string`"); - this.ast = e.__parse(t, { - ignoreTag: i == null ? void 0 : i.ignoreTag, - locale: this.resolvedLocale - }); - } else - this.ast = t; - if (!Array.isArray(this.ast)) - throw new TypeError("A message must be provided as a String or AST."); - this.formats = as(e.formats, r), this.formatters = i && i.formatters || us(this.formatterCache); - } - return Object.defineProperty(e, "defaultLocale", { - get: function() { - return e.memoizedDefaultLocale || (e.memoizedDefaultLocale = new Intl.NumberFormat().resolvedOptions().locale), e.memoizedDefaultLocale; - }, - enumerable: !1, - configurable: !0 - }), e.memoizedDefaultLocale = null, e.resolveLocale = function(t) { - var n = Intl.NumberFormat.supportedLocalesOf(t); - return n.length > 0 ? new Intl.Locale(n[0]) : new Intl.Locale(typeof t == "string" ? t : t[0]); - }, e.__parse = Ji, e.formats = { - number: { - integer: { - maximumFractionDigits: 0 - }, - currency: { - style: "currency" - }, - percent: { - style: "percent" - } - }, - date: { - short: { - month: "numeric", - day: "numeric", - year: "2-digit" - }, - medium: { - month: "short", - day: "numeric", - year: "numeric" - }, - long: { - month: "long", - day: "numeric", - year: "numeric" - }, - full: { - weekday: "long", - month: "long", - day: "numeric", - year: "numeric" - } - }, - time: { - short: { - hour: "numeric", - minute: "numeric" - }, - medium: { - hour: "numeric", - minute: "numeric", - second: "numeric" - }, - long: { - hour: "numeric", - minute: "numeric", - second: "numeric", - timeZoneName: "short" - }, - full: { - hour: "numeric", - minute: "numeric", - second: "numeric", - timeZoneName: "short" - } - } - }, e; - }() -); -function hs(e, t) { - if (t == null) - return; - if (t in e) - return e[t]; - const n = t.split("."); - let r = e; - for (let i = 0; i < n.length; i++) - if (typeof r == "object") { - if (i > 0) { - const s = n.slice(i, n.length).join("."); - if (s in r) { - r = r[s]; - break; - } - } - r = r[n[i]]; - } else - r = void 0; - return r; -} -const Q = {}, cs = (e, t, n) => n && (t in Q || (Q[t] = {}), e in Q[t] || (Q[t][e] = n), n), Un = (e, t) => { - if (t == null) - return; - if (t in Q && e in Q[t]) - return Q[t][e]; - const n = Ve(t); - for (let r = 0; r < n.length; r++) { - const i = n[r], s = ms(i, e); - if (s) - return cs(e, t, s); - } -}; -let mt; -const xe = we({}); -function _s(e) { - return mt[e] || null; -} -function Dn(e) { - return e in mt; -} -function ms(e, t) { - if (!Dn(e)) - return null; - const n = _s(e); - return hs(n, t); -} -function ds(e) { - if (e == null) - return; - const t = Ve(e); - for (let n = 0; n < t.length; n++) { - const r = t[n]; - if (Dn(r)) - return r; - } -} -function bs(e, ...t) { - delete Q[e], xe.update((n) => (n[e] = yi.all([n[e] || {}, ...t]), n)); -} -me( - [xe], - ([e]) => Object.keys(e) -); -xe.subscribe((e) => mt = e); -const Oe = {}; -function gs(e, t) { - Oe[e].delete(t), Oe[e].size === 0 && delete Oe[e]; -} -function kn(e) { - return Oe[e]; -} -function ps(e) { - return Ve(e).map((t) => { - const n = kn(t); - return 
[t, n ? [...n] : []]; - }).filter(([, t]) => t.length > 0); -} -function ut(e) { - return e == null ? !1 : Ve(e).some( - (t) => { - var n; - return (n = kn(t)) == null ? void 0 : n.size; - } - ); -} -function vs(e, t) { - return Promise.all( - t.map((r) => (gs(e, r), r().then((i) => i.default || i))) - ).then((r) => bs(e, ...r)); -} -const ve = {}; -function Gn(e) { - if (!ut(e)) - return e in ve ? ve[e] : Promise.resolve(); - const t = ps(e); - return ve[e] = Promise.all( - t.map( - ([n, r]) => vs(n, r) - ) - ).then(() => { - if (ut(e)) - return Gn(e); - delete ve[e]; - }), ve[e]; -} -const ys = { - number: { - scientific: { notation: "scientific" }, - engineering: { notation: "engineering" }, - compactLong: { notation: "compact", compactDisplay: "long" }, - compactShort: { notation: "compact", compactDisplay: "short" } - }, - date: { - short: { month: "numeric", day: "numeric", year: "2-digit" }, - medium: { month: "short", day: "numeric", year: "numeric" }, - long: { month: "long", day: "numeric", year: "numeric" }, - full: { weekday: "long", month: "long", day: "numeric", year: "numeric" } - }, - time: { - short: { hour: "numeric", minute: "numeric" }, - medium: { hour: "numeric", minute: "numeric", second: "numeric" }, - long: { - hour: "numeric", - minute: "numeric", - second: "numeric", - timeZoneName: "short" - }, - full: { - hour: "numeric", - minute: "numeric", - second: "numeric", - timeZoneName: "short" - } - } -}, Es = { - fallbackLocale: null, - loadingDelay: 200, - formats: ys, - warnOnMissingMessages: !0, - handleMissingMessage: void 0, - ignoreTag: !0 -}, ws = Es; -function he() { - return ws; -} -const $e = we(!1); -var xs = Object.defineProperty, Ts = Object.defineProperties, Hs = Object.getOwnPropertyDescriptors, Ut = Object.getOwnPropertySymbols, Bs = Object.prototype.hasOwnProperty, Ss = Object.prototype.propertyIsEnumerable, Dt = (e, t, n) => t in e ? xs(e, t, { enumerable: !0, configurable: !0, writable: !0, value: n }) : e[t] = n, As = (e, t) => { - for (var n in t || (t = {})) - Bs.call(t, n) && Dt(e, n, t[n]); - if (Ut) - for (var n of Ut(t)) - Ss.call(t, n) && Dt(e, n, t[n]); - return e; -}, Ps = (e, t) => Ts(e, Hs(t)); -let ft; -const Ue = we(null); -function kt(e) { - return e.split("-").map((t, n, r) => r.slice(0, n + 1).join("-")).reverse(); -} -function Ve(e, t = he().fallbackLocale) { - const n = kt(e); - return t ? [.../* @__PURE__ */ new Set([...n, ...kt(t)])] : n; -} -function re() { - return ft ?? void 0; -} -Ue.subscribe((e) => { - ft = e ?? void 0, typeof window < "u" && e != null && document.documentElement.setAttribute("lang", e); -}); -const Ns = (e) => { - if (e && ds(e) && ut(e)) { - const { loadingDelay: t } = he(); - let n; - return typeof window < "u" && re() != null && t ? n = window.setTimeout( - () => $e.set(!0), - t - ) : $e.set(!0), Gn(e).then(() => { - Ue.set(e); - }).finally(() => { - clearTimeout(n), $e.set(!1); - }); - } - return Ue.set(e); -}, Te = Ps(As({}, Ue), { - set: Ns -}), je = (e) => { - const t = /* @__PURE__ */ Object.create(null); - return (r) => { - const i = JSON.stringify(r); - return i in t ? t[i] : t[i] = e(r); - }; -}; -var Is = Object.defineProperty, De = Object.getOwnPropertySymbols, Fn = Object.prototype.hasOwnProperty, Vn = Object.prototype.propertyIsEnumerable, Gt = (e, t, n) => t in e ? 
Is(e, t, { enumerable: !0, configurable: !0, writable: !0, value: n }) : e[t] = n, dt = (e, t) => { - for (var n in t || (t = {})) - Fn.call(t, n) && Gt(e, n, t[n]); - if (De) - for (var n of De(t)) - Vn.call(t, n) && Gt(e, n, t[n]); - return e; -}, de = (e, t) => { - var n = {}; - for (var r in e) - Fn.call(e, r) && t.indexOf(r) < 0 && (n[r] = e[r]); - if (e != null && De) - for (var r of De(e)) - t.indexOf(r) < 0 && Vn.call(e, r) && (n[r] = e[r]); - return n; -}; -const Ee = (e, t) => { - const { formats: n } = he(); - if (e in n && t in n[e]) - return n[e][t]; - throw new Error(`[svelte-i18n] Unknown "${t}" ${e} format.`); -}, Cs = je( - (e) => { - var t = e, { locale: n, format: r } = t, i = de(t, ["locale", "format"]); - if (n == null) - throw new Error('[svelte-i18n] A "locale" must be set to format numbers'); - return r && (i = Ee("number", r)), new Intl.NumberFormat(n, i); - } -), Ls = je( - (e) => { - var t = e, { locale: n, format: r } = t, i = de(t, ["locale", "format"]); - if (n == null) - throw new Error('[svelte-i18n] A "locale" must be set to format dates'); - return r ? i = Ee("date", r) : Object.keys(i).length === 0 && (i = Ee("date", "short")), new Intl.DateTimeFormat(n, i); - } -), Os = je( - (e) => { - var t = e, { locale: n, format: r } = t, i = de(t, ["locale", "format"]); - if (n == null) - throw new Error( - '[svelte-i18n] A "locale" must be set to format time values' - ); - return r ? i = Ee("time", r) : Object.keys(i).length === 0 && (i = Ee("time", "short")), new Intl.DateTimeFormat(n, i); - } -), Ms = (e = {}) => { - var t = e, { - locale: n = re() - } = t, r = de(t, [ - "locale" - ]); - return Cs(dt({ locale: n }, r)); -}, Rs = (e = {}) => { - var t = e, { - locale: n = re() - } = t, r = de(t, [ - "locale" - ]); - return Ls(dt({ locale: n }, r)); -}, Us = (e = {}) => { - var t = e, { - locale: n = re() - } = t, r = de(t, [ - "locale" - ]); - return Os(dt({ locale: n }, r)); -}, Ds = je( - // eslint-disable-next-line @typescript-eslint/no-non-null-assertion - (e, t = re()) => new fs(e, t, he().formats, { - ignoreTag: he().ignoreTag - }) -), ks = (e, t = {}) => { - var n, r, i, s; - let l = t; - typeof e == "object" && (l = e, e = l.id); - const { - values: o, - locale: a = re(), - default: u - } = l; - if (a == null) - throw new Error( - "[svelte-i18n] Cannot format a message without first setting the initial locale." - ); - let f = Un(e, a); - if (!f) - f = (s = (i = (r = (n = he()).handleMissingMessage) == null ? void 0 : r.call(n, { locale: a, id: e, defaultValue: u })) != null ? i : u) != null ? s : e; - else if (typeof f != "string") - return console.warn( - `[svelte-i18n] Message with id "${e}" must be of type "string", found: "${typeof f}". Gettin its value through the "$format" method is deprecated; use the "json" method instead.` - ), f; - if (!o) - return f; - let c = f; - try { - c = Ds(f, a).format(o); - } catch (h) { - h instanceof Error && console.warn( - `[svelte-i18n] Message "${e}" has syntax error:`, - h.message - ); - } - return c; -}, Gs = (e, t) => Us(t).format(e), Fs = (e, t) => Rs(t).format(e), Vs = (e, t) => Ms(t).format(e), js = (e, t = re()) => Un(e, t), Xs = me([Te, xe], () => ks); -me([Te], () => Gs); -me([Te], () => Fs); -me([Te], () => Vs); -me([Te, xe], () => js); -ni(Xs); -function le(e) { - let t = ["", "k", "M", "G", "T", "P", "E", "Z"], n = 0; - for (; e > 1e3 && n < t.length - 1; ) - e /= 1e3, n++; - let r = t[n]; - return (Number.isInteger(e) ? 
e : e.toFixed(1)) + r; -} -const { - SvelteComponent: qs, - append: R, - attr: B, - component_subscribe: Ft, - detach: zs, - element: Zs, - init: Ws, - insert: Qs, - noop: Vt, - safe_not_equal: Js, - set_style: Ie, - svg_element: U, - toggle_class: jt -} = window.__gradio__svelte__internal, { onMount: Ys } = window.__gradio__svelte__internal; -function Ks(e) { - let t, n, r, i, s, l, o, a, u, f, c, h; - return { - c() { - t = Zs("div"), n = U("svg"), r = U("g"), i = U("path"), s = U("path"), l = U("path"), o = U("path"), a = U("g"), u = U("path"), f = U("path"), c = U("path"), h = U("path"), B(i, "d", "M255.926 0.754768L509.702 139.936V221.027L255.926 81.8465V0.754768Z"), B(i, "fill", "#FF7C00"), B(i, "fill-opacity", "0.4"), B(i, "class", "svelte-43sxxs"), B(s, "d", "M509.69 139.936L254.981 279.641V361.255L509.69 221.55V139.936Z"), B(s, "fill", "#FF7C00"), B(s, "class", "svelte-43sxxs"), B(l, "d", "M0.250138 139.937L254.981 279.641V361.255L0.250138 221.55V139.937Z"), B(l, "fill", "#FF7C00"), B(l, "fill-opacity", "0.4"), B(l, "class", "svelte-43sxxs"), B(o, "d", "M255.923 0.232622L0.236328 139.936V221.55L255.923 81.8469V0.232622Z"), B(o, "fill", "#FF7C00"), B(o, "class", "svelte-43sxxs"), Ie(r, "transform", "translate(" + /*$top*/ - e[1][0] + "px, " + /*$top*/ - e[1][1] + "px)"), B(u, "d", "M255.926 141.5L509.702 280.681V361.773L255.926 222.592V141.5Z"), B(u, "fill", "#FF7C00"), B(u, "fill-opacity", "0.4"), B(u, "class", "svelte-43sxxs"), B(f, "d", "M509.69 280.679L254.981 420.384V501.998L509.69 362.293V280.679Z"), B(f, "fill", "#FF7C00"), B(f, "class", "svelte-43sxxs"), B(c, "d", "M0.250138 280.681L254.981 420.386V502L0.250138 362.295V280.681Z"), B(c, "fill", "#FF7C00"), B(c, "fill-opacity", "0.4"), B(c, "class", "svelte-43sxxs"), B(h, "d", "M255.923 140.977L0.236328 280.68V362.294L255.923 222.591V140.977Z"), B(h, "fill", "#FF7C00"), B(h, "class", "svelte-43sxxs"), Ie(a, "transform", "translate(" + /*$bottom*/ - e[2][0] + "px, " + /*$bottom*/ - e[2][1] + "px)"), B(n, "viewBox", "-1200 -1200 3000 3000"), B(n, "fill", "none"), B(n, "xmlns", "http://www.w3.org/2000/svg"), B(n, "class", "svelte-43sxxs"), B(t, "class", "svelte-43sxxs"), jt( - t, - "margin", - /*margin*/ - e[0] - ); - }, - m(_, d) { - Qs(_, t, d), R(t, n), R(n, r), R(r, i), R(r, s), R(r, l), R(r, o), R(n, a), R(a, u), R(a, f), R(a, c), R(a, h); - }, - p(_, [d]) { - d & /*$top*/ - 2 && Ie(r, "transform", "translate(" + /*$top*/ - _[1][0] + "px, " + /*$top*/ - _[1][1] + "px)"), d & /*$bottom*/ - 4 && Ie(a, "transform", "translate(" + /*$bottom*/ - _[2][0] + "px, " + /*$bottom*/ - _[2][1] + "px)"), d & /*margin*/ - 1 && jt( - t, - "margin", - /*margin*/ - _[0] - ); - }, - i: Vt, - o: Vt, - d(_) { - _ && zs(t); - } - }; -} -function $s(e, t, n) { - let r, i, { margin: s = !0 } = t; - const l = St([0, 0]); - Ft(e, l, (h) => n(1, r = h)); - const o = St([0, 0]); - Ft(e, o, (h) => n(2, i = h)); - let a; - async function u() { - await Promise.all([l.set([125, 140]), o.set([-125, -140])]), await Promise.all([l.set([-125, 140]), o.set([125, -140])]), await Promise.all([l.set([-125, 0]), o.set([125, -0])]), await Promise.all([l.set([125, 0]), o.set([-125, 0])]); - } - async function f() { - await u(), a || f(); - } - async function c() { - await Promise.all([l.set([125, 0]), o.set([-125, 0])]), f(); - } - return Ys(() => (c(), () => a = !0)), e.$$set = (h) => { - "margin" in h && n(0, s = h.margin); - }, [s, r, i, l, o]; -} -class el extends qs { - constructor(t) { - super(), Ws(this, t, $s, Ks, Js, { margin: 0 }); - } -} -const { - 
SvelteComponent: tl, - append: te, - attr: k, - binding_callbacks: Xt, - check_outros: jn, - create_component: nl, - create_slot: rl, - destroy_component: il, - destroy_each: Xn, - detach: v, - element: V, - empty: be, - ensure_array_like: ke, - get_all_dirty_from_scope: sl, - get_slot_changes: ll, - group_outros: qn, - init: ol, - insert: y, - mount_component: al, - noop: ht, - safe_not_equal: ul, - set_data: M, - set_style: J, - space: G, - text: I, - toggle_class: O, - transition_in: ce, - transition_out: _e, - update_slot_base: fl -} = window.__gradio__svelte__internal, { tick: hl } = window.__gradio__svelte__internal, { onDestroy: cl } = window.__gradio__svelte__internal, _l = (e) => ({}), qt = (e) => ({}); -function zt(e, t, n) { - const r = e.slice(); - return r[38] = t[n], r[40] = n, r; -} -function Zt(e, t, n) { - const r = e.slice(); - return r[38] = t[n], r; -} -function ml(e) { - let t, n = ( - /*i18n*/ - e[1]("common.error") + "" - ), r, i, s; - const l = ( - /*#slots*/ - e[29].error - ), o = rl( - l, - e, - /*$$scope*/ - e[28], - qt - ); - return { - c() { - t = V("span"), r = I(n), i = G(), o && o.c(), k(t, "class", "error svelte-14miwb5"); - }, - m(a, u) { - y(a, t, u), te(t, r), y(a, i, u), o && o.m(a, u), s = !0; - }, - p(a, u) { - (!s || u[0] & /*i18n*/ - 2) && n !== (n = /*i18n*/ - a[1]("common.error") + "") && M(r, n), o && o.p && (!s || u[0] & /*$$scope*/ - 268435456) && fl( - o, - l, - a, - /*$$scope*/ - a[28], - s ? ll( - l, - /*$$scope*/ - a[28], - u, - _l - ) : sl( - /*$$scope*/ - a[28] - ), - qt - ); - }, - i(a) { - s || (ce(o, a), s = !0); - }, - o(a) { - _e(o, a), s = !1; - }, - d(a) { - a && (v(t), v(i)), o && o.d(a); - } - }; -} -function dl(e) { - let t, n, r, i, s, l, o, a, u, f = ( - /*variant*/ - e[8] === "default" && /*show_eta_bar*/ - e[18] && /*show_progress*/ - e[6] === "full" && Wt(e) - ); - function c(m, T) { - if ( - /*progress*/ - m[7] - ) - return pl; - if ( - /*queue_position*/ - m[2] !== null && /*queue_size*/ - m[3] !== void 0 && /*queue_position*/ - m[2] >= 0 - ) - return gl; - if ( - /*queue_position*/ - m[2] === 0 - ) - return bl; - } - let h = c(e), _ = h && h(e), d = ( - /*timer*/ - e[5] && Yt(e) - ); - const E = [wl, El], w = []; - function N(m, T) { - return ( - /*last_progress_level*/ - m[15] != null ? 0 : ( - /*show_progress*/ - m[6] === "full" ? 1 : -1 - ) - ); - } - ~(s = N(e)) && (l = w[s] = E[s](e)); - let b = !/*timer*/ - e[5] && sn(e); - return { - c() { - f && f.c(), t = G(), n = V("div"), _ && _.c(), r = G(), d && d.c(), i = G(), l && l.c(), o = G(), b && b.c(), a = be(), k(n, "class", "progress-text svelte-14miwb5"), O( - n, - "meta-text-center", - /*variant*/ - e[8] === "center" - ), O( - n, - "meta-text", - /*variant*/ - e[8] === "default" - ); - }, - m(m, T) { - f && f.m(m, T), y(m, t, T), y(m, n, T), _ && _.m(n, null), te(n, r), d && d.m(n, null), y(m, i, T), ~s && w[s].m(m, T), y(m, o, T), b && b.m(m, T), y(m, a, T), u = !0; - }, - p(m, T) { - /*variant*/ - m[8] === "default" && /*show_eta_bar*/ - m[18] && /*show_progress*/ - m[6] === "full" ? f ? f.p(m, T) : (f = Wt(m), f.c(), f.m(t.parentNode, t)) : f && (f.d(1), f = null), h === (h = c(m)) && _ ? _.p(m, T) : (_ && _.d(1), _ = h && h(m), _ && (_.c(), _.m(n, r))), /*timer*/ - m[5] ? d ? 
d.p(m, T) : (d = Yt(m), d.c(), d.m(n, null)) : d && (d.d(1), d = null), (!u || T[0] & /*variant*/ - 256) && O( - n, - "meta-text-center", - /*variant*/ - m[8] === "center" - ), (!u || T[0] & /*variant*/ - 256) && O( - n, - "meta-text", - /*variant*/ - m[8] === "default" - ); - let g = s; - s = N(m), s === g ? ~s && w[s].p(m, T) : (l && (qn(), _e(w[g], 1, 1, () => { - w[g] = null; - }), jn()), ~s ? (l = w[s], l ? l.p(m, T) : (l = w[s] = E[s](m), l.c()), ce(l, 1), l.m(o.parentNode, o)) : l = null), /*timer*/ - m[5] ? b && (b.d(1), b = null) : b ? b.p(m, T) : (b = sn(m), b.c(), b.m(a.parentNode, a)); - }, - i(m) { - u || (ce(l), u = !0); - }, - o(m) { - _e(l), u = !1; - }, - d(m) { - m && (v(t), v(n), v(i), v(o), v(a)), f && f.d(m), _ && _.d(), d && d.d(), ~s && w[s].d(m), b && b.d(m); - } - }; -} -function Wt(e) { - let t, n = `translateX(${/*eta_level*/ - (e[17] || 0) * 100 - 100}%)`; - return { - c() { - t = V("div"), k(t, "class", "eta-bar svelte-14miwb5"), J(t, "transform", n); - }, - m(r, i) { - y(r, t, i); - }, - p(r, i) { - i[0] & /*eta_level*/ - 131072 && n !== (n = `translateX(${/*eta_level*/ - (r[17] || 0) * 100 - 100}%)`) && J(t, "transform", n); - }, - d(r) { - r && v(t); - } - }; -} -function bl(e) { - let t; - return { - c() { - t = I("processing |"); - }, - m(n, r) { - y(n, t, r); - }, - p: ht, - d(n) { - n && v(t); - } - }; -} -function gl(e) { - let t, n = ( - /*queue_position*/ - e[2] + 1 + "" - ), r, i, s, l; - return { - c() { - t = I("queue: "), r = I(n), i = I("/"), s = I( - /*queue_size*/ - e[3] - ), l = I(" |"); - }, - m(o, a) { - y(o, t, a), y(o, r, a), y(o, i, a), y(o, s, a), y(o, l, a); - }, - p(o, a) { - a[0] & /*queue_position*/ - 4 && n !== (n = /*queue_position*/ - o[2] + 1 + "") && M(r, n), a[0] & /*queue_size*/ - 8 && M( - s, - /*queue_size*/ - o[3] - ); - }, - d(o) { - o && (v(t), v(r), v(i), v(s), v(l)); - } - }; -} -function pl(e) { - let t, n = ke( - /*progress*/ - e[7] - ), r = []; - for (let i = 0; i < n.length; i += 1) - r[i] = Jt(Zt(e, n, i)); - return { - c() { - for (let i = 0; i < r.length; i += 1) - r[i].c(); - t = be(); - }, - m(i, s) { - for (let l = 0; l < r.length; l += 1) - r[l] && r[l].m(i, s); - y(i, t, s); - }, - p(i, s) { - if (s[0] & /*progress*/ - 128) { - n = ke( - /*progress*/ - i[7] - ); - let l; - for (l = 0; l < n.length; l += 1) { - const o = Zt(i, n, l); - r[l] ? r[l].p(o, s) : (r[l] = Jt(o), r[l].c(), r[l].m(t.parentNode, t)); - } - for (; l < r.length; l += 1) - r[l].d(1); - r.length = n.length; - } - }, - d(i) { - i && v(t), Xn(r, i); - } - }; -} -function Qt(e) { - let t, n = ( - /*p*/ - e[38].unit + "" - ), r, i, s = " ", l; - function o(f, c) { - return ( - /*p*/ - f[38].length != null ? yl : vl - ); - } - let a = o(e), u = a(e); - return { - c() { - u.c(), t = G(), r = I(n), i = I(" | "), l = I(s); - }, - m(f, c) { - u.m(f, c), y(f, t, c), y(f, r, c), y(f, i, c), y(f, l, c); - }, - p(f, c) { - a === (a = o(f)) && u ? 
u.p(f, c) : (u.d(1), u = a(f), u && (u.c(), u.m(t.parentNode, t))), c[0] & /*progress*/ - 128 && n !== (n = /*p*/ - f[38].unit + "") && M(r, n); - }, - d(f) { - f && (v(t), v(r), v(i), v(l)), u.d(f); - } - }; -} -function vl(e) { - let t = le( - /*p*/ - e[38].index || 0 - ) + "", n; - return { - c() { - n = I(t); - }, - m(r, i) { - y(r, n, i); - }, - p(r, i) { - i[0] & /*progress*/ - 128 && t !== (t = le( - /*p*/ - r[38].index || 0 - ) + "") && M(n, t); - }, - d(r) { - r && v(n); - } - }; -} -function yl(e) { - let t = le( - /*p*/ - e[38].index || 0 - ) + "", n, r, i = le( - /*p*/ - e[38].length - ) + "", s; - return { - c() { - n = I(t), r = I("/"), s = I(i); - }, - m(l, o) { - y(l, n, o), y(l, r, o), y(l, s, o); - }, - p(l, o) { - o[0] & /*progress*/ - 128 && t !== (t = le( - /*p*/ - l[38].index || 0 - ) + "") && M(n, t), o[0] & /*progress*/ - 128 && i !== (i = le( - /*p*/ - l[38].length - ) + "") && M(s, i); - }, - d(l) { - l && (v(n), v(r), v(s)); - } - }; -} -function Jt(e) { - let t, n = ( - /*p*/ - e[38].index != null && Qt(e) - ); - return { - c() { - n && n.c(), t = be(); - }, - m(r, i) { - n && n.m(r, i), y(r, t, i); - }, - p(r, i) { - /*p*/ - r[38].index != null ? n ? n.p(r, i) : (n = Qt(r), n.c(), n.m(t.parentNode, t)) : n && (n.d(1), n = null); - }, - d(r) { - r && v(t), n && n.d(r); - } - }; -} -function Yt(e) { - let t, n = ( - /*eta*/ - e[0] ? `/${/*formatted_eta*/ - e[19]}` : "" - ), r, i; - return { - c() { - t = I( - /*formatted_timer*/ - e[20] - ), r = I(n), i = I("s"); - }, - m(s, l) { - y(s, t, l), y(s, r, l), y(s, i, l); - }, - p(s, l) { - l[0] & /*formatted_timer*/ - 1048576 && M( - t, - /*formatted_timer*/ - s[20] - ), l[0] & /*eta, formatted_eta*/ - 524289 && n !== (n = /*eta*/ - s[0] ? `/${/*formatted_eta*/ - s[19]}` : "") && M(r, n); - }, - d(s) { - s && (v(t), v(r), v(i)); - } - }; -} -function El(e) { - let t, n; - return t = new el({ - props: { margin: ( - /*variant*/ - e[8] === "default" - ) } - }), { - c() { - nl(t.$$.fragment); - }, - m(r, i) { - al(t, r, i), n = !0; - }, - p(r, i) { - const s = {}; - i[0] & /*variant*/ - 256 && (s.margin = /*variant*/ - r[8] === "default"), t.$set(s); - }, - i(r) { - n || (ce(t.$$.fragment, r), n = !0); - }, - o(r) { - _e(t.$$.fragment, r), n = !1; - }, - d(r) { - il(t, r); - } - }; -} -function wl(e) { - let t, n, r, i, s, l = `${/*last_progress_level*/ - e[15] * 100}%`, o = ( - /*progress*/ - e[7] != null && Kt(e) - ); - return { - c() { - t = V("div"), n = V("div"), o && o.c(), r = G(), i = V("div"), s = V("div"), k(n, "class", "progress-level-inner svelte-14miwb5"), k(s, "class", "progress-bar svelte-14miwb5"), J(s, "width", l), k(i, "class", "progress-bar-wrap svelte-14miwb5"), k(t, "class", "progress-level svelte-14miwb5"); - }, - m(a, u) { - y(a, t, u), te(t, n), o && o.m(n, null), te(t, r), te(t, i), te(i, s), e[30](s); - }, - p(a, u) { - /*progress*/ - a[7] != null ? o ? 
o.p(a, u) : (o = Kt(a), o.c(), o.m(n, null)) : o && (o.d(1), o = null), u[0] & /*last_progress_level*/ - 32768 && l !== (l = `${/*last_progress_level*/ - a[15] * 100}%`) && J(s, "width", l); - }, - i: ht, - o: ht, - d(a) { - a && v(t), o && o.d(), e[30](null); - } - }; -} -function Kt(e) { - let t, n = ke( - /*progress*/ - e[7] - ), r = []; - for (let i = 0; i < n.length; i += 1) - r[i] = rn(zt(e, n, i)); - return { - c() { - for (let i = 0; i < r.length; i += 1) - r[i].c(); - t = be(); - }, - m(i, s) { - for (let l = 0; l < r.length; l += 1) - r[l] && r[l].m(i, s); - y(i, t, s); - }, - p(i, s) { - if (s[0] & /*progress_level, progress*/ - 16512) { - n = ke( - /*progress*/ - i[7] - ); - let l; - for (l = 0; l < n.length; l += 1) { - const o = zt(i, n, l); - r[l] ? r[l].p(o, s) : (r[l] = rn(o), r[l].c(), r[l].m(t.parentNode, t)); - } - for (; l < r.length; l += 1) - r[l].d(1); - r.length = n.length; - } - }, - d(i) { - i && v(t), Xn(r, i); - } - }; -} -function $t(e) { - let t, n, r, i, s = ( - /*i*/ - e[40] !== 0 && xl() - ), l = ( - /*p*/ - e[38].desc != null && en(e) - ), o = ( - /*p*/ - e[38].desc != null && /*progress_level*/ - e[14] && /*progress_level*/ - e[14][ - /*i*/ - e[40] - ] != null && tn() - ), a = ( - /*progress_level*/ - e[14] != null && nn(e) - ); - return { - c() { - s && s.c(), t = G(), l && l.c(), n = G(), o && o.c(), r = G(), a && a.c(), i = be(); - }, - m(u, f) { - s && s.m(u, f), y(u, t, f), l && l.m(u, f), y(u, n, f), o && o.m(u, f), y(u, r, f), a && a.m(u, f), y(u, i, f); - }, - p(u, f) { - /*p*/ - u[38].desc != null ? l ? l.p(u, f) : (l = en(u), l.c(), l.m(n.parentNode, n)) : l && (l.d(1), l = null), /*p*/ - u[38].desc != null && /*progress_level*/ - u[14] && /*progress_level*/ - u[14][ - /*i*/ - u[40] - ] != null ? o || (o = tn(), o.c(), o.m(r.parentNode, r)) : o && (o.d(1), o = null), /*progress_level*/ - u[14] != null ? a ? a.p(u, f) : (a = nn(u), a.c(), a.m(i.parentNode, i)) : a && (a.d(1), a = null); - }, - d(u) { - u && (v(t), v(n), v(r), v(i)), s && s.d(u), l && l.d(u), o && o.d(u), a && a.d(u); - } - }; -} -function xl(e) { - let t; - return { - c() { - t = I(" /"); - }, - m(n, r) { - y(n, t, r); - }, - d(n) { - n && v(t); - } - }; -} -function en(e) { - let t = ( - /*p*/ - e[38].desc + "" - ), n; - return { - c() { - n = I(t); - }, - m(r, i) { - y(r, n, i); - }, - p(r, i) { - i[0] & /*progress*/ - 128 && t !== (t = /*p*/ - r[38].desc + "") && M(n, t); - }, - d(r) { - r && v(n); - } - }; -} -function tn(e) { - let t; - return { - c() { - t = I("-"); - }, - m(n, r) { - y(n, t, r); - }, - d(n) { - n && v(t); - } - }; -} -function nn(e) { - let t = (100 * /*progress_level*/ - (e[14][ - /*i*/ - e[40] - ] || 0)).toFixed(1) + "", n, r; - return { - c() { - n = I(t), r = I("%"); - }, - m(i, s) { - y(i, n, s), y(i, r, s); - }, - p(i, s) { - s[0] & /*progress_level*/ - 16384 && t !== (t = (100 * /*progress_level*/ - (i[14][ - /*i*/ - i[40] - ] || 0)).toFixed(1) + "") && M(n, t); - }, - d(i) { - i && (v(n), v(r)); - } - }; -} -function rn(e) { - let t, n = ( - /*p*/ - (e[38].desc != null || /*progress_level*/ - e[14] && /*progress_level*/ - e[14][ - /*i*/ - e[40] - ] != null) && $t(e) - ); - return { - c() { - n && n.c(), t = be(); - }, - m(r, i) { - n && n.m(r, i), y(r, t, i); - }, - p(r, i) { - /*p*/ - r[38].desc != null || /*progress_level*/ - r[14] && /*progress_level*/ - r[14][ - /*i*/ - r[40] - ] != null ? n ? 
n.p(r, i) : (n = $t(r), n.c(), n.m(t.parentNode, t)) : n && (n.d(1), n = null); - }, - d(r) { - r && v(t), n && n.d(r); - } - }; -} -function sn(e) { - let t, n; - return { - c() { - t = V("p"), n = I( - /*loading_text*/ - e[9] - ), k(t, "class", "loading svelte-14miwb5"); - }, - m(r, i) { - y(r, t, i), te(t, n); - }, - p(r, i) { - i[0] & /*loading_text*/ - 512 && M( - n, - /*loading_text*/ - r[9] - ); - }, - d(r) { - r && v(t); - } - }; -} -function Tl(e) { - let t, n, r, i, s; - const l = [dl, ml], o = []; - function a(u, f) { - return ( - /*status*/ - u[4] === "pending" ? 0 : ( - /*status*/ - u[4] === "error" ? 1 : -1 - ) - ); - } - return ~(n = a(e)) && (r = o[n] = l[n](e)), { - c() { - t = V("div"), r && r.c(), k(t, "class", i = "wrap " + /*variant*/ - e[8] + " " + /*show_progress*/ - e[6] + " svelte-14miwb5"), O(t, "hide", !/*status*/ - e[4] || /*status*/ - e[4] === "complete" || /*show_progress*/ - e[6] === "hidden"), O( - t, - "translucent", - /*variant*/ - e[8] === "center" && /*status*/ - (e[4] === "pending" || /*status*/ - e[4] === "error") || /*translucent*/ - e[11] || /*show_progress*/ - e[6] === "minimal" - ), O( - t, - "generating", - /*status*/ - e[4] === "generating" - ), O( - t, - "border", - /*border*/ - e[12] - ), J( - t, - "position", - /*absolute*/ - e[10] ? "absolute" : "static" - ), J( - t, - "padding", - /*absolute*/ - e[10] ? "0" : "var(--size-8) 0" - ); - }, - m(u, f) { - y(u, t, f), ~n && o[n].m(t, null), e[31](t), s = !0; - }, - p(u, f) { - let c = n; - n = a(u), n === c ? ~n && o[n].p(u, f) : (r && (qn(), _e(o[c], 1, 1, () => { - o[c] = null; - }), jn()), ~n ? (r = o[n], r ? r.p(u, f) : (r = o[n] = l[n](u), r.c()), ce(r, 1), r.m(t, null)) : r = null), (!s || f[0] & /*variant, show_progress*/ - 320 && i !== (i = "wrap " + /*variant*/ - u[8] + " " + /*show_progress*/ - u[6] + " svelte-14miwb5")) && k(t, "class", i), (!s || f[0] & /*variant, show_progress, status, show_progress*/ - 336) && O(t, "hide", !/*status*/ - u[4] || /*status*/ - u[4] === "complete" || /*show_progress*/ - u[6] === "hidden"), (!s || f[0] & /*variant, show_progress, variant, status, translucent, show_progress*/ - 2384) && O( - t, - "translucent", - /*variant*/ - u[8] === "center" && /*status*/ - (u[4] === "pending" || /*status*/ - u[4] === "error") || /*translucent*/ - u[11] || /*show_progress*/ - u[6] === "minimal" - ), (!s || f[0] & /*variant, show_progress, status*/ - 336) && O( - t, - "generating", - /*status*/ - u[4] === "generating" - ), (!s || f[0] & /*variant, show_progress, border*/ - 4416) && O( - t, - "border", - /*border*/ - u[12] - ), f[0] & /*absolute*/ - 1024 && J( - t, - "position", - /*absolute*/ - u[10] ? "absolute" : "static" - ), f[0] & /*absolute*/ - 1024 && J( - t, - "padding", - /*absolute*/ - u[10] ? 
"0" : "var(--size-8) 0" - ); - }, - i(u) { - s || (ce(r), s = !0); - }, - o(u) { - _e(r), s = !1; - }, - d(u) { - u && v(t), ~n && o[n].d(), e[31](null); - } - }; -} -let Ce = [], et = !1; -async function Hl(e, t = !0) { - if (!(window.__gradio_mode__ === "website" || window.__gradio_mode__ !== "app" && t !== !0)) { - if (Ce.push(e), !et) - et = !0; - else - return; - await hl(), requestAnimationFrame(() => { - let n = [0, 0]; - for (let r = 0; r < Ce.length; r++) { - const s = Ce[r].getBoundingClientRect(); - (r === 0 || s.top + window.scrollY <= n[0]) && (n[0] = s.top + window.scrollY, n[1] = r); - } - window.scrollTo({ top: n[0] - 20, behavior: "smooth" }), et = !1, Ce = []; - }); - } -} -function Bl(e, t, n) { - let r, { $$slots: i = {}, $$scope: s } = t, { i18n: l } = t, { eta: o = null } = t, { queue: a = !1 } = t, { queue_position: u } = t, { queue_size: f } = t, { status: c } = t, { scroll_to_output: h = !1 } = t, { timer: _ = !0 } = t, { show_progress: d = "full" } = t, { message: E = null } = t, { progress: w = null } = t, { variant: N = "default" } = t, { loading_text: b = "Loading..." } = t, { absolute: m = !0 } = t, { translucent: T = !1 } = t, { border: g = !1 } = t, { autoscroll: j } = t, X, ge = !1, Ae = 0, Y = 0, Xe = null, bt = 0, K = null, pe, q = null, gt = !0; - const Zn = () => { - n(25, Ae = performance.now()), n(26, Y = 0), ge = !0, pt(); - }; - function pt() { - requestAnimationFrame(() => { - n(26, Y = (performance.now() - Ae) / 1e3), ge && pt(); - }); - } - function vt() { - n(26, Y = 0), ge && (ge = !1); - } - cl(() => { - ge && vt(); - }); - let yt = null; - function Wn(p) { - Xt[p ? "unshift" : "push"](() => { - q = p, n(16, q), n(7, w), n(14, K), n(15, pe); - }); - } - function Qn(p) { - Xt[p ? "unshift" : "push"](() => { - X = p, n(13, X); - }); - } - return e.$$set = (p) => { - "i18n" in p && n(1, l = p.i18n), "eta" in p && n(0, o = p.eta), "queue" in p && n(21, a = p.queue), "queue_position" in p && n(2, u = p.queue_position), "queue_size" in p && n(3, f = p.queue_size), "status" in p && n(4, c = p.status), "scroll_to_output" in p && n(22, h = p.scroll_to_output), "timer" in p && n(5, _ = p.timer), "show_progress" in p && n(6, d = p.show_progress), "message" in p && n(23, E = p.message), "progress" in p && n(7, w = p.progress), "variant" in p && n(8, N = p.variant), "loading_text" in p && n(9, b = p.loading_text), "absolute" in p && n(10, m = p.absolute), "translucent" in p && n(11, T = p.translucent), "border" in p && n(12, g = p.border), "autoscroll" in p && n(24, j = p.autoscroll), "$$scope" in p && n(28, s = p.$$scope); - }, e.$$.update = () => { - e.$$.dirty[0] & /*eta, old_eta, queue, timer_start*/ - 169869313 && (o === null ? n(0, o = Xe) : a && n(0, o = (performance.now() - Ae) / 1e3 + o), o != null && (n(19, yt = o.toFixed(1)), n(27, Xe = o))), e.$$.dirty[0] & /*eta, timer_diff*/ - 67108865 && n(17, bt = o === null || o <= 0 || !Y ? null : Math.min(Y / o, 1)), e.$$.dirty[0] & /*progress*/ - 128 && w != null && n(18, gt = !1), e.$$.dirty[0] & /*progress, progress_level, progress_bar, last_progress_level*/ - 114816 && (w != null ? n(14, K = w.map((p) => { - if (p.index != null && p.length != null) - return p.index / p.length; - if (p.progress != null) - return p.progress; - })) : n(14, K = null), K ? (n(15, pe = K[K.length - 1]), q && (pe === 0 ? n(16, q.style.transition = "0", q) : n(16, q.style.transition = "150ms", q))) : n(15, pe = void 0)), e.$$.dirty[0] & /*status*/ - 16 && (c === "pending" ? 
Zn() : vt()), e.$$.dirty[0] & /*el, scroll_to_output, status, autoscroll*/ - 20979728 && X && h && (c === "pending" || c === "complete") && Hl(X, j), e.$$.dirty[0] & /*status, message*/ - 8388624, e.$$.dirty[0] & /*timer_diff*/ - 67108864 && n(20, r = Y.toFixed(1)); - }, [ - o, - l, - u, - f, - c, - _, - d, - w, - N, - b, - m, - T, - g, - X, - K, - pe, - q, - bt, - gt, - yt, - r, - a, - h, - E, - j, - Ae, - Y, - Xe, - s, - i, - Wn, - Qn - ]; -} -class Sl extends tl { - constructor(t) { - super(), ol( - this, - t, - Bl, - Tl, - ul, - { - i18n: 1, - eta: 0, - queue: 21, - queue_position: 2, - queue_size: 3, - status: 4, - scroll_to_output: 22, - timer: 5, - show_progress: 6, - message: 23, - progress: 7, - variant: 8, - loading_text: 9, - absolute: 10, - translucent: 11, - border: 12, - autoscroll: 24 - }, - null, - [-1, -1] - ); - } -} -function zn(e, t, n) { - if (e == null) - return null; - if (typeof e == "string") - return { - name: "file_data", - data: e - }; - if (Array.isArray(e)) { - const r = []; - for (const i of e) - i === null ? r.push(null) : r.push(zn(i, t, n)); - return r; - } else - e.is_file ? e.data = Pl(e.name, t, n) : e.is_stream && (n == null ? e.data = t + "/stream/" + e.name : e.data = "/proxy=" + n + "stream/" + e.name); - return e; -} -function Al(e) { - try { - const t = new URL(e); - return t.protocol === "http:" || t.protocol === "https:"; - } catch { - return !1; - } -} -function Pl(e, t, n) { - return e == null ? n ? `/proxy=${n}file=` : `${t}/file=` : Al(e) ? e : n ? `/proxy=${n}file=${e}` : `${t}/file=${e}`; -} -new Intl.Collator(0, { numeric: 1 }).compare; -const { - SvelteComponent: Nl, - assign: Il, - attr: ee, - check_outros: ln, - create_component: He, - destroy_component: Be, - detach: Me, - element: Cl, - empty: Ll, - get_spread_object: Ol, - get_spread_update: Ml, - group_outros: on, - init: Rl, - insert: Re, - mount_component: Se, - noop: an, - safe_not_equal: Ul, - space: un, - src_url_equal: fn, - transition_in: D, - transition_out: F -} = window.__gradio__svelte__internal; -function hn(e) { - let t, n; - const r = [ - { - autoscroll: ( - /*gradio*/ - e[10].autoscroll - ) - }, - { i18n: ( - /*gradio*/ - e[10].i18n - ) }, - /*loading_status*/ - e[9] - ]; - let i = {}; - for (let s = 0; s < r.length; s += 1) - i = Il(i, r[s]); - return t = new Sl({ props: i }), { - c() { - He(t.$$.fragment); - }, - m(s, l) { - Se(t, s, l), n = !0; - }, - p(s, l) { - const o = l & /*gradio, loading_status*/ - 1536 ? 
Ml(r, [ - l & /*gradio*/ - 1024 && { - autoscroll: ( - /*gradio*/ - s[10].autoscroll - ) - }, - l & /*gradio*/ - 1024 && { i18n: ( - /*gradio*/ - s[10].i18n - ) }, - l & /*loading_status*/ - 512 && Ol( - /*loading_status*/ - s[9] - ) - ]) : {}; - t.$set(o); - }, - i(s) { - n || (D(t.$$.fragment, s), n = !0); - }, - o(s) { - F(t.$$.fragment, s), n = !1; - }, - d(s) { - Be(t, s); - } - }; -} -function Dl(e) { - let t, n; - return t = new Xr({ - props: { - unpadded_box: !0, - size: "large", - $$slots: { default: [Gl] }, - $$scope: { ctx: e } - } - }), { - c() { - He(t.$$.fragment); - }, - m(r, i) { - Se(t, r, i), n = !0; - }, - p(r, i) { - const s = {}; - i & /*$$scope*/ - 32768 && (s.$$scope = { dirty: i, ctx: r }), t.$set(s); - }, - i(r) { - n || (D(t.$$.fragment, r), n = !0); - }, - o(r) { - F(t.$$.fragment, r), n = !1; - }, - d(r) { - Be(t, r); - } - }; -} -function kl(e) { - let t, n, r; - return { - c() { - t = Cl("iframe"), fn(t.src, n = /*new_value*/ - e[11].data) || ee(t, "src", n), ee( - t, - "title", - /*label*/ - e[0] - ), ee(t, "height", r = /*height*/ - e[1] + "px"), ee(t, "class", "svelte-1orump4"); - }, - m(i, s) { - Re(i, t, s); - }, - p(i, s) { - s & /*new_value*/ - 2048 && !fn(t.src, n = /*new_value*/ - i[11].data) && ee(t, "src", n), s & /*label*/ - 1 && ee( - t, - "title", - /*label*/ - i[0] - ), s & /*height*/ - 2 && r !== (r = /*height*/ - i[1] + "px") && ee(t, "height", r); - }, - i: an, - o: an, - d(i) { - i && Me(t); - } - }; -} -function Gl(e) { - let t, n; - return t = new dn({}), { - c() { - He(t.$$.fragment); - }, - m(r, i) { - Se(t, r, i), n = !0; - }, - i(r) { - n || (D(t.$$.fragment, r), n = !0); - }, - o(r) { - F(t.$$.fragment, r), n = !1; - }, - d(r) { - Be(t, r); - } - }; -} -function Fl(e) { - let t, n, r, i, s, l, o, a = ( - /*loading_status*/ - e[9] && hn(e) - ); - n = new Sr({ - props: { - show_label: !0, - Icon: dn, - label: ( - /*label*/ - e[0] || "Folium Map" - ) - } - }); - const u = [kl, Dl], f = []; - function c(h, _) { - return ( - /*value*/ - h[5] ? 0 : 1 - ); - } - return i = c(e), s = f[i] = u[i](e), { - c() { - a && a.c(), t = un(), He(n.$$.fragment), r = un(), s.c(), l = Ll(); - }, - m(h, _) { - a && a.m(h, _), Re(h, t, _), Se(n, h, _), Re(h, r, _), f[i].m(h, _), Re(h, l, _), o = !0; - }, - p(h, _) { - /*loading_status*/ - h[9] ? a ? (a.p(h, _), _ & /*loading_status*/ - 512 && D(a, 1)) : (a = hn(h), a.c(), D(a, 1), a.m(t.parentNode, t)) : a && (on(), F(a, 1, 1, () => { - a = null; - }), ln()); - const d = {}; - _ & /*label*/ - 1 && (d.label = /*label*/ - h[0] || "Folium Map"), n.$set(d); - let E = i; - i = c(h), i === E ? f[i].p(h, _) : (on(), F(f[E], 1, 1, () => { - f[E] = null; - }), ln(), s = f[i], s ? 
s.p(h, _) : (s = f[i] = u[i](h), s.c()), D(s, 1), s.m(l.parentNode, l)); - }, - i(h) { - o || (D(a), D(n.$$.fragment, h), D(s), o = !0); - }, - o(h) { - F(a), F(n.$$.fragment, h), F(s), o = !1; - }, - d(h) { - h && (Me(t), Me(r), Me(l)), a && a.d(h), Be(n, h), f[i].d(h); - } - }; -} -function Vl(e) { - let t, n; - return t = new hr({ - props: { - visible: ( - /*visible*/ - e[4] - ), - elem_id: ( - /*elem_id*/ - e[2] - ), - elem_classes: ( - /*elem_classes*/ - e[3] - ), - container: ( - /*container*/ - e[6] - ), - scale: ( - /*scale*/ - e[7] - ), - min_width: ( - /*min_width*/ - e[8] - ), - $$slots: { default: [Fl] }, - $$scope: { ctx: e } - } - }), { - c() { - He(t.$$.fragment); - }, - m(r, i) { - Se(t, r, i), n = !0; - }, - p(r, [i]) { - const s = {}; - i & /*visible*/ - 16 && (s.visible = /*visible*/ - r[4]), i & /*elem_id*/ - 4 && (s.elem_id = /*elem_id*/ - r[2]), i & /*elem_classes*/ - 8 && (s.elem_classes = /*elem_classes*/ - r[3]), i & /*container*/ - 64 && (s.container = /*container*/ - r[6]), i & /*scale*/ - 128 && (s.scale = /*scale*/ - r[7]), i & /*min_width*/ - 256 && (s.min_width = /*min_width*/ - r[8]), i & /*$$scope, new_value, label, height, value, gradio, loading_status*/ - 36387 && (s.$$scope = { dirty: i, ctx: r }), t.$set(s); - }, - i(r) { - n || (D(t.$$.fragment, r), n = !0); - }, - o(r) { - F(t.$$.fragment, r), n = !1; - }, - d(r) { - Be(t, r); - } - }; -} -function cn(e, t) { - return e ?? t(); -} -function jl(e, t, n) { - let { elem_id: r = "" } = t, { elem_classes: i = [] } = t, { visible: s = !0 } = t, { value: l } = t, { label: o } = t, { container: a = !0 } = t, { scale: u = null } = t, { min_width: f = void 0 } = t, { loading_status: c } = t, { root: h } = t, { root_url: _ } = t, { height: d = null } = t, { gradio: E } = t, w; - async function N() { - E.dispatch("change"); - } - return e.$$set = (b) => { - "elem_id" in b && n(2, r = b.elem_id), "elem_classes" in b && n(3, i = b.elem_classes), "visible" in b && n(4, s = b.visible), "value" in b && n(5, l = b.value), "label" in b && n(0, o = b.label), "container" in b && n(6, a = b.container), "scale" in b && n(7, u = b.scale), "min_width" in b && n(8, f = b.min_width), "loading_status" in b && n(9, c = b.loading_status), "root" in b && n(12, h = b.root), "root_url" in b && n(13, _ = b.root_url), "height" in b && n(1, d = b.height), "gradio" in b && n(10, E = b.gradio); - }, e.$$.update = () => { - e.$$.dirty & /*label*/ - 1 && n(0, o = cn(o, () => "Folium Map")), e.$$.dirty & /*height*/ - 2 && n(1, d = cn(d, () => 500)), e.$$.dirty & /*value, root, root_url*/ - 12320 && n(11, w = { ...zn(l, h, _) }), e.$$.dirty & /*new_value*/ - 2048 && N(); - }, [ - o, - d, - r, - i, - s, - l, - a, - u, - f, - c, - E, - w, - h, - _ - ]; -} -class Xl extends Nl { - constructor(t) { - super(), Rl(this, t, jl, Vl, Ul, { - elem_id: 2, - elem_classes: 3, - visible: 4, - value: 5, - label: 0, - container: 6, - scale: 7, - min_width: 8, - loading_status: 9, - root: 12, - root_url: 13, - height: 1, - gradio: 10 - }); - } -} -export { - Xl as default -}; diff --git a/spaces/fredinrh2026/Video-Games/README.md b/spaces/fredinrh2026/Video-Games/README.md deleted file mode 100644 index 8a7dc28d3b2677ce328019913755d0cab0d4960e..0000000000000000000000000000000000000000 --- a/spaces/fredinrh2026/Video-Games/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Video Games -emoji: 🏃 -colorFrom: gray -colorTo: pink -sdk: gradio -sdk_version: 3.19.1 -app_file: app.py -pinned: false -license: afl-3.0 ---- - -Check out the configuration reference at 
https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Download Heartless Full Movie In Hindi Mp4 Watch the Shocking Twist in this Romantic Thriller.md b/spaces/gotiQspiryo/whisper-ui/examples/Download Heartless Full Movie In Hindi Mp4 Watch the Shocking Twist in this Romantic Thriller.md deleted file mode 100644 index 7978c0468bfcc2f2f69e72434f13731c10174dae..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Download Heartless Full Movie In Hindi Mp4 Watch the Shocking Twist in this Romantic Thriller.md +++ /dev/null @@ -1,9 +0,0 @@ - -

      Mp3 Juice is the most popular free mp3 search engine and music downloader. MP3 Juice is a great tool to convert and download YouTube videos and music. The Mp3 Juice website is the best way to quickly and easily download mp3 music. Its simplicity makes Mp3juice easy to use, so anyone can search for and download high-quality audio files.

      Heartless Full Movie In Hindi Mp4 Free Download

      Download ::: https://urlgoal.com/2uyLrq

      This website offers unlimited downloading of YouTube music and free Mp3 juice song downloads in HD quality. You can also click "PLAY" to preview the audio file before you download it. Mp3juices takes only 2-5 seconds to convert and download audio files.

      You can access this free mp3 download website online via an internet connection or WiFi. Bookmark this website to make it easy to access on a regular basis. Once you have downloaded the audio file, open it in any audio player to listen offline in high quality.

      MP3 juice music is easy to navigate through and provides a simple interface for downloading audio. You might be wondering why people prefer mp3juices to get mp3 juice for free. This tool provides high-speed audio downloads, and users don't need to give any personal information.
      \ No newline at end of file diff --git a/spaces/gotiQspiryo/whisper-ui/examples/IMMO Universal Decoding LT 3.2 azerus limewere hous Free Download and Installation Guide.md b/spaces/gotiQspiryo/whisper-ui/examples/IMMO Universal Decoding LT 3.2 azerus limewere hous Free Download and Installation Guide.md deleted file mode 100644 index c5813d9e942b7f08638876cf75c72c2a57c31359..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/IMMO Universal Decoding LT 3.2 azerus limewere hous Free Download and Installation Guide.md +++ /dev/null @@ -1,6 +0,0 @@ -

      IMMO Universal Decoding LT 3.2 azerus limewere hous

      Download Zip 🔗 https://urlgoal.com/2uyMgx
      diff --git a/spaces/gradio/HuBERT/examples/speech_recognition/datasets/asr_prep_json.py b/spaces/gradio/HuBERT/examples/speech_recognition/datasets/asr_prep_json.py deleted file mode 100644 index b8db8ff16691158fae034a8ab3faad622b351caf..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/examples/speech_recognition/datasets/asr_prep_json.py +++ /dev/null @@ -1,125 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from __future__ import absolute_import, division, print_function, unicode_literals - -import argparse -import concurrent.futures -import json -import multiprocessing -import os -from collections import namedtuple -from itertools import chain - -import sentencepiece as spm -from fairseq.data import Dictionary - - -MILLISECONDS_TO_SECONDS = 0.001 - - -def process_sample(aud_path, lable, utt_id, sp, tgt_dict): - import torchaudio - - input = {} - output = {} - si, ei = torchaudio.info(aud_path) - input["length_ms"] = int( - si.length / si.channels / si.rate / MILLISECONDS_TO_SECONDS - ) - input["path"] = aud_path - - token = " ".join(sp.EncodeAsPieces(lable)) - ids = tgt_dict.encode_line(token, append_eos=False) - output["text"] = lable - output["token"] = token - output["tokenid"] = ", ".join(map(str, [t.tolist() for t in ids])) - return {utt_id: {"input": input, "output": output}} - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument( - "--audio-dirs", - nargs="+", - default=["-"], - required=True, - help="input directories with audio files", - ) - parser.add_argument( - "--labels", - required=True, - help="aggregated input labels with format per line", - type=argparse.FileType("r", encoding="UTF-8"), - ) - parser.add_argument( - "--spm-model", - required=True, - help="sentencepiece model to use for encoding", - type=argparse.FileType("r", encoding="UTF-8"), - ) - parser.add_argument( - "--dictionary", - required=True, - help="file to load fairseq dictionary from", - type=argparse.FileType("r", encoding="UTF-8"), - ) - parser.add_argument("--audio-format", choices=["flac", "wav"], default="wav") - parser.add_argument( - "--output", - required=True, - type=argparse.FileType("w"), - help="path to save json output", - ) - args = parser.parse_args() - - sp = spm.SentencePieceProcessor() - sp.Load(args.spm_model.name) - - tgt_dict = Dictionary.load(args.dictionary) - - labels = {} - for line in args.labels: - (utt_id, label) = line.split(" ", 1) - labels[utt_id] = label - if len(labels) == 0: - raise Exception("No labels found in ", args.labels_path) - - Sample = namedtuple("Sample", "aud_path utt_id") - samples = [] - for path, _, files in chain.from_iterable( - os.walk(path) for path in args.audio_dirs - ): - for f in files: - if f.endswith(args.audio_format): - if len(os.path.splitext(f)) != 2: - raise Exception("Expect file name. 
Got: ", f) - utt_id = os.path.splitext(f)[0] - if utt_id not in labels: - continue - samples.append(Sample(os.path.join(path, f), utt_id)) - - utts = {} - num_cpu = multiprocessing.cpu_count() - with concurrent.futures.ThreadPoolExecutor(max_workers=num_cpu) as executor: - future_to_sample = { - executor.submit( - process_sample, s.aud_path, labels[s.utt_id], s.utt_id, sp, tgt_dict - ): s - for s in samples - } - for future in concurrent.futures.as_completed(future_to_sample): - try: - data = future.result() - except Exception as exc: - print("generated an exception: ", exc) - else: - utts.update(data) - json.dump({"utts": utts}, args.output, indent=4) - - -if __name__ == "__main__": - main() diff --git a/spaces/gradio/HuBERT/fairseq/models/speech_to_text/__init__.py b/spaces/gradio/HuBERT/fairseq/models/speech_to_text/__init__.py deleted file mode 100644 index c6ae9b17ba37a228163fddcb6fed199e61ef02c8..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/models/speech_to_text/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from .berard import * # noqa -from .convtransformer import * # noqa -from .s2t_transformer import * # noqa diff --git a/spaces/gradio/same-person-or-different/app.py b/spaces/gradio/same-person-or-different/app.py deleted file mode 100644 index ef40fbc77f3f0382383de3545957f73a38f4818d..0000000000000000000000000000000000000000 --- a/spaces/gradio/same-person-or-different/app.py +++ /dev/null @@ -1,106 +0,0 @@ -import gradio as gr -import torch -from torchaudio.sox_effects import apply_effects_file -from transformers import AutoFeatureExtractor, AutoModelForAudioXVector -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - -OUTPUT_OK = ( - """ -
-    <div class="container">
-        <div class="row"><h1 style="text-align: center">The speakers are</h1></div>
-        <div class="row"><h1 style="text-align: center">{:.1f}%</h1></div>
-        <div class="row"><h1 style="text-align: center">similar</h1></div>
-        <div class="row"><h1 style="text-align: center">Welcome, human!</h1></div>
-        <div class="row"><small style="text-align: center">(You must get at least 85% to be considered the same person)</small></div>
-    </div>
      -""" -) -OUTPUT_FAIL = ( - """ -
      -

      The speakers are

      -

      {:.1f}%

      -

      similar

      -

      You shall not pass!

      -
      (You must get at least 85% to be considered the same person)
      -
      -""" -) - -EFFECTS = [ - ["remix", "-"], - ["channels", "1"], - ["rate", "16000"], - ["gain", "-1.0"], - ["silence", "1", "0.1", "0.1%", "-1", "0.1", "0.1%"], - ["trim", "0", "10"], -] - -THRESHOLD = 0.85 - -model_name = "microsoft/unispeech-sat-base-plus-sv" -feature_extractor = AutoFeatureExtractor.from_pretrained(model_name) -model = AutoModelForAudioXVector.from_pretrained(model_name).to(device) -cosine_sim = torch.nn.CosineSimilarity(dim=-1) - - -def similarity_fn(path1, path2): - if not (path1 and path2): - return 'ERROR: Please record audio for *both* speakers!' - - wav1, _ = apply_effects_file(path1, EFFECTS) - wav2, _ = apply_effects_file(path2, EFFECTS) - print(wav1.shape, wav2.shape) - - input1 = feature_extractor(wav1.squeeze(0), return_tensors="pt", sampling_rate=16000).input_values.to(device) - input2 = feature_extractor(wav2.squeeze(0), return_tensors="pt", sampling_rate=16000).input_values.to(device) - - with torch.no_grad(): - emb1 = model(input1).embeddings - emb2 = model(input2).embeddings - emb1 = torch.nn.functional.normalize(emb1, dim=-1).cpu() - emb2 = torch.nn.functional.normalize(emb2, dim=-1).cpu() - similarity = cosine_sim(emb1, emb2).numpy()[0] - - if similarity >= THRESHOLD: - output = OUTPUT_OK.format(similarity * 100) - else: - output = OUTPUT_FAIL.format(similarity * 100) - - return output - -inputs = [ - gr.inputs.Audio(source="microphone", type="filepath", optional=True, label="Speaker #1"), - gr.inputs.Audio(source="microphone", type="filepath", optional=True, label="Speaker #2"), -] -output = gr.outputs.HTML(label="") - - -description = ( - "This demo from Microsoft will compare two speech samples and determine if they are from the same speaker. " - "Try it with your own voice!" -) -article = ( - "

      " - "🎙️ Learn more about UniSpeech-SAT | " - "📚 UniSpeech-SAT paper | " - "📚 X-Vector paper" - "

      " -) -examples = [ - ["samples/cate_blanch.mp3", "samples/cate_blanch_2.mp3"], - ["samples/cate_blanch.mp3", "samples/heath_ledger.mp3"], -] - -interface = gr.Interface( - fn=similarity_fn, - inputs=inputs, - outputs=output, - layout="horizontal", - theme="huggingface", - allow_flagging=False, - live=False, - examples=examples, - cache_examples=False -) -interface.launch() diff --git a/spaces/gradio/xgboost-income-prediction-with-explainability/run.py b/spaces/gradio/xgboost-income-prediction-with-explainability/run.py deleted file mode 100644 index 27ef4a3f1de405ae7ee4e90b84c5cd209d694053..0000000000000000000000000000000000000000 --- a/spaces/gradio/xgboost-income-prediction-with-explainability/run.py +++ /dev/null @@ -1,163 +0,0 @@ -import gradio as gr -import random -import matplotlib.pyplot as plt -import pandas as pd -import shap -import xgboost as xgb -from datasets import load_dataset - - -dataset = load_dataset("scikit-learn/adult-census-income") -X_train = dataset["train"].to_pandas() -_ = X_train.pop("fnlwgt") -_ = X_train.pop("race") -y_train = X_train.pop("income") -y_train = (y_train == ">50K").astype(int) -categorical_columns = [ - "workclass", - "education", - "marital.status", - "occupation", - "relationship", - "sex", - "native.country", -] -X_train = X_train.astype({col: "category" for col in categorical_columns}) -data = xgb.DMatrix(X_train, label=y_train, enable_categorical=True) -model = xgb.train(params={"objective": "binary:logistic"}, dtrain=data) -explainer = shap.TreeExplainer(model) - -def predict(*args): - df = pd.DataFrame([args], columns=X_train.columns) - df = df.astype({col: "category" for col in categorical_columns}) - pos_pred = model.predict(xgb.DMatrix(df, enable_categorical=True)) - return {">50K": float(pos_pred[0]), "<=50K": 1 - float(pos_pred[0])} - - -def interpret(*args): - df = pd.DataFrame([args], columns=X_train.columns) - df = df.astype({col: "category" for col in categorical_columns}) - shap_values = explainer.shap_values(xgb.DMatrix(df, enable_categorical=True)) - scores_desc = list(zip(shap_values[0], X_train.columns)) - scores_desc = sorted(scores_desc) - fig_m = plt.figure(tight_layout=True) - plt.barh([s[1] for s in scores_desc], [s[0] for s in scores_desc]) - plt.title("Feature Shap Values") - plt.ylabel("Shap Value") - plt.xlabel("Feature") - plt.tight_layout() - return fig_m - - -unique_class = sorted(X_train["workclass"].unique()) -unique_education = sorted(X_train["education"].unique()) -unique_marital_status = sorted(X_train["marital.status"].unique()) -unique_relationship = sorted(X_train["relationship"].unique()) -unique_occupation = sorted(X_train["occupation"].unique()) -unique_sex = sorted(X_train["sex"].unique()) -unique_country = sorted(X_train["native.country"].unique()) - -with gr.Blocks() as demo: - gr.Markdown(""" - **Income Classification with XGBoost 💰**: This demo uses an XGBoost classifier predicts income based on demographic factors, along with Shapley value-based *explanations*. The [source code for this Gradio demo is here](https://huggingface.co/spaces/gradio/xgboost-income-prediction-with-explainability/blob/main/app.py). 
- """) - with gr.Row(): - with gr.Column(): - age = gr.Slider(label="Age", minimum=17, maximum=90, step=1, randomize=True) - work_class = gr.Dropdown( - label="Workclass", - choices=unique_class, - value=lambda: random.choice(unique_class), - ) - education = gr.Dropdown( - label="Education Level", - choices=unique_education, - value=lambda: random.choice(unique_education), - ) - years = gr.Slider( - label="Years of schooling", - minimum=1, - maximum=16, - step=1, - randomize=True, - ) - marital_status = gr.Dropdown( - label="Marital Status", - choices=unique_marital_status, - value=lambda: random.choice(unique_marital_status), - ) - occupation = gr.Dropdown( - label="Occupation", - choices=unique_occupation, - value=lambda: random.choice(unique_occupation), - ) - relationship = gr.Dropdown( - label="Relationship Status", - choices=unique_relationship, - value=lambda: random.choice(unique_relationship), - ) - sex = gr.Dropdown( - label="Sex", choices=unique_sex, value=lambda: random.choice(unique_sex) - ) - capital_gain = gr.Slider( - label="Capital Gain", - minimum=0, - maximum=100000, - step=500, - randomize=True, - ) - capital_loss = gr.Slider( - label="Capital Loss", minimum=0, maximum=10000, step=500, randomize=True - ) - hours_per_week = gr.Slider( - label="Hours Per Week Worked", minimum=1, maximum=99, step=1 - ) - country = gr.Dropdown( - label="Native Country", - choices=unique_country, - value=lambda: random.choice(unique_country), - ) - with gr.Column(): - label = gr.Label() - plot = gr.Plot() - with gr.Row(): - predict_btn = gr.Button(value="Predict") - interpret_btn = gr.Button(value="Explain") - predict_btn.click( - predict, - inputs=[ - age, - work_class, - education, - years, - marital_status, - occupation, - relationship, - sex, - capital_gain, - capital_loss, - hours_per_week, - country, - ], - outputs=[label], - ) - interpret_btn.click( - interpret, - inputs=[ - age, - work_class, - education, - years, - marital_status, - occupation, - relationship, - sex, - capital_gain, - capital_loss, - hours_per_week, - country, - ], - outputs=[plot], - ) - -demo.launch() diff --git a/spaces/gwang-kim/DATID-3D/eg3d/projector/w_plus_projector.py b/spaces/gwang-kim/DATID-3D/eg3d/projector/w_plus_projector.py deleted file mode 100644 index 46a8040cbb93637314c03c15061784900d993b40..0000000000000000000000000000000000000000 --- a/spaces/gwang-kim/DATID-3D/eg3d/projector/w_plus_projector.py +++ /dev/null @@ -1,182 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. 
- -"""Project given image to the latent space of pretrained network pickle.""" - -import copy -import os -import numpy as np -import torch -import torch.nn.functional as F -from tqdm import tqdm -import dnnlib -import PIL -from camera_utils import LookAtPoseSampler - -def project( - G, - c, - outdir, - target: torch.Tensor, # [C,H,W] and dynamic range [0,255], W & H must match G output resolution - *, - num_steps=1000, - w_avg_samples=10000, - initial_learning_rate=0.01, - initial_noise_factor=0.05, - lr_rampdown_length=0.25, - lr_rampup_length=0.05, - noise_ramp_length=0.75, - regularize_noise_weight=1e5, - verbose=False, - device: torch.device, - initial_w=None, - image_log_step=100, - w_name: str -): - os.makedirs(f'{outdir}/{w_name}_w_plus', exist_ok=True) - outdir = f'{outdir}/{w_name}_w_plus' - assert target.shape == (G.img_channels, G.img_resolution, G.img_resolution) - - def logprint(*args): - if verbose: - print(*args) - - G = copy.deepcopy(G).eval().requires_grad_(False).to(device).float() # type: ignore - - # Compute w stats. - w_avg_path = './w_avg.npy' - w_std_path = './w_std.npy' - if (not os.path.exists(w_avg_path)) or (not os.path.exists(w_std_path)): - print(f'Computing W midpoint and stddev using {w_avg_samples} samples...') - z_samples = np.random.RandomState(123).randn(w_avg_samples, G.z_dim) - # c_samples = c.repeat(w_avg_samples, 1) - - # use avg look at point - - camera_lookat_point = torch.tensor(G.rendering_kwargs['avg_camera_pivot'], device=device) - cam2world_pose = LookAtPoseSampler.sample(3.14 / 2, 3.14 / 2, camera_lookat_point, - radius=G.rendering_kwargs['avg_camera_radius'], device=device) - focal_length = 4.2647 # FFHQ's FOV - intrinsics = torch.tensor([[focal_length, 0, 0.5], [0, focal_length, 0.5], [0, 0, 1]], device=device) - c_samples = torch.cat([cam2world_pose.reshape(-1, 16), intrinsics.reshape(-1, 9)], 1) - c_samples = c_samples.repeat(w_avg_samples, 1) - - w_samples = G.mapping(torch.from_numpy(z_samples).to(device), c_samples) # [N, L, C] - w_samples = w_samples[:, :1, :].cpu().numpy().astype(np.float32) # [N, 1, C] - w_avg = np.mean(w_samples, axis=0, keepdims=True) # [1, 1, C] - # print('save w_avg to ./w_avg.npy') - # np.save('./w_avg.npy',w_avg) - w_avg_tensor = torch.from_numpy(w_avg).cuda() - w_std = (np.sum((w_samples - w_avg) ** 2) / w_avg_samples) ** 0.5 - - # np.save(w_avg_path, w_avg) - # np.save(w_std_path, w_std) - else: - # w_avg = np.load(w_avg_path) - # w_std = np.load(w_std_path) - raise Exception(' ') - - # z_samples = np.random.RandomState(123).randn(w_avg_samples, G.z_dim) - # c_samples = c.repeat(w_avg_samples, 1) - # w_samples = G.mapping(torch.from_numpy(z_samples).to(device), c_samples) # [N, L, C] - # w_samples = w_samples[:, :1, :].cpu().numpy().astype(np.float32) # [N, 1, C] - # w_avg = np.mean(w_samples, axis=0, keepdims=True) # [1, 1, C] - # w_avg_tensor = torch.from_numpy(w_avg).cuda() - # w_std = (np.sum((w_samples - w_avg) ** 2) / w_avg_samples) ** 0.5 - - start_w = initial_w if initial_w is not None else w_avg - - # Setup noise inputs. - noise_bufs = {name: buf for (name, buf) in G.backbone.synthesis.named_buffers() if 'noise_const' in name} - - # Load VGG16 feature detector. - url = 'https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/metrics/vgg16.pt' - # url = './networks/vgg16.pt' - with dnnlib.util.open_url(url) as f: - vgg16 = torch.jit.load(f).eval().to(device) - - # Features for target image. 
- target_images = target.unsqueeze(0).to(device).to(torch.float32) - if target_images.shape[2] > 256: - target_images = F.interpolate(target_images, size=(256, 256), mode='area') - target_features = vgg16(target_images, resize_images=False, return_lpips=True) - - start_w = np.repeat(start_w, G.backbone.mapping.num_ws, axis=1) - w_opt = torch.tensor(start_w, dtype=torch.float32, device=device, - requires_grad=True) # pylint: disable=not-callable - - optimizer = torch.optim.Adam([w_opt] + list(noise_bufs.values()), betas=(0.9, 0.999), - lr=0.1) - - # Init noise. - for buf in noise_bufs.values(): - buf[:] = torch.randn_like(buf) - buf.requires_grad = True - - for step in tqdm(range(num_steps), position=0, leave=True): - - # Learning rate schedule. - t = step / num_steps - w_noise_scale = w_std * initial_noise_factor * max(0.0, 1.0 - t / noise_ramp_length) ** 2 - lr_ramp = min(1.0, (1.0 - t) / lr_rampdown_length) - lr_ramp = 0.5 - 0.5 * np.cos(lr_ramp * np.pi) - lr_ramp = lr_ramp * min(1.0, t / lr_rampup_length) - lr = initial_learning_rate * lr_ramp - for param_group in optimizer.param_groups: - param_group['lr'] = lr - - # Synth images from opt_w. - w_noise = torch.randn_like(w_opt) * w_noise_scale - ws = (w_opt + w_noise) - synth_images = G.synthesis(ws,c, noise_mode='const')['image'] - - if step % image_log_step == 0: - with torch.no_grad(): - vis_img = (synth_images.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, 255).to(torch.uint8) - - PIL.Image.fromarray(vis_img[0].cpu().numpy(), 'RGB').save(f'{outdir}/{step}.png') - - # Downsample image to 256x256 if it's larger than that. VGG was built for 224x224 images. - synth_images = (synth_images + 1) * (255 / 2) - if synth_images.shape[2] > 256: - synth_images = F.interpolate(synth_images, size=(256, 256), mode='area') - - # Features for synth images. - synth_features = vgg16(synth_images, resize_images=False, return_lpips=True) - dist = (target_features - synth_features).square().sum() - - # Noise regularization. - reg_loss = 0.0 - for v in noise_bufs.values(): - noise = v[None, None, :, :] # must be [1,1,H,W] for F.avg_pool2d() - while True: - reg_loss += (noise * torch.roll(noise, shifts=1, dims=3)).mean() ** 2 - reg_loss += (noise * torch.roll(noise, shifts=1, dims=2)).mean() ** 2 - if noise.shape[2] <= 8: - break - noise = F.avg_pool2d(noise, kernel_size=2) - loss = dist + reg_loss * regularize_noise_weight - - # if step % 10 == 0: - # with torch.no_grad(): - # print({f'step {step}, first projection _{w_name}': loss.detach().cpu()}) - - # Step - optimizer.zero_grad(set_to_none=True) - loss.backward() - optimizer.step() - logprint(f'step {step + 1:>4d}/{num_steps}: dist {dist:<4.2f} loss {float(loss):<5.2f}') - - # Normalize noise. 
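-        # Renormalize each learned noise map back to zero mean and unit variance,
-        # as in the StyleGAN2-ADA projector this file adapts, so the noise inputs
-        # keep the statistics the synthesis network expects.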
- with torch.no_grad(): - for buf in noise_bufs.values(): - buf -= buf.mean() - buf *= buf.square().mean().rsqrt() - - del G - return w_opt diff --git a/spaces/gypq/gypq3/README.md b/spaces/gypq/gypq3/README.md deleted file mode 100644 index 7ea1f5997fc9dcc7e3356254e64703782310788d..0000000000000000000000000000000000000000 --- a/spaces/gypq/gypq3/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Gypq3 -emoji: 🐨 -colorFrom: purple -colorTo: yellow -sdk: docker -pinned: false -license: mit -app_port: 8080 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/h2oai/h2o_wave_whisper/utils.py b/spaces/h2oai/h2o_wave_whisper/utils.py deleted file mode 100644 index 120ea8b463bc262ac6341d3c379d45495a887822..0000000000000000000000000000000000000000 --- a/spaces/h2oai/h2o_wave_whisper/utils.py +++ /dev/null @@ -1,9 +0,0 @@ -from h2o_wave import ui - - -def get_inline_script(text: str) -> ui.InlineScript: - """ - Get Wave's Inline Script. - """ - - return ui.inline_script(text) diff --git a/spaces/hamacojr/SAM-CAT-Seg/cat_seg/utils/misc.py b/spaces/hamacojr/SAM-CAT-Seg/cat_seg/utils/misc.py deleted file mode 100644 index 874d9805b482f52bbffc1be620e36e0cffc07c46..0000000000000000000000000000000000000000 --- a/spaces/hamacojr/SAM-CAT-Seg/cat_seg/utils/misc.py +++ /dev/null @@ -1,111 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# Modified by Bowen Cheng from https://github.com/facebookresearch/detr/blob/master/util/misc.py -""" -Misc functions, including distributed helpers. - -Mostly copy-paste from torchvision references. -""" -from typing import List, Optional - -import torch -import torch.distributed as dist -import torchvision -from torch import Tensor - - -def _max_by_axis(the_list): - # type: (List[List[int]]) -> List[int] - maxes = the_list[0] - for sublist in the_list[1:]: - for index, item in enumerate(sublist): - maxes[index] = max(maxes[index], item) - return maxes - - -class NestedTensor(object): - def __init__(self, tensors, mask: Optional[Tensor]): - self.tensors = tensors - self.mask = mask - - def to(self, device): - # type: (Device) -> NestedTensor # noqa - cast_tensor = self.tensors.to(device) - mask = self.mask - if mask is not None: - assert mask is not None - cast_mask = mask.to(device) - else: - cast_mask = None - return NestedTensor(cast_tensor, cast_mask) - - def decompose(self): - return self.tensors, self.mask - - def __repr__(self): - return str(self.tensors) - - -def nested_tensor_from_tensor_list(tensor_list: List[Tensor]): - # TODO make this more general - if tensor_list[0].ndim == 3: - if torchvision._is_tracing(): - # nested_tensor_from_tensor_list() does not export well to ONNX - # call _onnx_nested_tensor_from_tensor_list() instead - return _onnx_nested_tensor_from_tensor_list(tensor_list) - - # TODO make it support different-sized images - max_size = _max_by_axis([list(img.shape) for img in tensor_list]) - # min_size = tuple(min(s) for s in zip(*[img.shape for img in tensor_list])) - batch_shape = [len(tensor_list)] + max_size - b, c, h, w = batch_shape - dtype = tensor_list[0].dtype - device = tensor_list[0].device - tensor = torch.zeros(batch_shape, dtype=dtype, device=device) - mask = torch.ones((b, h, w), dtype=torch.bool, device=device) - for img, pad_img, m in zip(tensor_list, tensor, mask): - pad_img[: img.shape[0], : img.shape[1], : img.shape[2]].copy_(img) - m[: img.shape[1], : img.shape[2]] = False - else: - raise ValueError("not supported") 
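-    # `tensor` now holds the zero-padded batch and `mask` is True exactly at the
-    # padded positions, so downstream modules can ignore them.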
-    return NestedTensor(tensor, mask)
-
-
-# _onnx_nested_tensor_from_tensor_list() is an implementation of
-# nested_tensor_from_tensor_list() that is supported by ONNX tracing.
-@torch.jit.unused
-def _onnx_nested_tensor_from_tensor_list(tensor_list: List[Tensor]) -> NestedTensor:
-    max_size = []
-    for i in range(tensor_list[0].dim()):
-        max_size_i = torch.max(
-            torch.stack([img.shape[i] for img in tensor_list]).to(torch.float32)
-        ).to(torch.int64)
-        max_size.append(max_size_i)
-    max_size = tuple(max_size)
-
-    # work around for
-    # pad_img[: img.shape[0], : img.shape[1], : img.shape[2]].copy_(img)
-    # m[: img.shape[1], :img.shape[2]] = False
-    # which is not yet supported in onnx
-    padded_imgs = []
-    padded_masks = []
-    for img in tensor_list:
-        padding = [(s1 - s2) for s1, s2 in zip(max_size, tuple(img.shape))]
-        padded_img = torch.nn.functional.pad(img, (0, padding[2], 0, padding[1], 0, padding[0]))
-        padded_imgs.append(padded_img)
-
-        m = torch.zeros_like(img[0], dtype=torch.int, device=img.device)
-        padded_mask = torch.nn.functional.pad(m, (0, padding[2], 0, padding[1]), "constant", 1)
-        padded_masks.append(padded_mask.to(torch.bool))
-
-    tensor = torch.stack(padded_imgs)
-    mask = torch.stack(padded_masks)
-
-    return NestedTensor(tensor, mask=mask)
-
-
-def is_dist_avail_and_initialized():
-    if not dist.is_available():
-        return False
-    if not dist.is_initialized():
-        return False
-    return True
diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/data/samplers/__init__.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/data/samplers/__init__.py
deleted file mode 100644
index 9cfa8a65259a850b8259016d482a0eac1bbafb38..0000000000000000000000000000000000000000
--- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/data/samplers/__init__.py
+++ /dev/null
@@ -1,10 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-from .distributed_sampler import InferenceSampler, RepeatFactorTrainingSampler, TrainingSampler
-from .grouped_batch_sampler import GroupedBatchSampler
-
-__all__ = [
-    "GroupedBatchSampler",
-    "TrainingSampler",
-    "InferenceSampler",
-    "RepeatFactorTrainingSampler",
-]
diff --git a/spaces/hasibzunair/masksup-segmentation-demo/description.html b/spaces/hasibzunair/masksup-segmentation-demo/description.html
deleted file mode 100644
index 45aba8e0dfabaf2b03c1a8107180ecda250a0499..0000000000000000000000000000000000000000
--- a/spaces/hasibzunair/masksup-segmentation-demo/description.html
+++ /dev/null
@@ -1,10 +0,0 @@
-<!DOCTYPE html>
-<html>
-<head>
-    <title>Title</title>
-</head>
-<body>
-    This is a demo of our BMVC'2022 Oral paper Masked Supervised Learning for Semantic Segmentation.
-</body>
-</html>
\ No newline at end of file
diff --git "a/spaces/hbestm/gpt-academic-play/crazy_functions/\347\220\206\350\247\243PDF\346\226\207\346\241\243\345\206\205\345\256\271.py" "b/spaces/hbestm/gpt-academic-play/crazy_functions/\347\220\206\350\247\243PDF\346\226\207\346\241\243\345\206\205\345\256\271.py"
deleted file mode 100644
index 505086455af8d2676055ab084cf97058b954c7d5..0000000000000000000000000000000000000000
--- "a/spaces/hbestm/gpt-academic-play/crazy_functions/\347\220\206\350\247\243PDF\346\226\207\346\241\243\345\206\205\345\256\271.py"
+++ /dev/null
@@ -1,112 +0,0 @@
-from toolbox import update_ui
-from toolbox import CatchException, report_execption
-from .crazy_utils import read_and_clean_pdf_text
-from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
-fast_debug = False
-
-
-def 解析PDF(file_name, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt):
-    import tiktoken
-    print('begin analysis on:', file_name)
-
-    ############################## <Step 0: split the PDF> ##################################
-    # Recursively split the PDF file so that each chunk (ideally a complete section,
-    # e.g. introduction or experiments, split further only when necessary)
-    # is shorter than 2500 tokens
-    file_content, page_one = read_and_clean_pdf_text(file_name) # (try to) split the PDF by section
-
-    TOKEN_LIMIT_PER_FRAGMENT = 2500
-
-    from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
-    from request_llm.bridge_all import model_info
-    enc = model_info["gpt-3.5-turbo"]['tokenizer']
-    def get_token_num(txt): return len(enc.encode(txt, disallowed_special=()))
-    paper_fragments = breakdown_txt_to_satisfy_token_limit_for_pdf(
-        txt=file_content, get_token_fn=get_token_num, limit=TOKEN_LIMIT_PER_FRAGMENT)
-    page_one_fragments = breakdown_txt_to_satisfy_token_limit_for_pdf(
-        txt=str(page_one), get_token_fn=get_token_num, limit=TOKEN_LIMIT_PER_FRAGMENT//4)
-    # For better results, strip everything after the Introduction (if present)
-    paper_meta = page_one_fragments[0].split('introduction')[0].split('Introduction')[0].split('INTRODUCTION')[0]
-
-    ############################## <Step 1: extract high-value information from the abstract into history> ##################################
-    final_results = []
-    final_results.append(paper_meta)
-
-    ############################## <Step 2: iterate over the whole paper and distill it> ##################################
-    i_say_show_user = f'First, read the whole paper in its English context.'; gpt_say = "[Local Message] Received." # user-facing prompt
-    chatbot.append([i_say_show_user, gpt_say]); yield from update_ui(chatbot=chatbot, history=[]) # refresh the UI
-
-    iteration_results = []
-    last_iteration_result = paper_meta # the initial value is the abstract
-    MAX_WORD_TOTAL = 4096
-    n_fragment = len(paper_fragments)
-    if n_fragment >= 20: print('The paper is extremely long; the expected results may not be achieved')
-    for i in range(n_fragment):
-        NUM_OF_WORD = MAX_WORD_TOTAL // n_fragment
-        i_say = f"Read this section, recapitulate the content of this section with less than {NUM_OF_WORD} words: {paper_fragments[i]}"
-        i_say_show_user = f"[{i+1}/{n_fragment}] Read this section, recapitulate the content of this section with less than {NUM_OF_WORD} words: {paper_fragments[i][:200]}"
-        gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(i_say, i_say_show_user, # i_say = the actual question sent to ChatGPT; i_say_show_user = the question shown to the user
-                                                                           llm_kwargs, chatbot,
-                                                                           history=["The main idea of the previous section is?", last_iteration_result], # iterate on the previous result
-                                                                           sys_prompt="Extract the main idea of this section."
# prompt
-                                                                           )
-        iteration_results.append(gpt_say)
-        last_iteration_result = gpt_say
-
-    ############################## <Step 3: organize the history> ##################################
-    final_results.extend(iteration_results)
-    final_results.append(f'Next, you are a professional academic professor. Use the information above to answer my questions in Chinese.')
-    # The next two messages are only shown in the UI and have no actual effect
-    i_say_show_user = f'Next, you are a professional academic professor. Use the information above to answer my questions in Chinese.'; gpt_say = "[Local Message] Received."
-    chatbot.append([i_say_show_user, gpt_say])
-
-    ############################## <Step 4: set a token cap to prevent token overflow while answering> ##################################
-    from .crazy_utils import input_clipping
-    _, final_results = input_clipping("", final_results, max_token_limit=3200)
-    yield from update_ui(chatbot=chatbot, history=final_results) # note that the history is replaced here
-
-
-@CatchException
-def 理解PDF文档内容标准文件输入(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
-    import glob, os
-
-    # Basic info: what the plugin does and its contributors
-    chatbot.append([
-        "What does this plugin do?",
-        "Understand the content of a PDF paper and give academic answers grounded in its context. Plugin contributors: Hanzoe, binary-husky"])
-    yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
-
-    # Try to import dependencies; if any are missing, suggest how to install them
-    try:
-        import fitz
-    except:
-        report_execption(chatbot, history,
-                         a = f"Parse project: {txt}",
-                         b = f"Failed to import software dependencies. This module requires extra dependencies; install them with ```pip install --upgrade pymupdf```.")
-        yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
-        return
-
-    # Clear the history to avoid overflowing the input
-    history = []
-
-    # Validate the input argument; exit early if none was given
-    if os.path.exists(txt):
-        project_folder = txt
-    else:
-        if txt == "":
-            txt = 'empty input field'
-        report_execption(chatbot, history,
-                         a=f"Parse project: {txt}", b=f"Cannot find the local project or access is denied: {txt}")
-        yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
-        return
-
-    # Build the list of files to process
-    file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.pdf', recursive=True)]
-    # If no files were found
-    if len(file_manifest) == 0:
-        report_execption(chatbot, history,
-                         a=f"Parse project: {txt}", b=f"Cannot find any .tex or .pdf files: {txt}")
-        yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
-        return
-    txt = file_manifest[0]
-    # Start the actual task
-    yield from 解析PDF(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
diff --git a/spaces/huggingface-projects/llama-2-13b-chat/app.py b/spaces/huggingface-projects/llama-2-13b-chat/app.py
deleted file mode 100644
index 7c8eb0f6703bb52948d66be311a07b7ce0d79534..0000000000000000000000000000000000000000
--- a/spaces/huggingface-projects/llama-2-13b-chat/app.py
+++ /dev/null
@@ -1,147 +0,0 @@
-import os
-from threading import Thread
-from typing import Iterator
-
-import gradio as gr
-import spaces
-import torch
-from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer
-
-MAX_MAX_NEW_TOKENS = 2048
-DEFAULT_MAX_NEW_TOKENS = 1024
-MAX_INPUT_TOKEN_LENGTH = int(os.getenv("MAX_INPUT_TOKEN_LENGTH", "4096"))
-
-DESCRIPTION = """\
-# Llama-2 13B Chat
-
-This Space demonstrates model [Llama-2-13b-chat](https://huggingface.co/meta-llama/Llama-2-13b-chat) by Meta, a Llama 2 model with 13B parameters fine-tuned for chat instructions. Feel free to play with it, or duplicate to run generations without a queue! If you want to run your own service, you can also [deploy the model on Inference Endpoints](https://huggingface.co/inference-endpoints).
-
-🔎 For more details about the Llama 2 family of models and how to use them with `transformers`, take a look [at our blog post](https://huggingface.co/blog/llama2).
-
-🔨 Looking for an even more powerful model? Check out the large [**70B** model demo](https://huggingface.co/spaces/ysharma/Explore_llamav2_with_TGI).
-🐇 For a smaller model that you can run on many GPUs, check our [7B model demo](https://huggingface.co/spaces/huggingface-projects/llama-2-7b-chat).
-
-"""
-
-LICENSE = """
-<p/>
-
----
-As a derivative work of [Llama-2-13b-chat](https://huggingface.co/meta-llama/Llama-2-13b-chat) by Meta,
-this demo is governed by the original [license](https://huggingface.co/spaces/huggingface-projects/llama-2-13b-chat/blob/main/LICENSE.txt) and [acceptable use policy](https://huggingface.co/spaces/huggingface-projects/llama-2-13b-chat/blob/main/USE_POLICY.md).
-"""
-
-if not torch.cuda.is_available():
-    DESCRIPTION += "\n<p>Running on CPU 🥶 This demo does not work on CPU.</p>
      " - - -if torch.cuda.is_available(): - model_id = "meta-llama/Llama-2-13b-chat-hf" - model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", load_in_4bit=True) - tokenizer = AutoTokenizer.from_pretrained(model_id) - tokenizer.use_default_system_prompt = False - - -@spaces.GPU -def generate( - message: str, - chat_history: list[tuple[str, str]], - system_prompt: str, - max_new_tokens: int = 1024, - temperature: float = 0.6, - top_p: float = 0.9, - top_k: int = 50, - repetition_penalty: float = 1.2, -) -> Iterator[str]: - conversation = [] - if system_prompt: - conversation.append({"role": "system", "content": system_prompt}) - for user, assistant in chat_history: - conversation.extend([{"role": "user", "content": user}, {"role": "assistant", "content": assistant}]) - conversation.append({"role": "user", "content": message}) - - input_ids = tokenizer.apply_chat_template(conversation, return_tensors="pt") - if input_ids.shape[1] > MAX_INPUT_TOKEN_LENGTH: - input_ids = input_ids[:, -MAX_INPUT_TOKEN_LENGTH:] - gr.Warning(f"Trimmed input from conversation as it was longer than {MAX_INPUT_TOKEN_LENGTH} tokens.") - input_ids = input_ids.to(model.device) - - streamer = TextIteratorStreamer(tokenizer, timeout=10.0, skip_prompt=True, skip_special_tokens=True) - generate_kwargs = dict( - {"input_ids": input_ids}, - streamer=streamer, - max_new_tokens=max_new_tokens, - do_sample=True, - top_p=top_p, - top_k=top_k, - temperature=temperature, - num_beams=1, - repetition_penalty=repetition_penalty, - ) - t = Thread(target=model.generate, kwargs=generate_kwargs) - t.start() - - outputs = [] - for text in streamer: - outputs.append(text) - yield "".join(outputs) - - -chat_interface = gr.ChatInterface( - fn=generate, - additional_inputs=[ - gr.Textbox(label="System prompt", lines=6), - gr.Slider( - label="Max new tokens", - minimum=1, - maximum=MAX_MAX_NEW_TOKENS, - step=1, - value=DEFAULT_MAX_NEW_TOKENS, - ), - gr.Slider( - label="Temperature", - minimum=0.1, - maximum=4.0, - step=0.1, - value=0.6, - ), - gr.Slider( - label="Top-p (nucleus sampling)", - minimum=0.05, - maximum=1.0, - step=0.05, - value=0.9, - ), - gr.Slider( - label="Top-k", - minimum=1, - maximum=1000, - step=1, - value=50, - ), - gr.Slider( - label="Repetition penalty", - minimum=1.0, - maximum=2.0, - step=0.05, - value=1.2, - ), - ], - stop_btn=None, - examples=[ - ["Hello there! How are you doing?"], - ["Can you explain briefly to me what is the Python programming language?"], - ["Explain the plot of Cinderella in a sentence."], - ["How many hours does it take a man to eat a Helicopter?"], - ["Write a 100-word article on 'Benefits of Open-Source in AI research'"], - ], -) - -with gr.Blocks(css="style.css") as demo: - gr.Markdown(DESCRIPTION) - gr.DuplicateButton(value="Duplicate Space for private use", elem_id="duplicate-button") - chat_interface.render() - gr.Markdown(LICENSE) - -if __name__ == "__main__": - demo.queue(max_size=20).launch() diff --git a/spaces/hunkim/es-gpt/Dockerfile b/spaces/hunkim/es-gpt/Dockerfile deleted file mode 100644 index 4a5a821629c9a08569f0e83004405a13032cd177..0000000000000000000000000000000000000000 --- a/spaces/hunkim/es-gpt/Dockerfile +++ /dev/null @@ -1,11 +0,0 @@ -FROM python:3.9 - -WORKDIR /code - -COPY ./requirements.txt /code/requirements.txt - -RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt - -COPY . . 
- -CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "7860"] diff --git a/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/docs/eval.md b/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/docs/eval.md deleted file mode 100644 index 9ce1621357c03ee8a25c004e5f01850990df1628..0000000000000000000000000000000000000000 --- a/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/docs/eval.md +++ /dev/null @@ -1,43 +0,0 @@ -## Eval on ICCV2021-MFR - -coming soon. - - -## Eval IJBC -You can eval ijbc with pytorch or onnx. - - -1. Eval IJBC With Onnx -```shell -CUDA_VISIBLE_DEVICES=0 python onnx_ijbc.py --model-root ms1mv3_arcface_r50 --image-path IJB_release/IJBC --result-dir ms1mv3_arcface_r50 -``` - -2. Eval IJBC With Pytorch -```shell -CUDA_VISIBLE_DEVICES=0,1 python eval_ijbc.py \ ---model-prefix ms1mv3_arcface_r50/backbone.pth \ ---image-path IJB_release/IJBC \ ---result-dir ms1mv3_arcface_r50 \ ---batch-size 128 \ ---job ms1mv3_arcface_r50 \ ---target IJBC \ ---network iresnet50 -``` - - -## Inference - -```shell -python inference.py --weight ms1mv3_arcface_r50/backbone.pth --network r50 -``` - - -## Result - -| Datasets | Backbone | **MFR-ALL** | IJB-C(1E-4) | IJB-C(1E-5) | -|:---------------|:--------------------|:------------|:------------|:------------| -| WF12M-PFC-0.05 | r100 | 94.05 | 97.51 | 95.75 | -| WF12M-PFC-0.1 | r100 | 94.49 | 97.56 | 95.92 | -| WF12M-PFC-0.2 | r100 | 94.75 | 97.60 | 95.90 | -| WF12M-PFC-0.3 | r100 | 94.71 | 97.64 | 96.01 | -| WF12M | r100 | 94.69 | 97.59 | 95.97 | \ No newline at end of file diff --git a/spaces/hyxue/HiFiFace-inference-demo/configs/train_config.py b/spaces/hyxue/HiFiFace-inference-demo/configs/train_config.py deleted file mode 100644 index b7d91ba5b2a9d9a22d4b1d3118c4a2fa0729d8f6..0000000000000000000000000000000000000000 --- a/spaces/hyxue/HiFiFace-inference-demo/configs/train_config.py +++ /dev/null @@ -1,56 +0,0 @@ -import os -import time -from dataclasses import dataclass - -from configs.mode import FaceSwapMode -from configs.singleton import Singleton - - -@Singleton -@dataclass -class TrainConfig: - mode = FaceSwapMode.MANY_TO_MANY - source_name: str = "" - - dataset_index: str = "/data/dataset/faceswap/full.pkl" - dataset_root: str = "/data/dataset/faceswap" - - batch_size: int = 8 - num_threads: int = 8 - same_rate: float = 0.5 - lr: float = 5e-5 - grad_clip: float = 1000.0 - - use_ddp: bool = True - - mouth_mask: bool = True - eye_hm_loss: bool = False - mouth_hm_loss: bool = False - - load_checkpoint = None # ("/data/checkpoints/hififace/rebuilt_discriminator_SFF_c256_1683367464544", 400000) - - identity_extractor_config = { - "f_3d_checkpoint_path": "/checkpoints/Deep3DFaceRecon/epoch_20_new.pth", - "f_id_checkpoint_path": "/checkpoints/arcface/ms1mv3_arcface_r100_fp16_backbone.pth", - "bfm_folder": "/checkpoints/useful_ckpt/BFM", - "hrnet_path": "/checkpoints/useful_ckpt/face_98lmks/HR18-WFLW.pth", - } - - visualize_interval: int = 100 - plot_interval: int = 100 - max_iters: int = 1000000 - checkpoint_interval: int = 40000 - - exp_name: str = "exp_base" - log_basedir: str = "/data/logs/hififace/" - checkpoint_basedir = "/data/checkpoints/hififace" - - def __post_init__(self): - time_stamp = int(time.time() * 1000) - self.log_dir = os.path.join(self.log_basedir, f"{self.exp_name}_{time_stamp}") - self.checkpoint_dir = os.path.join(self.checkpoint_basedir, f"{self.exp_name}_{time_stamp}") - - -if __name__ == "__main__": - tc = 
-    print(tc.log_dir)
diff --git a/spaces/hzwluoye/gpt4/client/js/sidebar-toggler.js b/spaces/hzwluoye/gpt4/client/js/sidebar-toggler.js
deleted file mode 100644
index b23f94e3bfba5bac53432e1b557765736dabbab4..0000000000000000000000000000000000000000
--- a/spaces/hzwluoye/gpt4/client/js/sidebar-toggler.js
+++ /dev/null
@@ -1,34 +0,0 @@
-const sidebar = document.querySelector(".sidebar");
-const menuButton = document.querySelector(".menu-button");
-
-function toggleSidebar(event) {
-    if (sidebar.classList.contains("shown")) {
-        hideSidebar(event.target);
-    } else {
-        showSidebar(event.target);
-    }
-    window.scrollTo(0, 0);
-}
-
-function showSidebar(target) {
-    sidebar.classList.add("shown");
-    target.classList.add("rotated");
-    document.body.style.overflow = "hidden";
-}
-
-function hideSidebar(target) {
-    sidebar.classList.remove("shown");
-    target.classList.remove("rotated");
-    document.body.style.overflow = "auto";
-}
-
-menuButton.addEventListener("click", toggleSidebar);
-
-document.body.addEventListener('click', function(event) {
-    if (event.target.matches('.conversation-title')) {
-        const menuButtonStyle = window.getComputedStyle(menuButton);
-        if (menuButtonStyle.display !== 'none') {
-            hideSidebar(menuButton);
-        }
-    }
-});
diff --git a/spaces/hzy123/bingo/src/components/ui/textarea.tsx b/spaces/hzy123/bingo/src/components/ui/textarea.tsx
deleted file mode 100644
index e25af722c7a5dc1121a9ab58d6716952f9f76081..0000000000000000000000000000000000000000
--- a/spaces/hzy123/bingo/src/components/ui/textarea.tsx
+++ /dev/null
@@ -1,24 +0,0 @@
-import * as React from 'react'
-
-import { cn } from '@/lib/utils'
-
-export interface TextareaProps
-  extends React.TextareaHTMLAttributes<HTMLTextAreaElement> {}
-
-const Textarea = React.forwardRef<HTMLTextAreaElement, TextareaProps>(
-  ({ className, ...props }, ref) => {
-    return (